Integrity Icon (II) is Accountability Lab's flagship program aimed at 'naming and faming' honest civil servants who are pushing for good governance reforms in systems widely known for corruption. Now in its fifth year, the campaign fosters long-term shifts in narratives and works to change norms through public engagement. It also seeks to drive process and policy change by supporting and connecting dedicated civil servants. Each year, five Integrity Icons are chosen through public nominations and organisational vetting; celebrated through national and social media; and awarded at a final celebration attended by local and international stakeholders. The Lab's team then works with them to use the trust and credibility generated to push for reforms.

Recently, Integrity Icon reached a point at which Lab staff began to see impact, but wanted to better understand the changes the campaign was creating. In partnership with Open Society Foundations, we undertook a rigorous impact evaluation (read our report here), using a method called Contribution Tracing to unpack our assumptions around Integrity Icon's impact in Liberia, where the program has run annually since 2015. Contribution Tracing is a cutting-edge methodology undertaken by very few organizations to date. The study was an opportunity to learn internally, innovate in impact measurement and contribute to the knowledge base within the governance space more broadly.

Contribution Tracing (CT) is a non-experimental method that measures an intervention's contribution to an outcome after the outcome has been observed (i.e. the change needs to have already taken place). For an evaluation of Integrity Icon, CT was an appealing option because it allows for the use of various types of evidence, and offers the additional benefit of surfacing gaps in an organization's data collection along the way. The rigor of the methodology also makes it a robust choice for campaigns like Integrity Icon, where evidence is often more qualitative.

What were we trying to measure?

This type of study centers on evaluating the extent to which a claim about an intervention can be proven. After extensive staff consultations on the believed impact of the program to date, the following claim was selected as the basis of the study:

The Integrity Icon campaign has further enabled Icons to implement new/improved rules, practices, or processes for good governance in their workplace.

In essence, Lab staff believed that Icons were able to better advocate for positive changes in their agencies after the recognition and support that came with participating in the program, but without an evaluation there was no way to directly attribute this impact to the campaign. Our theory, or contribution claim, posited that a series of actions related to the campaign (some ours, but many more on the part of the Icons) was leading to the reforms and changes (however small) that we were observing within the Icons' institutions. The CT method enabled us to follow this entire claim through to the end, utilizing a variety of kinds of evidence to assess each piece (or component, in CT language) of the claim and understand (a) whether it was happening as we expected, and (b) to what extent.

What did we find, and why does this matter?

Tracing three Icons from Liberia, we are able to assert with greater confidence that Icons displayed increased motivation brought about by the public acknowledgement and celebration associated with the campaign. Based on the cases we investigated, this motivation can lead to an increased drive to advocate for positive changes in the workplace. Additionally, some Icons made use of support offered by Accountability Lab staff in the form of advice and exposure to a larger network, which broadened their sphere of influence. Importantly, the study also found that Icons have unique experiences: while some benefit from and make use of staff support, the sheer motivation gained through publicity is enough to drive others to push for reforms, even in the absence of ongoing Lab support.

Finding cases where Icons have brought about positive changes in their agency or sector after participating in the campaign serves as proof of the program's impact. Beyond that, the findings on how different Icons are affected by the campaign provide guidance on how to improve the work and enhance its impact over time. Understanding how Integrity Icon acts as a lever for change is incredibly useful for norm-shifting efforts more broadly.


We now know that:

  • We can now say, with greater certainty, that the Integrity Icon campaign further enables civil servants with integrity who gain recognition to implement new/improved rules, practices, or processes for good governance in their workplaces.
  • Wide publicity around the campaign is crucial, and teams will continue to build on this. Additionally, drawing networks from the Icons’ departments and sectors into celebratory events can be a benefit, as it may provide leverage with powerholders.
  • There are benefits to providing ongoing support to Icons, beyond their participation in the campaign. In addition to individual support from the Lab, building a network of Icons seems to add to their motivation, as acting with integrity in a system fraught with corruption can be isolating.
  • We started studying the three Icons as one unit, but soon realized that the campaign had a different impact on each Icon's trajectory, and that each of them was affected by different parts of the intervention in different ways.

The evaluation team found evidence linking the following factors to components of the claim in all three cases:

  • Icons displayed a heightened sense of motivation after participation in the program;
  • Icons were driven to strengthen accountability practices and integrity in their agencies;
  • They received recognition as individuals with integrity by peers and supervisors within their sectors;
  • External media outlets provided further recognition;
  • Networks were expanded through introduction to new actors;
  • Changes related to good governance occurred in the agency that could be attributed to the Icon.

Furthermore, in some, but not all, cases we found evidence of support from Accountability Lab that strengthened the Icons' integrity missions.

The Icons' ability to effect change without AL support led the evaluation team to conclude that this factor is not a prerequisite, though we still have many questions about what this means for if and how we structure ongoing support to Icons.

What we took away, beyond the findings

Collecting data throughout the CT process provided an opportunity to think critically about the Lab’s MEL practices, and helped us identify gaps in crucial areas that could hinder our ability to trace the impact of programs based on actual results rather than forecasts. While we were aware of weaker areas in our system, the study confirmed that:

Learning capacity is crucial. We are committed to adaptive learning and rigorous review of our work, but we need both human and financial resources to make this possible. Our small team, even with generous staff support from OSF, struggled to prioritize the study and maintain focus due to ever-competing demands on their time. We highly recommend that any organization interested in the methodology thoroughly assess its ability to dedicate staff time to it, or make use of an external evaluator if funds are available. To work effectively, CT also requires a core group of team members who can learn and understand a new methodology quite rapidly.

Context matters. A big part of this study involved upfront planning around which data the team would collect to evaluate the claim, followed by the data collection itself. The process revealed a number of challenges around program data that was assumed to have been collected and stored, but that, within the country context, didn't necessarily exist. Additionally, language barriers proved challenging during primary data collection, as most Liberians speak Koloqua, or Liberian Kreyol, which can be difficult to understand for researchers not familiar with the dialect. We recommend planning for contextually relevant data collection and sharing your expectations with team members upfront. Planning for language barriers from the outset is also really important, as they have a knock-on effect on time and resources.

Internal learning can outweigh the public output. The biggest benefit of conducting the study may have been gaining clarity on the Lab's in-country MEL practices and a much better understanding of our own assumptions around the impact of this program, particularly the elements that actually drive change. This is helping us improve the types of data we collect and how we collect them, and is supporting adaptations in the Integrity Icon program to shift emphasis to areas that are levers for greater impact across all country teams.



This evaluation was made possible by the generous support of Open Society Foundations. We'd like to thank Megan Colnar, Jay Locke and Shreya Murali for their time and support. Our sincere gratitude goes to Gavin Stedman-Bryce for providing invaluable training, guidance and quality assurance throughout the process. Last but not least, we'd like to acknowledge and thank the Icons, and their colleagues and supervisors, for voluntarily participating in this study.