At the Accountability Lab we have recently been running a series of “learning about learning” discussions to understand how we, as a community, can get better at adaptive learning. We want to highlight practical ways to internalize and build on lessons around what works and what does not, and to support open conversations around impact (read recent blogs from our impact survey process and learning in Pakistan).
In Monrovia last week, we brought together a group of donors, civil society representatives and government officials to discuss these issues. In Liberia, as in other places, we had the sense that people like to talk about what they do and what they’ve learned, rather than how they have learned; and that this reflects the fact that development actors are not always rewarded for rigorous self-reflection. Yet it is this process of trying, analyzing and adapting that presents the greatest opportunity for shared learning and growth in the accountability sector. Key takeaways from the discussion included:
First, begin with problem-led program development. This is essential to supporting programs that work. It may sound simple, but as one participant explained, “everyone has solutions, but no one quite understands the problem.” We agree that our community should draw on thinking around these issues, such as Matt Andrews’ Problem-Driven Iterative Adaptation and the Doing Development Differently conversations. In practice, problem-led development requires not only adaptation and iteration, but a nuanced understanding of the political landscape. It was exciting to hear, therefore, that USAID Liberia is aiming to design more problem-led programming with an emphasis on research, mapping key actors and identifying political constraints upfront.
Second, encourage data-driven decision-making. Part of understanding the problem is gathering data. Encouragingly, Liberian organizations like the Bush Chicken (one of our six “accountapreneurs”) are now making data-driven decisions that are helping them improve outcomes (in this case, increasing readership). Donors encourage baseline analyses, but too often the process of data collection is overlooked or deemed too expensive by civil society organizations. When data is collected, it is often reported out as a requirement for donors without undergoing the time-consuming analysis that is critical for learning. If we want to learn, civil society will need to prioritize more rigorous data collection and analysis to support problem-led program development. While this process might be easier with online tools and approaches, the group also highlighted practical, low-tech ways Liberian civil society can document learning, including through innovation diaries, feedback events and community-building spaces.
Third, shift from singular events to collective processes. The dynamics of donor-partner relationships in Liberia still incentivize individual projects and events. Our discussion highlighted the idea that transcending one-off activities to instead support longer-term, cumulative engagement enables higher quality learning and outcomes over time. This in turn allows organizations to better know those they serve and accurately address their needs. For example, John Kamma and the Citizens’ Bureau in Logan Town have designed their community mediation program from the community upwards, hiring mediators from within the localities they serve and engaging daily with the citizens they seek to support. This continuity and familiarity enables rapid changes in program design and has allowed the Citizens’ Bureau to develop a robust network of mediators and supporters that is expanding in response to citizen demand.
Fourth, we have to move beyond rhetoric on “failing”. Participants highlighted their willingness to try new approaches and risk failure, but also a wariness that donors do not yet provide enough space for this process to actually take place. Development programs tend to involve more than one level of contracting, and even where donors, such as USAID, do seem to be adopting this approach in Liberia, this mindset does not always influence the ways that sub-grantees are held to account. In fact, the space to fail is arguably most important for these smaller organizations who may not be able to rely on a longstanding reputation or relationships for future funding streams if programs do not go to plan. Ways to make the failure mindset real in Liberia might include localized efforts to admit failure; innovation funds to test new ideas within larger programmatic approaches; and more transparent, interactive reporting around activities. At Accountability Lab, we are starting to do this through quarterly impact calls, for example.
Finally, forming communities and networks is an outcome in and of itself. Relationships among people and organizations working towards a common cause help to close knowledge gaps, especially between those who can provide support and citizens who are working in communities around accountability issues. One donor representative pointed out that in practice, donors rarely take the time to know the people they work with; nor do they treat network-building as a key performance indicator. This is a huge missed opportunity to improve outcomes and coordination. Even during our conversation, we discovered three ongoing data-gathering exercises that were unaware of each other, but through which there might be useful synergies and resource sharing. The need to create meaningful, structured opportunities for face-to-face interaction is not a profound insight, but it is surprising how rarely it actually takes place in Liberia. We hold periodic “friendraisers” at Accountability Lab as one means to build community around accountability issues. Based on the demand after this discussion, a working group around adaptive learning that includes diverse voices from community-based organizations, donors and others in the field may be another way forward.