Improving the State-of-the-Art: The Peacebuilding Evaluation Project Evidence Summit

News and Press Release

Key stakeholders in the peacebuilding field face well-known challenges in monitoring and evaluating their initiatives: increased pressure to demonstrate effectiveness, constrained budgets, and rising standards for what counts as credible evidence. But it is also the case that organizations are finding creative ways to address these challenges.

To learn from the current efforts underway, the United States Institute of Peace (USIP) and the Alliance for Peacebuilding convened the first Peacebuilding Evaluation Project: Evidence Summit. The day-long event was held at USIP headquarters in Washington, DC in December 2011.

Prior to the summit, a call for presentations was released, and a review committee selected nine organizations to present evaluations to an audience of donors, evaluation experts and practitioners. Each presentation was followed by feedback from a panel of donors, practitioners and evaluation experts, and finally by a broader conversation with audience members.

The premise of the Evidence Summit was to build on success. By identifying, critiquing and then disseminating these presentations, the summit provided tangible examples of effective peacebuilding evaluation as well as insight into how evaluation can be further improved.

Presentations were made by officials from:

  • Global Partnership for the Prevention of Armed Conflict
  • CDA Collaborative Learning Associates
  • Search for Common Ground
  • Pact, Kenya
  • Friends of the Earth – Middle East
  • Mercy Corps
  • Peace Dividend Trust
  • The Early Years
  • Acting Together, Brandeis University

Panelists and audience members included representatives from USAID, the U.S. State Department, the World Bank, the United Nations, the International Development Research Centre and other major nongovernmental organizations, foundations and evaluation consultancies. In addition to U.S., Canadian, and European participants, attendees came from Kenya, Israel, Ireland, Iraq, Thailand and the Philippines. During the presentations and discussions, several key themes emerged:

  • There was sustained discussion of the tension between evaluation methodologies that satisfy the requirements of donors and those that are most useful for organizational learning and decision-making. It was clear by the end of the discussions that many considered these two imperatives irreconcilable. A participant in the closing plenary made the perhaps radical suggestion that the path forward may be simply to separate these two kinds of processes, instead of continuing to pretend a single methodology can meet the needs of both external and internal audiences.

  • Multiple participants discussed the challenge of communicating the results of more sophisticated methodologies. The good news is that organizations have improved their understanding of how to use these methodologies to evaluate peacebuilding programs. However, this raises the issue of how to effectively communicate results to communities, colleagues, donors and boards of directors who may never have heard of the methodology. One participant, for instance, noted that we need to work to "make complexity comprehensible."

  • There was an understanding among the participants of the power and importance of qualitative methods, and simultaneously a frustration with the difficulty of learning to use these methods effectively. One participant indicated that their organization had wasted significant resources figuring out both how to gather qualitative data effectively and how to analyze that data. Thus, there was a perceived need for more learning on how to deploy qualitative methodologies that continue to produce results with richness and depth but can also convince external audiences of their rigor and credibility.

  • There was conversation in many of the sessions about the linkages between project-level evaluations and broader research on what makes peacebuilding effective. Participants agreed that project-level evaluations only become meaningful in the context of sustained research on conflict dynamics, broader theories of change and the most effective types of peacebuilding programming. The suggestion was made to create better platforms for systematic collaboration between peacebuilding practitioners and academics working on peacebuilding issues.

Next Steps

In early 2012, a full report on the Evidence Summit will be released by the Alliance for Peacebuilding and USIP. In addition, following up on one key suggestion from the participants, evidence summits will be planned in other cities that are hubs of the broader peacebuilding community; these could include London, Geneva, Nairobi or Bangkok. As part of the broader Peacebuilding Evaluation Project, the Alliance for Peacebuilding is developing a voluntary donor standards document designed to guide how donors support evaluation processes, as well as strategies to more effectively link practitioners and academics. Finally, both USIP and the Alliance plan to make the DC Evidence Summit an annual event to showcase the state of the art in peacebuilding evaluation.