THE OECD DAC EVALUATION CRITERIA IN HUMANITARIAN ACTION: WHAT DO WE KNOW?
What are the key challenges and issues associated with applying the OECD DAC criteria in evaluations of humanitarian action?
What can we learn from examining these challenges and how can we use this learning to improve existing guidance?
These were the guiding questions for recent ALNAP research, and the findings are captured in this summary brief. ALNAP has already produced the most widely consulted guidance on the OECD DAC criteria and their use in humanitarian evaluations. But this guidance is more than 15 years old and needs a refresh. To inform the update, ALNAP commissioned a review to gather and analyse perspectives from written sources, providing evidence for a wider consultation process and for the rewrite of its guidance on the use of evaluation criteria in humanitarian settings.
This brief highlights the key messages from the main research paper, available here. It includes a summary of each of the seven OECD DAC criteria as defined in ALNAP’s 2006 guide. The final section reviews cross-cutting issues and potential additional criteria. Each section includes definitions and a summary of key issues, followed by questions for further exploration.
Background
In 2006, ALNAP published Evaluating humanitarian action using the OECD DAC Criteria, an ALNAP guide for humanitarian agencies. The OECD DAC evaluation criteria are the pre-eminent criteria for evaluating development and humanitarian assistance (Kennedy-Chouane 2020, Picciotto 2013, cited in Patton 2020). As updated in 2019, the six criteria are: effectiveness, relevance, efficiency, impact, sustainability and coherence. ALNAP’s 2006 guide interprets the criteria for application in humanitarian action as: effectiveness, appropriateness/relevance, efficiency, impact, coverage, coherence and connectedness.
Strengths
The OECD DAC evaluation criteria are widely applied – even more so than originally expected (Lundgren 2017). This has important advantages. It makes evaluation synthesis easier, helps to capture common weaknesses in humanitarian action, and makes it easier for evaluators across the globe to work with each other (ALNAP 2016). Lundgren (2017) observes that the criteria are relatively easy to understand and use, and cover the key issues that are important to consider when assessing the performance of an intervention.
Common issues in applying OECD DAC criteria to humanitarian action
As can be expected, such popular and widely applied criteria have been subject to critique. Some of the broader criticisms include an inability to evaluate transformational change (Patton 2020; Ofir 2017), and an insufficient focus on gender, equity or human rights concerns (OECD DAC 2018). Other common issues identified across the criteria include:
• The importance of positionality: whose perspective is used in defining the evaluative questions, and who conducts the evaluation. Who decides what counts as effective, and how the performance of an intervention is measured, is likely to have a significant impact on findings. This is related to calls for the decolonisation of evaluation. Chilisa and Mertens (2021: 242) find that evaluation is dominated by Western culture and approaches, reinforcing biased power relations. Ofir (2017) applies this directly to the OECD DAC criteria, calling out insufficient recognition of the importance of culture and cultural differences.
• Others have called for more guidance to improve standardisation (Darcy and Dillon 2020), while maintaining flexibility in application (DEval 2018).
• Another key challenge is the variable utility and application of the criteria, which depend on the type of programme, the organisation and the intent of the evaluation.
A foundational question for future humanitarian guidance is how closely it should align with the OECD DAC guidance (and with the adaptations the OECD made to the criteria in 2019).
Methodology
The research, from which this brief is adapted, relied primarily on desk review and analysis of 155 documents: 43 guidance documents for humanitarian evaluation, 53 papers from academic and grey literature and 59 humanitarian evaluations. This was supplemented by engagement with an advisory group established by ALNAP, including formal feedback processes.
The full paper compares each criterion and cross-cutting criteria across sector-wide guidance and standards published by ALNAP, OECD DAC, IASC and the Core Humanitarian Standard; highlights key issues identified in the literature; and analyses organisational guidance and evaluations to provide a snapshot of the contemporary application of each criterion. It also identifies key questions for further exploration.
The methodology is limited in depth by the availability of literature specific to the research questions for each criterion, and by the time required for targeted analysis. It does not reflect contemporary or unwritten views of evaluators; these will be captured in the next phase of ALNAP’s consultation process.