Understanding factors to evaluation uptake: A user centered approach to adapting evaluations and learnings (August 2021)

INTRODUCTION

Evaluations and learning exercises conducted in emergency responses allow for rapid course correction and accountability to individuals experiencing acute crisis through improved humanitarian responses. However, evaluations are only as successful as the extent to which their findings and recommendations are deemed useful and actionable by end users and decision makers. The International Rescue Committee (IRC) sought to understand the enabling and limiting factors associated with the evaluation and learning process and its ensuing recommendations by implementing several evaluation methodologies across responses and soliciting feedback from users on their experience. This approach was used to begin to unpack what motivates teams to conduct the exercises and act on their findings.

Real Time Evaluations (RTEs) with modified methodologies have been used by the IRC since 2012 with the aim of producing an immediate snapshot of the strengths and challenges in an emergency response to empower urgent, corrective action. However, there are important challenges to conducting evaluations of emergency programming, including short timelines, over-burdened staff, security and safety concerns, de-prioritization of evaluation activities, and rapidly evolving contexts.2 These challenges limit not only the conduct of evaluations but, more importantly, the uptake of evaluation findings. Even when data are available, they may not be used to improve programming if follow-through is lacking. This means resources may be spent on evaluations in emergency settings that may or may not lead to improved interventions for emergency-affected populations.

Over the last two years, the IRC implemented three different modalities of program evaluation in fourteen different emergency responses, ranging from an in-depth mixed-methods approach (a hybrid approach including RTEs) and a light-touch qualitative approach (After Action Review,3 or AAR) to a purely quantitative scorecard (Emergency Response Review,4 or ERR), and sought to understand the characteristics that make the approaches useful as well as the changes that would be needed for future applicability. Because use of, and interest in participating in, evaluations or learning exercises requires a high level of buy-in, a commitment was made to continuous, user-informed improvements. Sessions, termed iteration sessions, were held periodically over the course of the two years to review user feedback about the evaluations as well as facilitators of and barriers to implementation and uptake.

While COVID-19 substantively changed the trajectory of the proposed study, an exploratory review is shared here to provide an overview of the process implemented to improve the methodologies and the initial learnings gleaned from this process. A user-centric approach to exercise adaptation was seen as an integral part of ensuring uptake when the exercise is not mandatory. The objective is to share this process and learning in the event it can inform quality improvement of evaluation and learning exercises for others in the humanitarian sector.