The Evidence Base on Anticipatory Action

Evaluation and Lessons Learned


WFP and ODI review the evidence base on anticipatory action (A-A) and conclude that robust empirical data and a strong monitoring, evaluation and learning agenda are necessary to scale up the approach effectively and to ensure A-A achieves the intended changes in both disaster response systems and people's vulnerability and resilience to climate change.

Executive summary

Anticipatory action (A-A) is attracting global attention, with pilot initiatives that deliver support to vulnerable communities before disasters strike growing in number and size. At the UN Secretary-General's Climate Action Summit on 23 September 2019, the Risk-informed Early Action Partnership (REAP) was launched, with more than 30 partners committing to vastly increasing the coverage of A-A. The target is to reach one billion more people by 2025 through financing and delivery mechanisms connected to effective early action plans.

As A-A expands, so too does the importance of monitoring and evaluation to improve practice, strengthen accountability, and enhance reflection and learning. This paper takes stock of the evidence produced so far on the benefits of acting early, prior to the onset – or deepening – of a crisis, to reduce its impacts. Overall, existing evidence indicates that the effects of A-A at household level are mainly positive, with beneficiaries experiencing, for instance, less psychosocial stress when floods hit, higher crop productivity and less food insecurity during prolonged periods of drought, and lower livestock mortality during severe cold spells. However, not all expected benefits are observed in all cases, and findings should be considered in relation to context and the kind of action taken. The range of counterfactuals used is also limited: although acting early can be better than doing nothing, it is less clear whether it is also better than doing other things at different points in time.

Initiatives that explicitly link forecasts to predetermined actions and financing are relatively new in the humanitarian sector, so the evidence base is thin but growing. The focus to date has been largely on producing evidence for advocacy: to generate agreement and buy-in from donors, set global targets and ultimately encourage further investment in A-A. Early studies have therefore focused on monetary benefits, using return on investment (ROI) and cost–benefit analysis (CBA). These studies help make the general case for A-A, but greater attention now needs to be paid to producing evidence in a way that can improve the design and delivery of A-A programmes.

This report proposes an evidence agenda for A-A focusing on:

1. Greater investment in monitoring, evaluation and learning (MEL) systems

As more A-A initiatives come online, the need to evaluate and compare them becomes more critical; larger initiatives also mean more resources are available for MEL. Greater investment in MEL is needed to ensure evidence is produced to a high standard and can be used to improve the design and delivery of A-A on the ground.

2. Development of a common analytical framework for A-A evaluations

Implementing agencies need to agree a common analytical framework by which to undertake and ultimately assess A-A, and a set of principles that encourage methodological rigour in testing the appropriateness of early actions. This will encourage coherence and quality in the evidence base. A critical first step is for agencies to share evaluation reports and the data and methods on which they are based.

3. A focus on improving the models

Special care must be taken not to overestimate the value of avoided losses when calculating and presenting the monetary benefits of A-A using ROI and CBA methods. Transparency about models and assumptions is critical. Evaluation methodologies should also seek to capture and emphasise the collective benefits, or public goods, associated with A-A.

Efforts to improve evaluation methodologies for A-A are under way. Encouragingly, implementing agencies consulted for this study are already taking forward recommendations to share information and use more robust evaluation methods. There are moves to develop manuals and guidelines on best practice in monitoring and evaluation, and a new monitoring, evaluation, accountability and learning (MEAL) group on forecast-based early action has been set up and is exploring the idea of creating a common analytical framework for assessing A-A.

With strong monitoring, evaluation and learning frameworks built into the design of A-A initiatives, and as these initiatives grow in the attempt to reach one billion people, more substantial evidence will soon be available for assessing the benefits of acting early before disasters.