Aldo Benini Patrice Chataigner Nadia Noumri Leonie Tax Michael Wilkins
A note for ACAPS
Monitoring information gaps
Assessment registries, formerly known as “surveys of surveys”, are databases about the flow of needs assessments in a crisis or disaster zone. Their purpose is twofold:
1. The registries are archives of shared humanitarian intelligence.
2. They help monitor the distribution of the assessment effort, revealing coverage by areas, social groups and sectors, as well as open and emerging gaps.
This note addresses the second function. It discusses concepts of information value and information gaps as well as key lessons from analyzing collections of assessment reports in four recent crisis contexts. It presents a database template for future registries, supplied in a companion Excel workbook. For greater authenticity, the template is filled with registry data from the response to the Nepal earthquakes in 2015. The note is a continuation of an earlier technical note (Tatham 2011) and comes shortly after a 10-year review of assessment reporting trends and methods (Tax and Noumri 2016).
Assessment resources are in short supply vis-à-vis urgent or difficult information demands. To the extent that agencies share their reports, a coordination body such as UNOCHA can map the progress and quality of the combined effort. Plausibly, only one or two information management or assessment experts will be tasked to read and record the flow of reports on a daily basis. The experts will rarely be able to evaluate the reliability of the underlying data or the validity of the measures that the assessment teams pursued. But they can follow coverage as well as timeliness and shelf-life. They can also form a summary judgment about the degree of detail and the implied ability to prioritize.
It is along these modest lines that we investigate the definition and dynamics of information gaps. We also review past efforts to track assessments in four crisis contexts and enumerate the lessons learned from each:
· Syria: ACAPS’ involvement in the “Syria Needs Assessment Project (SNAP)” led to the conclusion that the “ability to support solid judgments on the priority needs and to quantify needs on a sectoral level” was a reasonable and necessary standard in evaluating the usability and value of assessments. Moreover, in situations of frequent lack of access and patchy indicators, both the severity of situations and the quality of the assessment information were best measured on simple ordinal scales. Assessment gaps and priorities could then be established by comparing the values of governorates, districts, etc. on these scales.
· Ebola: ACAPS monitored the progress of sectoral assessments during the Ebola virus disease (EVD) epidemic in West Africa, particularly in Sierra Leone and Liberia. One of the lessons was that assessment performance varied considerably across the subsectors within a given sector. For example, paradoxically, within the health sector in Sierra Leone, the ability to prioritize was better in assessments that addressed the availability of health care services than in those primarily concerned with disease surveillance. Neither then nor now did the humanitarian community have a standardized list of sub-sectors; nevertheless, the ability to elucidate differences in severity and priority between sub-sectors makes assessments more valuable.
· Nigeria: ACAPS reviewed a year’s worth of assessment reports about the region embroiled in the conflict with Boko Haram. In a novel database format, the analysts created a record for each combination of report and covered Local Government Area. They rated the quality of the information for each of 15 sectors and functional areas. This enabled a simple form of information value estimate. Our secondary analysis shows that the humanitarian community effectively concentrated assessment efforts on the most severely affected accessible areas – a gratifying conclusion.
· Nepal: During the four months following the April 25, 2015 earthquake, a designated assessment cell in Kathmandu recorded incoming assessment reports by the level of their administrative units. By the second week, the vast majority of reports already detailed their findings at the lowest gazetteered level – the Village Development Committee – a measure of how rapidly the affected terrain was penetrated. Yet, over the course of the observation period, almost a fifth of all assessments were not that specific. This echoes the “multi-resolution problem” known from remote sensing; it makes it difficult to evaluate information gaps at the lower level. A second challenge arose from the obsolescence of information in the rapidly changing recovery situation. Although the assessment effort achieved good coverage within the first month, two months later much of the information was outdated. The humanitarian community then ramped assessments up again, in tune with preparations for the next funding cycle. Multi-resolution in space and decay over time are thus factors that have to be modeled in information gap estimates.
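The two factors named above – reports arriving at different administrative resolutions and information expiring over time – can both be folded into a single gap estimate. The following is a minimal sketch in Python, not part of the workbook; the two-level unit structure, the 30-day shelf-life, and the minimum usability rating are illustrative assumptions:

```python
from datetime import date

# Hypothetical records: each assessment covers a unit at some admin level,
# with a report date and an ordinal usability rating (1 = low .. 3 = high).
# A district-level record carries None in the VDC position.
assessments = [
    {"unit": ("District A", "VDC 1"), "date": date(2015, 5, 10), "rating": 3},
    {"unit": ("District A", None),    "date": date(2015, 6, 20), "rating": 2},
]

SHELF_LIFE_DAYS = 30  # assumed shelf-life before information counts as expired

def covers(assessment_unit, target_unit):
    """A district-level report (VDC = None) covers every VDC in that district,
    but only at coarser resolution -- the multi-resolution problem."""
    district, vdc = assessment_unit
    return district == target_unit[0] and vdc in (None, target_unit[1])

def is_current(a, today):
    """Time decay modeled as a hard cut-off at the shelf-life."""
    return (today - a["date"]).days <= SHELF_LIFE_DAYS

def gap(target_unit, severity, today, min_rating=2):
    """Severity-weighted gap: the full severity score counts as the gap
    when no current, sufficiently usable report covers the unit."""
    usable = [a for a in assessments
              if covers(a["unit"], target_unit)
              and is_current(a, today)
              and a["rating"] >= min_rating]
    return 0 if usable else severity

# VDC 1 is covered in May; by late July both reports have expired.
print(gap(("District A", "VDC 1"), severity=4, today=date(2015, 5, 20)))  # 0
print(gap(("District A", "VDC 1"), severity=4, today=date(2015, 7, 25)))  # 4
```

Summing such per-unit gaps over all units of interest, and recomputing them as the reference date advances, yields the kind of gap-over-time curve that the Nepal registry data exhibit.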
After the review of lessons learned, we turn to the practicalities of information gap management in future humanitarian actions. We provide an Excel workbook template that translates the assessment registry information into a quick look-up facility, into useful estimates of the gaps across the entire theater, and into lists of units of interest ranked by weighted assessment gaps:
· The quick look-up facility answers whether a unit of interest – a combination of administrative area and sector, for example – has been covered by any recent assessment. It screens out those that have exceeded their shelf-life or fall short on a scale of useful information on which the registry team rates every report.
· The statistical overview depicts the extent of information gaps over time, against user-selected sectors, shelf-lives and information standards. The gaps are weighted by how severe the assumed impacts are in the various units of interest (the severity score, formed from preliminary indicators such as the rough number of destroyed buildings in a sub-district). A gap in a more severely affected unit matters more than one in a slightly affected unit. As more and more assessments are completed, providing a clearer operational picture, the severity scores must be updated, and the overall coverage view will adjust.
Figure 1 depicts the dynamics of information gaps for 200 days from the crisis onset, calculated at three levels of resolution, in a simulated scenario with severity beliefs about each local area and an information usability rating for each area-sector combination in the assessment reports.
Figure 1: Visualization of the information gap dynamic

· The workbook produces shortlists of the regions, districts and sub-districts with the most highly weighted information gaps. The lists update automatically when the user changes a parameter.
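To make the look-up and shortlist mechanics concrete, here is a hedged Python sketch of the same logic that the workbook implements with basic Excel features. The registry rows, severity scores, shelf-life and rating threshold are invented for illustration only:

```python
# Hypothetical registry rows: (unit, sector, days since report, rating 1-3).
registry = [
    ("Sub-district X", "WASH",   12, 3),
    ("Sub-district X", "Health", 50, 3),
    ("Sub-district Y", "WASH",   20, 1),
]

# Assumed severity scores for the units of interest (higher = worse affected).
severity = {"Sub-district X": 2, "Sub-district Y": 5, "Sub-district Z": 4}

def covered(unit, sector, max_age=30, min_rating=2):
    """Quick look-up: is there any current, sufficiently rated report
    for this unit-sector combination?"""
    return any(u == unit and s == sector and age <= max_age and r >= min_rating
               for u, s, age, r in registry)

def shortlist(sector, max_age=30, min_rating=2):
    """Units ranked by severity-weighted gap for one sector: a unit appears
    only if it lacks usable coverage, and more severe units rank higher."""
    gaps = {u: sev for u, sev in severity.items()
            if not covered(u, sector, max_age, min_rating)}
    return sorted(gaps.items(), key=lambda kv: -kv[1])

print(covered("Sub-district X", "WASH"))   # True: recent, well-rated report
print(shortlist("WASH"))                   # Y (low rating) and Z (no report)
```

Changing `max_age` or `min_rating` plays the role of the user-selected shelf-lives and information standards in the workbook; the shortlist recomputes accordingly.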
The template has not been tested in the field, but it has been run with the registry database from the response to the earthquakes in Nepal in 2015. It consistently produced the three outputs as we changed the parameters in its user interface. We are confident that the template can provide assessment registry teams with the core mechanisms to rapidly build an appropriate database in their specific circumstances. It is deliberately built from basic Excel features, without resorting to macros or user-defined functions, thus lessening the need for outside expert support.