Yesterday, Vijaya Ramachandran and Julie Walz of the Center for Global Development provided a nice overview of the U.S. government’s review of its Haiti earthquake response. Ramachandran and Walz found that while the review includes “some frank and enlightening assessments of USG [U.S. government] response and coordination,” it contains “very little discussion of aid accountability.”
As Ramachandran and Walz point out, the authors of the review couldn’t determine the effectiveness or impact of aid because of a “disquieting lack of data.” Part of the problem seems to stem from how data collection and management are viewed by aid workers and USG employees, who made up the vast majority of sources for the review. The report states:
During the Haiti response, limitations related to information management followed two major lines. First, there were limited data available for tactical and operational decisions; and second, there were overwhelming requests for data and information from policy leaders in Washington that made systematic data collection more difficult. These demands were often driven by reports in the media.
Thankfully, the authors note that at least “some” of those they interviewed understood that the former led to the latter: the limited availability of data was what generated the “overwhelming” number of requests. Others told the authors that requests for information “detracted from the on-ground response” as they were forced to “‘chase down’ facts.”
Of course, data is important to the on-the-ground response as well, as the report points out:
Data collection, through surveys and assessments, is an essential component for managing a disaster response. Surveys and assessments are used to identify the needs of the affected population to direct the response. Ideally, these types of data can be used to measure the overall impact of the humanitarian response.
So while chasing down data after the fact may have detracted from the response, if those same responders had had data to begin with, they might have been able to respond more effectively.
Perhaps part of the problem is also the type of data that is collected. The review authors point out that there “is a great deal of information available on output indicators, such as liters of water or tons of food delivered, but limited information on the actual impact or outcomes of these interventions.” Similarly, NGOs criticized USAID because the “focus of USAID monitoring and evaluation was on numbers as opposed to the quality of the response.” The report continues, “NGOs felt that too much emphasis was placed by USAID on quantitative results, such as the number of temporary shelters constructed, rather than the sustainability of these shelters.”
If data collection and management were made a more central part of relief efforts, and greater reporting requirements were incorporated into contracts and grants, aid workers would not have to “chase down” information after the fact, and aid could be delivered more effectively to those in need. Meanwhile, those tasked with reviewing the U.S. government’s response to a humanitarian crisis might actually be able to evaluate how effective it was.