
Evidence and Knowledge in Humanitarian Action

BACKGROUND

1.1 Why this topic?
The subject of this background paper – and of the 28th ALNAP Annual Meeting – is how evidence and knowledge inform policy and practice in the humanitarian sector. This is not a new topic by any means, but it raises questions that have become increasingly pressing for the sector in the past few years. The mid-1990s saw a period of NGO-led self-reflection and standard-setting, followed by a UN-led focus on better coordination and leadership. This reflected general concerns about the overall performance of the international humanitarian system and the need for greater accountability. One strand of that concern – and of parallel donor-led initiatives like the Good Humanitarian Donorship agenda – has been the quality of the data and analysis that underpin crisis responses, the extent to which those responses are genuinely ‘needs based’, and whether their effectiveness can be demonstrated through evidence. Increasingly this has put a spotlight on the diagnostic and predictive analysis generated by early warning, needs assessment and monitoring processes. It has also placed a focus on the more retrospective judgements of evaluation processes, particularly as they concern impact and effectiveness. More generally, it raises questions about current humanitarian policy and practice and the quality of evidence on which they are based.

1.2 Just as important as the availability and quality of evidence is the question of how – or indeed whether – such evidence is used by decision-makers. Recurrent collective failures to respond decisively in the face of strong evidence of impending crisis (notably from famine early warning systems in sub-Saharan Africa) highlight the point that generating such evidence is only one part of the challenge. This is true also of evidence from past experience: a recurrent theme of evaluations is that the international system and individual organisations struggle to learn lessons and apply evidence from past experience to current practice (Sandison, 2006; Hallam, 2011). The ways in which evidence and knowledge are communicated, assimilated and acted upon by decision-makers are central to this.

1.3 It would be misleading to suggest that no progress has been made over the past two decades in relation to the generation and use of evidence and knowledge in the humanitarian sector, whether through organisational learning processes or through more collective endeavours of research, assessment, codification and standard-setting (Walker and Purdin 2004; Young and Harvey 2004). In this respect the sector as a whole certainly looks more professional than it did 15 years ago (Barnett 2005). For example, inter-organisational minimum standards like Sphere have been applied, and work has been undertaken on joint assessment and analysis within and between sectoral clusters. But in most areas of ‘diagnostic’ and ‘learning’ practice the humanitarian sector appears weak compared to other sectors, including the wider development sector. This cannot be entirely explained by the peculiar nature of the humanitarian enterprise and the constraints of working in crisis contexts. Underlying this paper and the ALNAP meeting is the sense that much humanitarian practice and policy has developed with only limited reference to the evidence base. As a result we may not be working as effectively as we could.

Many feel that the sector can and should do better, not least because humanitarians owe it to those they seek to assist to deal in actual – rather than hypothetical – problems and outcomes.

1.4 Various recent policy developments make these issues particularly pressing at present. Some of these concern donor expectations about the demonstration of results and of ‘value for money’. The humanitarian sector is increasingly subject to the same pressures as other areas of public spending in this regard. The expectation in the medical and public health spheres, and in public policy more generally, is that practice should be justified against established ‘best practice’ and that neither existing policy nor the authority of experts should be immune to challenge on such grounds. What constitutes best practice, and how it can best be identified, are matters of debate in the wider social sector. Humanitarians are felt by many to have lagged in this debate, even compared to their development colleagues. We may have something to learn from the ways in which other sectors have responded to these pressures. In this paper, we consider some of the more relevant points of comparison with other sectors.

1.5 This paper aims to help structure a dialogue about these issues. It consists in part of a ‘stocktake’ of current practice in the humanitarian sector with regard to the generation and use of evidence, highlighting apparent strengths and weaknesses. It touches on some of the more relevant aspects of current practice in other sectors, including medicine, public health and law. It raises questions about incentives and disincentives for the use of evidence in the humanitarian sector. It also considers some of the ways in which evidence-informed practice might be strengthened, without attempting to provide more than indicative answers to these questions.

1.6 The paper is concerned with evidence and knowledge as seen from various perspectives: that of the person affected by crisis; that of the humanitarian practitioner concerned with response decisions, or with the design, implementation and monitoring of specific programmes; that of the evaluator or researcher, concerned with testing particular programmes or strategies and considering what generic lessons can be learned; and that of the policy-maker or manager concerned with devising strategy, policy and standards. These may of course be overlapping concerns. The point is that we are concerned both with evidence that is context-specific and necessary to inform real-time response decisions; and with evidence that supports more general conclusions about (for example) the relative merits of different programme approaches to different types of crisis. In short, we are concerned with evidence and knowledge in relation both to practice and to policy.