
For want of big data: The real evidence problem in humanitarian aid

Source: Thomson Reuters Foundation - Thu, 28 Feb 2013 12:00 GMT
Author: Abby Stoddard

Any views expressed in this article are those of the author and not of Thomson Reuters Foundation.

By Abby Stoddard, Humanitarian Outcomes

Humanitarians have a complicated relationship with facts. Relief operations rely upon empirical evidence at every stage: from the very beginning, to determine whether an emergency is in fact occurring, through all aspects of project design and execution, to ultimately demonstrating that the intervention has (one hopes) actually benefitted people.

Yet trying to get solid information during and after humanitarian emergencies can be an exercise in frustration: How many people are in need of help? How many responders are on the ground helping? To what extent, in the end, has any of it actually helped?

Next week, a hundred or so humanitarian practitioners will gather in Washington for ALNAP's conference on evidence and knowledge in humanitarian action, to share ideas on how to improve the use of evidence for tasks such as needs assessment and monitoring and evaluation (M&E).

In truth, at the level of individual programmes, many agencies already do this pretty well. Despite inherent problems with gathering evidence and establishing attribution in rapidly changing environments, beneficiary numbers are calculated, indicators are defined for measuring need and evaluating success, and fairly sophisticated statistical sampling and surveillance techniques are often employed to guide programming.

A few relief NGOs, such as the International Rescue Committee, have even started experimenting with the evidential ‘gold standard’ of clinical-style cohort studies and randomised controlled trials.   

As a group, humanitarian practitioners may naturally tend towards a quick-and-dirty approach when it comes to numbers, but they are not averse to them. If not quite as advanced as some of their development counterparts, they are certainly way ahead of political/peace missions, whose executors in U.N. offices still speak reverentially of the 'art of diplomacy' and its ultimate 'un-measurability' in terms of effectiveness. Individual humanitarian agencies that do not yet 'do evidence well' at least have a body of knowledge and a peer group to consult on the subject.

COORDINATION DILEMMA

The problem comes, like so much else in the international humanitarian system, in taking it to scale. A collection of even the most rigorously evidence-driven projects, each helping tens of thousands of people, will not add up to a big picture assessment or operational strategy for a major humanitarian emergency affecting millions.

In evaluations of humanitarian action, it is the systemic weakness in crisis-wide needs assessment and M&E - the lack of an overall strategic approach to assistance - that comes up again and again. The 'bitty-ness' of the humanitarian system means that it enters and leaves an emergency without an objective, evidence-based rationale for doing either.

A recent report by the Assessment Capacities Project (ACAPS) illustrated how individual agencies' decisions to respond to an emergency often have less to do with people's needs than with their own calculus of interests and opportunities. The underfunded emergencies window of the U.N.'s Central Emergency Response Fund makes allocation decisions based not on objective assessments of predicted needs, but on the budget shortfalls of the U.N. agencies that plan to be present and programming in a given country over the coming year.

For humanitarian action to become more evidence-based, information not only needs to be shared across humanitarian actors, it needs to be universalised, with common indicators and channels for amassing data. When we discuss the problem of evidence, therefore, perhaps we need to be asking some additional questions. They might include:

Early warning and early action - How can the humanitarian system employ techniques of forecasting and probability theory to create the evidence base for decisions on prioritising response resources and preparedness activities? (A toy sketch of this kind of reasoning follows after these questions.)

Impact evaluation - How can we amass the big data needed to employ inferential statistics to test our propositions and generate evidence of what works and what doesn't? (A second sketch below illustrates the sort of test this would enable.)
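To make the early-warning question concrete, here is a toy sketch, in Python, of the probabilistic reasoning involved: a Bayes'-rule update of the estimated probability of a crisis once an early-warning indicator fires. The indicator, its sensitivity and its false-alarm rate are all invented for illustration; no agency's actual model is implied.

    # Toy Bayes'-rule update: P(crisis | early-warning indicator fired).
    # All numbers below are invented for illustration only.

    def posterior_crisis_probability(prior, sensitivity, false_alarm_rate):
        # P(fire) = P(fire|crisis)P(crisis) + P(fire|no crisis)P(no crisis)
        p_fire = sensitivity * prior + false_alarm_rate * (1 - prior)
        # Bayes' rule: P(crisis|fire) = P(fire|crisis)P(crisis) / P(fire)
        return (sensitivity * prior) / p_fire

    # Assumed: a 10% base rate of crisis, an indicator that fires in 80%
    # of true crises and in 20% of non-crisis periods.
    updated = posterior_crisis_probability(prior=0.10, sensitivity=0.80,
                                           false_alarm_rate=0.20)
    print(f"P(crisis | warning) = {updated:.2f}")  # ~0.31

Even this toy version makes the underlying point: the base rate and the indicator's track record can only be estimated from observations pooled across many crises - precisely the system-wide evidence base the question asks for.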
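And for the impact-evaluation question, a minimal sketch of the kind of inferential test that pooled, crisis-wide data would permit: a two-proportion z-test comparing a hypothetical outcome rate between an assisted population and a comparison population. The scenario and every figure are invented.

    # Toy two-proportion z-test: did the assisted group fare better?
    # The data (recovery counts) are hypothetical.

    from math import sqrt, erf

    def two_proportion_z_test(success_a, n_a, success_b, n_b):
        """Return (z statistic, two-sided p-value) for H0: p_a == p_b."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Two-sided p-value under the normal approximation.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Assumed data: 420 of 500 assisted children recover from
    # malnutrition, versus 330 of 450 in a comparison group.
    z, p = two_proportion_z_test(420, 500, 330, 450)
    print(f"z = {z:.2f}, p = {p:.4f}")

With only one agency's beneficiary lists, samples are rarely large or comparable enough for such tests at crisis-wide scale; amassing data across the system is what would make them routine.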

Such endeavours would not be too technically difficult. The real problem, as ever, harks back to the coordination dilemma - in other words, the lack of a central authority to drive a system-wide process. This is reflected in the years that the humanitarian inter-agency community has struggled to come up with common needs assessment tools that will actually be adopted by a critical mass in the field.

Even the coordination gains of the U.N. cluster approach - which brings together relief agencies working in different sectors such as food, water and sanitation, or shelter - cannot bridge the final gap.

As systems theory holds, you cannot improve a system by perfecting its parts. Unfortunately, as evidence goes in humanitarian action, ‘the parts’ is all we have.

Abby Stoddard is a senior programme advisor at New York University's Center on International Cooperation and a partner at Humanitarian Outcomes.
