Measuring the effect of a single rain cloud

Alex Jacobs

I was delighted to see the reference to Mango’s ‘Who Counts?’ in Alliance’s special feature on measuring impact last December – though I would have liked to see more space given to some interesting initiatives that aim to find new ways to measure and report performance without trying to measure impact.

Last week, a senior UK politician told me that he wanted to reform aid to ensure that funding decisions were linked to proven impact. Our discussion exactly mirrored the issues discussed in Alliance. He and I agreed and disagreed in equal measure. We agreed that there was a real need to ensure that limited aid funds are used as effectively as possible. We disagreed that ‘impact’ was the right tool for the job.

Roger Riddell’s 2007 book Does Foreign Aid Really Work? reviews a huge variety of literature, impact evaluations and synthesis reports to tackle his title question. The honest answer he gives is that ‘we still don’t know – not for lack of trying, but due to the inherent difficulties of tracing its contribution’. Over the years, many different attempts have been made to measure impact and relate it to specific external interventions, but with strictly limited success. ‘Impact’ has proved to be an inappropriate concept for managing or reporting the performance of aid work.

An extensive literature explains why, focusing on the problems of causality, unintended results, the non-linear and subjective nature of social change, and the fact that external interventions are normally only one contribution to ‘development impact’, and often quite a small one at that. Trying to measure the performance of aid through ‘impact’ is like trying to measure the effect of a single rain cloud on the level of the sea – and the attempt can have seriously problematic consequences for how people think about and deliver aid.

However, it’s not all storm clouds and gloom. A number of exciting initiatives are working on practical alternatives that generate reliable reports of performance and at the same time encourage good practice on the ground.

The Humanitarian Accountability Partnership (HAP) has published a new standard for measuring and reporting how accountable aid agencies are to their intended beneficiaries – a key driver of performance. One of a growing number of organizations that are committed to the HAP standard is Concern Worldwide, which is working with Mango to trial an innovative system called Listen First. This encourages field staff to engage respectfully with intended beneficiaries throughout the project cycle – and it allows managers to monitor and encourage this on a systematic basis. It also puts beneficiaries’ views centre stage by routinely surveying their opinions, for instance on how wisely they think project funds are being spent. Preliminary results are encouraging.

The IDRC has developed a different approach, called Outcome Mapping, which focuses on one specific category of results: changes in the behaviour of people or organizations with whom an agency works directly. These lie within the reasonable sphere of influence of the agency; they are not as distant or remote as ‘impact’. (This could be likened to measuring the effect of a single rain cloud on whether it helps refill a water tank, which is altogether practicable.) This too has been successfully field tested.

David Bonbright’s article in the December issue of Alliance concluded that we need more open recognition that the real question is not ‘what is aid’s ultimate impact?’ but ‘can we come up with new ways of managing and reporting performance?’ Answering that question requires collaboration among donors, implementing agencies and other innovators to develop practical tools that work.

Alex Jacobs
Director, Mango

