Those who fund and implement aid programmes have a poor record of funding impact evaluations in order to learn what works and what doesn’t. Far greater investment in evaluation will be needed if aid programmes are to learn systematically from experience and improve their effectiveness. In fact, the knowledge generated by impact evaluations potentially benefits many more people than those directly involved in the programmes being evaluated, and to this extent it can be seen as a public good. But how do you fund such public goods? The newly formed 3IE is an attempt to find an answer.
In the past decade, unprecedented funding has become available for international development. The Gates Foundation is making grants on a larger scale than any previous foundation, while according to the OECD Development Assistance Committee, official development assistance from its member countries rose by 31 per cent to a record high of $106.5 billion in 2005. But how will we know whether this is effective? It is estimated that less than 1 per cent of development spending goes towards any type of evaluation; of that, a vanishingly small amount has been devoted to impact evaluation that permits estimation of net programme effects.
When the Center for Global Development (CGD) undertook to document successes in international public health interventions, the working group received 56 nominations. However, half of these were excluded from the final publication because there was no evidence that improved health outcomes could be attributed to the particular intervention.
A similar lack of evidence applies to many (though by no means all) widely promoted programmes. Microcredit is a good example. Since the 1990s, foundations and aid agencies have promoted microfinance as a way to empower women and reduce poverty, and it attracts between $1 billion and $1.5 billion annually. Yet in 2005 a major textbook on the subject reported that ‘there have been few serious impact evaluations of microfinance so far’.1
Barriers to learning
According to the CGD’s Evaluation Gap Working Group,2 there are a number of reasons for the systematic lack of attention to measuring impact. First, a portion of the knowledge generated through impact evaluation is a public good. Those who benefit from the knowledge include – but go far beyond – those directly involved in a programme and its funding. But because a cost-benefit calculation carried out by any particular agency is unlikely to include those wider benefits, an impact evaluation may appear costlier than justified by the expected returns.
Second, development institutions are typically concerned with doing rather than learning, and it is extremely difficult to protect the funding for good evaluation, or to delay the start of a programme so that the evaluation can be designed and a baseline study conducted. More often than not, resources initially earmarked for evaluation are redirected towards project implementation.
Third, large bureaucracies have deeply embedded disincentives to finding out the truth – particularly if the truth might demonstrate the failure of a politically favoured approach.
Fourth, impact evaluation, which compares experiences with and without a particular programme or intervention, is only one form of evaluation. An enormous amount of useful information is provided by process and operational evaluations that can improve programme implementation; by participatory evaluations that provide feedback, meaning, community engagement and design ideas; and by strategy, sector and country evaluations that look at a complete set of programmes. Managers and politicians look at all this work and ask why more is needed. While impact evaluations are not necessary or even appropriate for every programme, on a selective basis they are a necessary complement to these other forms of work. Without them, we cannot know whether programmes are actually responsible for positive results.
A major push on impact evaluation
Governments and development agencies are starting to respond to the call. The Government of Mexico has adopted legislation requiring impact evaluations of social programmes. In the World Bank, regional development banks, and a handful of bilateral donor agencies, pioneers are directing money, time and institutional influence to conduct them. The Acumen Fund, the Fritz Institute, and Pratham, among other NGOs, are opening up their programmes to careful impact evaluation. Private funders, including the Gates Foundation, the Hewlett Foundation and Google.org, are voicing strong support for more and better impact evaluation.
The efforts of individual agencies are not likely to be enough, however, given the underlying deterrents. Learning from experience requires sustained funding that extends beyond the annual budget or project cycles and rewards collaboration among researchers, policymakers and managers to design and implement impact evaluations. Furthermore, it requires independence and transparency – through prospective registration of studies, peer review, and public dissemination of data – to improve the quality of impact evaluation and maintain the integrity of its findings.
A group of developing country governments, aid agencies, foundations and NGOs has now established a small organization, the International Initiative for Impact Evaluation (3IE), to lead a collective effort to finance and promote impact evaluations. 3IE will aim to:
- identify enduring questions about how to improve social and economic development programmes through broad consultations in order to focus on studies that are needed, relevant and strategic;
- identify programmes where impact evaluations are feasible, findings can affect policy, and results will advance the evidence base;
- adopt, through periodic technical consultations, quality standards to guide its reviews of impact evaluations;
- finance the design and implementation of impact evaluations that address questions of enduring importance to policymaking in low- and middle-income countries;
- prepare or commission syntheses of impact evaluations to link the findings from individual studies with broader policy questions;
- advocate for the generation and use of impact evaluations;
- disseminate information about opportunities for learning, planned studies, designs, methods and findings.
3IE is not, and should not be, the only actor promoting impact evaluation. In fact, it will need to dedicate considerable time to coordinating its activities and collaborating with other groups. But it is hoped that 3IE’s unique features – its institutional independence, focus on impact evaluation, and broad membership – will help accelerate the development of systematic ways to learn what works and bridge the gap between good intentions and good results.
1 Beatriz Armendáriz de Aghion and Jonathan Morduch (2005) The Economics of Microfinance, MIT Press.
2 W Savedoff, R Levine and N Birdsall (May 2006) When Will We Ever Learn? Improving lives through impact evaluation, Center for Global Development. Available from http://www.cgdev.org/section/initiatives/_active/evalgap
Ruth Levine is director of programs and senior fellow at the Washington DC-based Center for Global Development. Email email@example.com
William Savedoff is a senior partner at international consulting firm Social Insight. Email firstname.lastname@example.org
For more information