Significant effort is now being applied to improving the performance of international development assistance by measuring the change it brings about. But why now and not long before? And how well are such efforts working?
One answer to the ‘Why now?’ question is that a history of Cold War politics and foreign policy agendas uncoupled aid allocations from development results. Consequently, agencies were not penalized (financially or in other ways) for unsound lending or bad projects. One unhealthy legacy of past East–West rivalry was that achievement came to be equated with disbursement, the primary agency objective being to push funds.
Another answer is that recent investment in impact assessment is a response to diminishing political will towards (and levels of) official aid. This trend is allied to a perception that aid does not do enough to reduce poverty. Convincing funders that aid makes an adequate difference is therefore an urgent undertaking. Adopting impact indicators thus serves the domestic need of official agencies to better argue their case with parliamentarians and ministries of finance. As Simon Maxwell’s article (see pp28–30) suggests, adopting concrete targets offers a new stimulus for reversing the decline in aid and better grounds on which to mobilize policy reform.
By and large, current efforts at improving measurement focus on resolving two technical difficulties. First is measuring both ‘hard’ impact – tangible improvements in, for example, health and agricultural productivity – and ‘soft’ impact, such as empowerment and organizational capacity.
Second is attributing any change observed to the actions of foreign aid. Using impact information to enhance development performance requires accurate information about cause. Aid agencies will not be able to decide how to improve what they do if they cannot determine precisely what processes and factors led to a particular result.
Identifying the real causes
A major problem in identifying causes is that impact measurement tends to take little or no account of the network and flux of contending interests and influences of all the actors involved. Instead, the assumption is that all interests are equally shared and aligned along a coherent sequence of relationships linking Northern donors and Southern recipients. In other words, the model employed is one of a one-way distribution chain. The logical frameworks commonly used to design development interventions are a typical expression of this linear assumption.
However, through excessive ‘projectizing’, each link in the chain is guarded by an ‘evaluation firewall’, which protects higher levels from eventual ‘heat’ from below. In a typical evaluation, interests and historical (pre-)conditions located higher up the chain are simply not taken into sufficient account. Achievement is the sole responsibility of lower levels. Performance is not treated as a co-responsibility of everyone involved. Consequently, evaluations seldom capture the full picture of factors contributing to performance, particularly donor behaviour. The result is an inadequate foundation for real improvement or for achieving authentic ‘partnership’.
A look at the practice of evaluation shows that firewalls typically serve a protective function in terms of organizational behaviour. They allow those who dispense aid to obscure the fact that decisions may be driven by considerations other than the overt goals they are intended to achieve – such as chasing disbursement targets, generating domestic benefit, and satisfying internal and international political agendas. In short, development impact is only one factor feeding into decision-making along the aid chain and hence influencing performance.
This chain approach to impact measurement impedes a comprehensive appreciation of factors affecting achievement. It also works to exclude the initiative, involvement, judgement, learning, ownership of the process and use of results by recipients, be they governments or ultimate beneficiaries. Steps being taken by many agencies to make evaluation more participatory, and to involve a wider array of stakeholders, can begin to tackle such structural exclusion – though the evaluations undertaken are typically still those that donors, rather than recipients, want. Nevertheless, adopting a multiple-stakeholder perspective is a good starting point for applying a ‘systems’ approach to aid. Such a perspective will be even more useful if it is owned and driven by recipients – the stated intention of other aid instruments, such as the country-defined Poverty Reduction Strategies now required by the World Bank and International Monetary Fund.
The general point is simple. Improvement in performance of development assistance will be seriously constrained if a comprehensive, ‘systems’ approach to measuring impact does not replace a narrow, self-protective ‘chain’ view.