There are some basic questions for any serious discussion of social impact evaluation, especially in the field of social change philanthropy, and it is to the credit of the contributors to this issue of Alliance that these questions are extensively covered. In general, the key questions are: why is impact evaluation important? What are the desirable principles, values and approaches that must guide social impact evaluation? And, what are the gaps in knowledge and methodology that require continued attention?
Around these questions, the contributors articulate some refreshing ideas that bear highlighting. The first is the idea that social impact evaluation must be methodologically circumscribed by text, context and texture.
Text, context and texture
In other words, this is a field with both tested and continually evolving techniques and meanings. In a basic sense, we can say that there is a text of standards, best practices and approaches, an idea of what ‘good’ impact evaluation looks like and what bad practices must be avoided.
At the same time, impact evaluation is an ever-changing field that is shaped by context, in terms of both space and time. Finally, the very meaning and quality of ‘social impact’ is dependent on the texture of relationships among ‘partners’. That is, impact is not an externally imposed definition of causality, but a reflection of the values and interests of people and organizations in webs of interaction.
Intrinsic ethical values
The second striking idea is that, beyond the technicalities, the process of social impact evaluation has intrinsic (and not merely instrumental) ethical values. For example, many of the contributors express a preference for processes that level the ‘playing field’ and help throw the spotlight on the role and performance of donors as much as grantees and beneficiaries. The rationale for this, as Perla Ni points out, is that ‘our sector aims to empower the people and causes we serve’. Hence, beyond helping to improve programme performance, evaluation can and must be an opportunity for joint learning and mutual accountability.
Many of the evaluation practices and approaches described here, such as constituency feedback, participatory evaluation, stakeholder engagement, and grantee perception reports, are noteworthy in this respect. Consistently applied, these approaches can constitute the firmest basis for testing David Bonbright’s hypothesis that ‘the quality of an organization’s relationships with its beneficiaries … is highly predictive of its effectiveness and impact.’
The need for transparency and sharing
Third, many of the contributors highlight the need for greater transparency and sharing of information/knowledge about what has gone wrong and what is working. This is an intriguing idea and worth debating. The suggestion of an ‘information marketplace’, made by Paul Brest, is refreshing if it is inclusive of both grantee performance and donor practices. Otherwise, a predominant focus on grantee evaluation undermines the partnership ideal that most donors at least claim to seek in their relationship with their grantees.
Learning from the private sector
The fourth idea is that there is much to be learned from the private sector. The conventional view is that the precision that typically marks the ‘bottom line’ in the private sector is well-nigh impossible to achieve in the non-profit sector. However, as the examples of the Social Venture Technology Group and the Acumen Fund show, there is indeed considerable space for overlap and interlock between the two sectors in relation to the goals of philanthropy. That space is currently shaped by social investments and social venture enterprises.
This implies some basic common values between the for-profit and non-profit sectors, one being David Bonbright’s point that both sectors achieve impact and thrive when they ‘listen to their customers’. On the face of it, this is not surprising, for both the for-profit and non-profit sectors are supposed to be value-driven and results-oriented.
Caroline Hartnell’s captivating interview with the founder of Hand in Hand International, Percy Barnevik, also amply illustrates the new ways in which non-profit outcomes can be produced through for-profit strategies, such as measured scaling up and replication in the manner of business expansion. More importantly, Percy Barnevik makes the business point that ‘what gets measured gets done’ – that is, evaluation can stimulate impact, and it can be done efficiently. ‘You don’t need to sit and talk for hours…’ he points out. This is a persuasive response to the idea that impact evaluation is often either impossible or a diversion of energy and resources from work that directly produces results.
The fifth point that stands out is that increased attention to impact evaluation is not perceived by many donors as inimical to risk-taking (Andrew Milner). On the contrary, many of the grantmakers who were interviewed by Alliance for this special feature expressed the view that risk-taking is intrinsic to social change grantmaking.
This raises an intriguing question: if the fear of ‘unveiling failure’ is not a deterrent to impact evaluation, then how can risk assessment (an ‘upstream’ activity) become a spur for impact evaluation (a ‘downstream’ imperative)? In other words, how can risk and impact assessment become more organically linked in the grantmaking process?
It is possible that posing and answering this question would make impact evaluation strategically easier to accommodate within organizations. If risk analysis and evaluation benchmarks were approached simultaneously, and if, as Percy Barnevik argues, evaluation can be a determinant of performance, then impact evaluation would become intrinsically valuable to organizations in achieving their mandates. And, between the two ends (risk analysis and impact evaluation), more attention could go into monitoring as a basis for midstream course correction.
Building capacity for impact evaluation
The sixth idea has to do with the need to build capacity for impact evaluation. It is probably true that every organization has an implicit sense of its accomplishments. What is often lacking, however, is the capacity to track and ‘capture’ impact clearly and systematically.
In the global South in particular, such capacity is typically weak. As Fred Carden firmly puts it, ‘evaluation research cannot remain the preserve of northern-based institutions with northern values.’ Leadership development, as Daniella Malin explains, is a key component of this task.
To broaden this concern further, we can identify three interlocking circles of capacity. The first is capacity that resides within donor organizations to conduct impact evaluation (the piece by Inga Pagava covers this well). The second is capacity within beneficiary organizations, and the third is expertise and capacity that reside within specialized and independent institutions that pursue impact evaluation as a public good.
The third circle is probably where many of the values (independence, transparency, information dissemination, etc.) associated with impact evaluation might be easier to enhance. Ruth Levine and Bill Savedoff clearly show how the emerging multi-donor-supported International Initiative for Impact Evaluation (3IE) would serve such a purpose. As an independent institution, 3IE (and like-minded institutions) can help make evaluations more objective and systematically feed results and ideas into the ‘information marketplace’ that Paul Brest has argued for so well. The real challenge, however, lies in ensuring that these institutions reflect and represent the issues and values of the global South as much as those of the global North.
Who drives the demand for evaluation?
Another challenging issue is who drives the demand for evaluation. Across the board, the organizations asked by Alliance to explain how donor reporting requirements affect their work (reported by Andrew Milner, see p38) agree that donor-driven requirements undermine many of the values we would want to see drive impact evaluation and its conduct. Typically, donor-driven evaluations tend to be excessively expensive in time and money, inflexible in terms of methodology, and often too dependent on external consultants who have limited understanding of the local context and dynamics.
In general, then, this collection of articles represents a good mix of new ideas and insights into old and enduring challenges, as well as new challenges and opportunities on the horizon. The ‘take-away’ point of the collection is the finding by Betsy Schmidt and David Bonbright, reporting on the Keystone/Alliance survey of attitudes to evaluation, that there is ‘a broad acceptance of the importance of evaluation by donors and grantees alike’ and that a lot more remains to be done.
Akwasi Aidoo is Executive Director of TrustAfrica. Email firstname.lastname@example.org