Who should measure the impact of non-profits? Responses to Caroline Fiennes and Ken Berger

Paul Garner, David Bonbright and Tris Lumley

The non-profit impact revolution has taken a wrong turn. The job of examining non-profits’ impact should be done by independent specialists rather than by the non-profits themselves: this is what Caroline Fiennes and Ken Berger argue in the March 2016 issue of Alliance magazine. As self-styled ‘veterans and cheerleaders’ of that revolution, their article constitutes both a change of direction and a major mea culpa. But are they right? Alliance asked three experts from different vantage points to offer their perspectives…

Paul Garner, Professor of Evidence Synthesis, Liverpool School of Tropical Medicine

Are non-profits effective? Caroline Fiennes and Ken Berger are absolutely right to require evidence of impact to underpin investment in non-profits. They now reflect on whether asking non-profits to do this themselves may do more harm than good: they allege that non-profits are likely to spin the results, select those that support their cause and ignore data that may be less favourable.

And, indeed, why not? Non-profits are businesses. They operate in a competitive marketplace. They are often skilled advocates, able to make persuasive arguments, and they hold strong internal beliefs that what they are doing is right. Just like drug companies, governments, academics, and UN organizations, they will recruit data to advance their mission. Caroline and Ken point out one of the commonest methods of doing this, known as ‘selective reporting’, whereby an organization simply reports evidence of benefit and ignores harms or failures. As an academic researcher specializing in research synthesis, I am staggered by the degree to which researchers will spin their findings: they may withhold studies that do not conform to their beliefs, or tweak the analysis to present what they believe is right. Indeed, we ourselves have seen researchers withhold or delay publication of trials showing that community deworming programmes are ineffective[1]. So it doesn’t surprise me if non-profits adopt similar strategies which, in effect, can mislead philanthropic donors and the public.

‘The approach Caroline and Ken promote is sound.’

The approach Caroline and Ken promote is sound: non-profits should use the evidence base from independent research to assess whether what they are delivering is effective. They should understand, appraise and interpret evidence from carefully controlled studies, often synthesised in reliable, up-to-date systematic reviews. If the strategies work, the reviews may help identify the circumstances that ensure success, and the non-profit can then identify more proximal measures of implementation, such as coverage. This won’t guarantee impact, but it gives managers useful indicators to help them improve the programme’s performance[2].

Finally, non-profits need periodic independent evaluations carried out in a methodologically sound fashion using a variety of methods. Conflicts of interest should be made explicit and carefully managed. Evaluations should be publicly accessible, with clear methods and data to allow others to appraise the reliability of the findings. These measures, if adopted, would give people investing in not-for-profits the necessary information on their performance and, ultimately, their impact on people’s lives.

David Bonbright, Chief Executive, Keystone Accountability

I enjoyed Caroline Fiennes and Ken Berger’s argument. It is straightforward, clear and even elegant. Behind it one can feel the tempering of a thousand experiences. I would like to make three points in qualified support of their view, all of which are underpinned by the idea that when it comes to measures, it is use that matters most.[3]

My first point is that doing research and using it are inextricably related. The way you do research often determines whether the findings become more than dust catchers. The two major schools of reform within the evaluation world, known as ‘utilization-focused evaluation’ and ‘real-time evaluation’, rest in large part on this idea. They challenge researchers to design and conduct their work in ways that lend themselves to use.

Point two: to use evidence effectively to improve performance and outcomes, you must validate that evidence with frontline actors – staff and intended beneficiaries. This is not complicated, but is rarely done. You bring the findings to staff and beneficiaries and say, ‘Here is the evidence. We think it means X. What do you think? What shall we do in response to it?’ When organizations do this, several things happen: they get a deeper understanding of what the evidence means, they win the respect and commitment of staff and beneficiaries consulted and they radiate a collective understanding of the evidence into the ecosystem surrounding the organization.

‘When we learn, we improve. When we improve, we have greater impact.’

Which tees up point three: it’s all about learning! Evaluation matters when we learn from it.[4] When we learn, we improve. When we improve, we have greater impact. We want impact systems that produce and reward learners. Society solves tough problems when we collectively learn how to solve them.

Fiennes and Berger assert that a ‘much better future lies in moving responsibility for finding research and building tools to learn and adapt to independent specialists.’ I’m on board, as long as the impact gurus they are anointing embody these three points: taking use, frontline validation and learning seriously.

Tris Lumley, Director of Development, New Philanthropy Capital

Ken and Caroline have a point. But the view that non-profits should not be in the business of trying to measure their impact is much too simplistic for my taste. What we need instead is to bring everything being learned in research on complex systems into our approach to impact measurement.

‘What we need instead is to bring everything being learned in research on complex systems into our approach to impact measurement.’

The non-profit sector is an ecosystem of funders and investors, charities and beneficiaries. It has a flow of money and resources which enables work to be done. But flows of information are also needed to guide how resources should be allocated. That has been the thinking behind the impact movement from the beginning.

Less well appreciated is that different bits of the ecosystem might need to be responsible for different pieces of the impact jigsaw. Academic research may play a leading role in deciding what types of programme seem to work for different people in different contexts, but this won’t always be an exact fit. The experimental approach to research that works for medicine, for example, is not always well suited to the complex systems in which people and communities exist.

Instead of pushing ‘proven interventions’, academics might want to produce guidance on recommended practices: in these contexts, with these people, we recommend using these kinds of approaches. Funders and investors could use that research to decide what programmes they want to fund, taking advantage of their broad view across a sector. Charities could and should take more advantage of existing research, guidance on recommended practices and funders’ guidelines. They could also adopt a ‘design’ approach, gaining a detailed understanding of the lives of the people they aim to serve and developing approaches around ‘user journeys’ that reflect those realities.

Whilst we should be aware of the challenges that Caroline and Ken raise, we should not give up on charities doing research, especially research into areas of work that are genuinely novel and whose results are unforeseeable. If charities use external evaluators where appropriate, are transparent about their methodologies and encourage external audit of their work, we can at least start to address those challenges.

So the impact movement may not quite have gone wrong. We just haven’t been able to see the wood for the trees.


Footnotes

  1. ^ Taylor-Robinson DC, Maayan N, Soares-Weiser K, Donegan S, Garner P. Deworming drugs for soil-transmitted intestinal worms in children: effects on nutritional indicators, haemoglobin, and school performance. Cochrane Database of Systematic Reviews 2015, Issue 7. Art. No.: CD000371. DOI: 10.1002/14651858.CD000371.pub6.
  2. ^ Garner P. Do objectives of health-aid programmes impair their effectiveness? Lancet 1997; 349: 722-723.
  3. ^ I explore these and related thoughts on the use of impact evaluations in a guidance note on that subject for the Rockefeller Foundation and Interaction.
  4. ^ For a beautifully crafted case story of evaluation as learning, see ‘Striving for wholeness – an account of the development of a sovereign approach to evaluation’ by Sue Soal of CDRA.

Comments (3)

Morghan Vélez Young-Alfaro

Hi Paul, thanks for this article and its rich discussion. We partner with orgs and schools in Central California with a specific agenda: helping orgs establish internal practices for program assessment. However, we also partner on third-party evaluations/assessments with orgs and initiatives. Your discussion helped me reflect on the impetus for our work (sustainable program assessment practices, as much as humanly possible) in orgs that historically have not been able to do so for a myriad of reasons. We've shared your article on our Pinterest site as a "go to" tool, as well as on our website: http://www.anchoringsuccess.com/tools-and-resources/ Again, thanks. -Morghan


David Dinnage

A great debate, thanks to Alliance for running this theme. As someone who has run social organisations, my experience is that internal research for internal improvement is critical to success in your social mission. Taking all of that external takes it away from lived realities and reduces the interest in learning amongst your team. Of course external evidence and external evaluation are also important. But let's keep everyone involved in delivery hungry to know more in order to do better.


David Dinnage

Apologies, the comment above is from Cliff Prior, not from David Dinnage.

