Interview – Paul Brest, Jed Emerson, Katherina Rosqueta, Brian Trelstad and Michael Weinstein

One of the most perennially vexing questions in philanthropy is how to assess the impact of funding, especially where there’s no obvious way of putting a price on the end product. A recently published paper on Measuring and/or Estimating Social Value Creation, written by Melinda Tuan and commissioned by the Bill & Melinda Gates Foundation, features different models adopted for this purpose by a number of US foundations.

Alliance talked to representatives of some of these foundations about the strengths and weaknesses of their approaches, what they had learned from the research, and what they see as the next steps for the field. Fay Twersky (Gates Foundation) comments.

At one level, the different foundations are doing very similar things. As Katherina Rosqueta of the Center for High Impact Philanthropy remarks, ‘all of the approaches, including SROI [social return on investment], employ the same fundamental ratio: impact/cost or cost/impact. We all recognized that.’ Within this broad approach, however, there are some interesting nuances.

Acumen Fund’s BACO analysis

The Acumen Fund does what it calls a BACO analysis, explains Brian Trelstad, which ‘compares the net outputs over time of our investment (and its net cost or net income) with the “best available charitable option”, a prevalent comparable that provides us with as close as we can get to an apples to apples comparison of what else our donors could “buy” on the philanthropic marketplace for the same amount of philanthropy.’

Strengths and weaknesses? On the credit side, says Trelstad, the analysis ‘is fairly simple and intuitive. The team can conduct a BACO in a very reasonable amount of time and the “marginal mindset” really forces them to think about whether our investment is the best use of limited philanthropy.’ On the other hand, he admits, ‘the comparisons can be arbitrary … it doesn’t calibrate for quality, so you have to do a quality adjusted BACO’ and it doesn’t ‘discount the future value of social output. If we invest in a business that sustainably generates clean water in perpetuity, we only capture the outputs over the life of our 5 to 7 year investment.’
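To make the mechanics concrete, here is a minimal Python sketch of a BACO-style comparison. It is not Acumen’s actual model: the clean-water figures are invented, and it bolts on the quality adjustment and discounting that Trelstad says a plain BACO lacks.

```python
# Illustrative sketch of a BACO-style comparison, not Acumen's actual model.
# All figures are hypothetical; the quality adjustment and the discounting
# step are the refinements Trelstad notes a plain BACO omits.

def discounted_output(annual_output: float, years: int, rate: float = 0.05) -> float:
    """Total social output over `years`, discounted back to the present at `rate`."""
    return sum(annual_output / (1 + rate) ** t for t in range(1, years + 1))

def cost_per_output(net_cost: float, total_output: float, quality: float = 1.0) -> float:
    """Net philanthropic cost per unit of quality-adjusted output (lower is better)."""
    return net_cost / (total_output * quality)

# Hypothetical investment: a clean-water business serving 10,000 people a year
# over a 7-year investment horizon, at a net philanthropic cost of $500,000.
investment = cost_per_output(500_000, discounted_output(10_000, years=7))

# Hypothetical "best available charitable option": a grant-funded programme
# reaching 8,000 people a year over the same period for $600,000.
baco = cost_per_output(600_000, discounted_output(8_000, years=7))

print(f"Investment: ${investment:,.2f} per person served")
print(f"BACO:       ${baco:,.2f} per person served")
print("Investment beats the BACO" if investment < baco else "The BACO wins")
```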

Center for High Impact Philanthropy favours simplicity

The chief virtue of the Center for High Impact Philanthropy’s approach, says Katherina Rosqueta, is its simplicity: ‘We divide the philanthropic capital required to obtain a given impact by the incremental impact expected. We call this “cost per impact”. Examples would be “$1,000 per child life saved” or “$30,000 – $250,000 per additional on-time high school graduate”.’ One of the reasons for this simplicity, she acknowledges – touching on one of the great sticking points in this whole area – is that ‘when we looked at the state of available evidence and data to be used in linking cost and impact, we found it was pretty thin’.
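As a back-of-envelope illustration of that ratio (the funding figure and lives-saved estimate below are hypothetical, not the Center’s data), the calculation reduces to a single division:

```python
# Minimal sketch of the Center's "cost per impact" ratio as described above.
# The funding amount and lives-saved estimate are hypothetical.

def cost_per_impact(philanthropic_capital: float, incremental_impact: float) -> float:
    """Philanthropic capital required divided by the incremental impact expected."""
    return philanthropic_capital / incremental_impact

# e.g. $2m of funding expected to save an additional 2,000 child lives
print(f"${cost_per_impact(2_000_000, 2_000):,.0f} per child life saved")
```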

Other reasons for its simplicity are the need to produce something that works across the very different contexts in which the Center operates (which are ‘as different as US education and global public health’) and the need for ‘an approach that requires minimal incremental data collection on the part of non-profits and that can be easily grasped by individual philanthropists. When we tested it, our approach worked on both counts.’

The chief drawback, she says, is that it involves ‘focusing on one primary impact. That limits comparability to those programmes or activities that have set out to achieve the same primary goal. It also means that important secondary or related impacts are not captured.’

Robin Hood’s monetization approach

In an effort to get round the first of these difficulties, the Robin Hood Foundation uses what Michael Weinstein calls a ‘monetization approach’: ‘to compare the value of differently targeted grants, we translate impacts, no matter what form they take, into dollars.’ Understandably, he admits, some non-profits choose to make no such comparisons, leaving different types of grants to be judged differently. ‘Donors can invoke a cost-effectiveness standard: choosing grantees which minimize the cost per success, where success is limited to a common outcome, for example training unemployed individuals to become certified nurse practitioners. Such cost-effectiveness standards offer the advantage of simplicity and reliability, but they can’t guide allocations across programmes of different types.’

The benefit-cost procedure, he says, ‘provides a disciplined, transparent decision-making process. The procedure takes explicit account of counterfactuals – what staff assume would have happened to participants in the absence of interventions funded by Robin Hood.’
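A minimal sketch of what such a monetized benefit-cost calculation with an explicit counterfactual might look like follows; the grant and earnings figures are invented, and Robin Hood’s actual procedure is considerably more involved.

```python
# Illustrative benefit-cost sketch with an explicit counterfactual.
# All dollar figures are hypothetical, not Robin Hood's numbers.

def benefit_cost_ratio(monetized_outcome: float,
                       counterfactual_outcome: float,
                       grant_cost: float) -> float:
    """Dollar value created beyond what would have happened anyway, per grant dollar."""
    return (monetized_outcome - counterfactual_outcome) / grant_cost

# e.g. a $250,000 job-training grant whose participants are estimated to earn
# $1.2m in extra lifetime wages, versus $800,000 assumed absent the programme.
ratio = benefit_cost_ratio(1_200_000, 800_000, 250_000)
print(f"Benefit-cost ratio: {ratio:.1f}:1")
```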

Of course, as he admits, ‘the process requires substantial guesswork, and therefore involves substantial imprecision.’

The Hewlett approach: logic models and measurable outcomes

Paul Brest of the Hewlett Foundation points to the same problem when he says that ‘most of our philanthropic goals are highly complex, so there is real risk that the logic and assumptions behind some of our estimates are wrong’. Hewlett’s approach is nevertheless for programmes to ‘include measurable outcomes and targets in their goals, and to have explicit logic models that explain how grants and clusters of grants are intended to contribute to the targets’. Brest believes that ‘the use of measurable goals and targets and an expected return mindset helps programme staff make explicit tradeoffs in grantmaking, which has been particularly valuable in a time of diminished resources.’

‘A major convergence in approaches’

So much for the pros and cons of each of these approaches. What did Gates’ nominated experts gain from looking at each other’s homework and discovering how they go about ‘comparing apples with oranges’, as Michael Weinstein puts it?

For most of them, there is a good deal of excitement and optimism produced by the discovery that, as Paul Brest points out, ‘there is major convergence in approaches among those thinking hardest about these issues. This seems to imply,’ he goes on, ‘that we are collectively pursuing something useful (or perhaps participants in a collective folly).’ He believes that starting the process of analysing programme strategies may be more important than the early results it produces. ‘Having explicit logic and assumptions is allowing us to improve results steadily with a sort of “successive approximation”,’ which, he suggests, ‘may be the only way to tackle some of the truly complicated social issues that we work on.’

For Katherina Rosqueta, the single most exciting discovery is that the Center for High Impact Philanthropy ‘has some very thoughtful partners in moving the field towards better evidence and a more thoughtful approach that goes beyond counting inputs to linking costs to impact’.

Brian Trelstad cites two things about the process that excite him: ‘Most exciting to me was the convergence around what we came to call “cost-effective cost-effectiveness”. The core insights of cost-effectiveness evaluation endure, but those of us without the resources to conduct thorough cost-effectiveness studies for our investments can still benefit from the mental constructs through these very light-touch (cost-effective) ways of building that thinking into how we make decisions.’

The second exciting thing was to get feedback from peers on ‘what they think works and what doesn’t work’. He admits to being ‘sort of hung up on some seemingly unsolvable methodological limitations that others have got past (eg how to account for the future value of output) while they might be ignoring simple things to make the tool more robust (eg recalculating on an annual basis and watching how things trend). Not letting the perfect be the enemy of the good in esteemed company has given us more confidence to keep pushing on these tools.’

SROI coming of age

The fact that such approaches are at last gaining ground is the most exciting thing for Jed Emerson of Uhuru, who acted as adviser to the project and was the architect of the SROI concept over a decade ago when working with REDF (Roberts Enterprise Development Fund). ‘The idea that one should attempt to track the performance of social capital as one measure of impact has gained wide acceptance. While the specific details of the framework and how it differs from traditional cost-benefit analysis are still in the process of being assessed and explored, there can be no debate that such an analysis can be undertaken and helps inform discussions regarding both how best to allocate and how best to assess the performance of capital invested in social impact.

‘The second exciting aspect of the evolution of this work,’ he goes on, ‘is to see how various people from within various cultural/political contexts are exploring how best to apply such an approach for their own conditions.’

The next step?

If there is an emerging general movement towards social value measurement, as Jed Emerson implies, what should that movement do next and what is getting in the way?

Paul Brest believes that ‘it may be time to explore how shared (or overlapping) goals, logic models and targets could allow a group of foundations to achieve more than the sum of their individual efforts. This might be demonstrated within a relatively measurable field (eg environmental sustainability in the western US). For example,’ he suggests, ‘we could create and document an experiment in which a group of foundations would consider these tools collectively, and report on whether synergies result.’

Beyond BACOs to broader data sets …

Brian Trelstad of Acumen wants to ‘migrate from a world where portfolio managers conduct bespoke BACOs to one where there is a large enough and reliable enough data set for our teams to run the numbers against a relevant peer set. The BACO would then give way to peer benchmarks.’ He sees the biggest obstacle as ‘agreeing on standard definitions (when is a job a job?) and being willing to create an information regime that encourages real transparency.’ This, he observes, is not so much a technical challenge as an organizational one.

Katherina Rosqueta recommends working ‘towards domain- or sector-specific standards for cost accounting and impact metrics’ and suggests, ‘in areas where the evidence base is thin, keep it simple’. She concludes with a caution and a recommendation: ‘All of these measurements or estimates are only as good as the assumptions and data we put into them. More and better impact assessments – especially in domestic, social service programmes – coupled with standardizing cost data and impact metrics, would go a long way to giving the field reasonable benchmarks with which to make decisions.’

… and better counterfactuals

Michael Weinstein has two recommendations: ‘Find creative ways to use existing national data sets to generate better estimates of the poverty-fighting impacts of policy interventions and create better counterfactuals: baseline economic and other outcomes for individuals in the absence of charity-driven interventions.’ He sees the chief enemy of progress as the high cost of randomized control-group experiments which, for him, are ‘the most reliable means by which to estimate the impact of policy interventions’. On the other hand, the next best thing – ‘translating short-term, observable impacts into estimates of long-term impacts’ – requires ‘a rich experimental literature’.

Beyond SROI as a single number

For Jed Emerson, ‘the next step is to continue exploring SROI not as a single number, but as an integrated set of metrics which together paint a more complete picture of total performance.’ When exploring the ‘quantitative representation of qualitative value’, he says, we too often ‘seek a single number to capture all measures of worth’.

One of the challenges in conducting an SROI analysis, says Emerson, ‘is simply that one is trying to simultaneously capture value at a number of levels – financial, social and, in some cases, environmental. While parts of the SROI framework lend themselves to being represented in a single number, other aspects of return have to be presented in language or in qualitative terms which, when viewed together with quantitative, numeric analysis, give one the full measure of social return. This should be kept in mind as practitioners work to create their own frameworks for assessing social return on investment.’

What stands in the way?

In Paul Brest’s experience, one of the greatest obstacles is ‘the perception by some programme officers that explicit use of logic models, measures and targets limits their ability to fund unexpected opportunities and may not take advantage of their expert intuitions as grantmakers’. Hewlett attempts to address these concerns, he says, by continuing to encourage pools of funding for unexpected opportunities and by emphasizing that the models are intended to inform expert intuition, not override it. ‘We acknowledge that at least the first generation of this approach is going to be flawed, and allow programme officers to override “the numbers” when they feel that the process is getting the wrong answer.’

For Jed Emerson, what stands in the way is ‘quite simply the limits of our sense of imagination which we, collectively, bring to this task. Too many people begin the conversation from where they are as opposed to where they would like to end up. If you are on an existing path, and you look forward, what you are most likely to see is the continuation of that path. But if you are on a path and you look up and beyond, often you are better able to refocus on your ultimate goal, and see how other paths connect with the one you are on and how your own path may not in fact be the best one after all.

‘As I’ve listened to the emerging debates around SROI over more than a decade, I can only sit back at times and smile at how passionately people seem to be able to fight for their path – as opposed to how excited we should all be when our paths help take us further up this mountain.’


Comment – Fay Twersky

Many years before I joined the Gates Foundation, I was part of the team that created the REDF SROI approach along with Jed Emerson, Melinda Tuan and practitioners in the REDF portfolio. The field has come a long way since then.

But there are still a number of key ingredients missing if we want to see wider uptake of common measurement approaches, as Melinda’s paper makes clear. First is a common language. All the methodologies profiled in the paper use basic terminology differently. One person’s impact is another person’s output. For the purpose of the paper, we converted all these into a common set of terms, but this language divergence is a challenge for the field, because a lot of meaning is lost in translation.

Another key challenge is the quality of data to populate the various social value creation calculations. The calculations are only as good as the data you put into them. Inconsistency in measures and inconsistent data quality make the calculations less reliable and less helpful than we really need them to be if they are to inform decision-making.

But we need to be wary about relying only on the data calculations, however good. As Jed points out, a simple numeric ‘answer’ is never enough. It can lead to a sense of false precision and misleading reductionism. The numeric calculation can only be part of what informs sound professional judgement about the merits of an approach to creating social value. The discipline of the measurement process is in some ways as important as the measurement itself.

Another thing that is vitally important, and not always done, is to measure how our assumptions play out over time. Some of the methodologies are simply prospective. They help target investments that we estimate will be significant and transformative. While that is useful in terms of informing how resources may be best targeted for impact, it is critical to measure to ensure that those assumptions are actually borne out. Such real-time measurement can inform course correction and also make subsequent investment decisions more realistic.

At the Bill & Melinda Gates Foundation, we have not adopted one single approach to calculating social value creation across the foundation. We are designing our measurement processes to be directly focused on our theories of change, to measure milestones of progress and targeted outcomes and impacts. Our aim is to generate data that are ‘actionable’. As one of my colleagues said to me recently, ‘data are perishable goods’. It is imperative that we spend the time designing, collecting, digesting and indeed using data in a timely way to help us make more informed and ultimately wiser decisions.

We have learned a great deal from our colleagues profiled in this paper. We expect to continue that learning as we work with our partners to evolve our strategies, understanding what works and what doesn’t, and how we might improve our approaches to achieving long-term, sustainable impact.

Alliance would like to thank the following for contributing to this article:

Paul Brest President, Hewlett Foundation
Jed Emerson Managing Director of Integrated Performance, Uhuru Capital Management
Katherina Rosqueta Executive Director, Center for High Impact Philanthropy
Brian Trelstad Chief Investment Officer, Acumen Fund
Fay Twersky Director of Impact Planning and Improvement, Bill & Melinda Gates Foundation
Michael Weinstein Chief Program Officer, Robin Hood Foundation, New York

For more information
http://www.gatesfoundation.org/learning/Documents/WWL-report-measuring-estimating-social-value-creation.pdf
http://www.gatesfoundation.org/learning/Documents/WWL-profiles-eight-integrated-cost-approaches.pdf

