In a recent annual report statement, ‘Creating an Online Information Marketplace for Giving Outcomes’, President of the Hewlett Foundation Paul Brest stressed the need to invest in creating information. This chimes well with the unsurprising message of this edition of Alliance – which is that funders need to get better at assessing the impact of the money they spend.
Yet it’s clearly impossible to have a separate, specifically funded evaluation for every grant. So when are evaluations needed, and how can funders be persuaded to pay for them? And how good are foundations at sharing information? Caroline Hartnell talked to Paul Brest about this and about his views on measuring impact more generally.
With the impact of foundation programmes in mind, what sort of information is most needed and who’s going to supply it?
The basic data has to come from the source, from those who are actually implementing projects, whether it’s foundations and their staff or non-profit organizations implementing the programme.
Ideally, we want to know what a particular programme or project seeks to accomplish, what the strategy or theory of change for reaching those goals is, and what evidence there is that they have actually achieved the goals – or, in the absence of that, evidence that they are making progress towards them.
Do you think foundations currently see this as part of what they ought to be funding their grantees to do?
Even foundations that care a lot about outcomes and impact can’t fund the evaluation of every single grant. It is beyond their capacity. I think, though, that most foundations have significantly underinvested in providing the infrastructure for doing this sort of evaluation.
When would you have an evaluation done? Does it depend on the size of the grant?
We can distinguish several situations. One is where the foundation is in effect funding social science research, where it’s not looking at its own grant but at a set of interventions to test a theory of change. For example, the Hewlett Foundation funded a three-year evaluation of the success of a particular type of charter school even though we don’t make grants to charter schools.
To take another example, we are about to take to our board a proposal for a very large grant for Pratham, an Indian NGO that will be doing an educational intervention in poor Indian regions. It will include funding for the evaluation, since there’s no point doing a demonstration project of that sort unless you build in means to find out whether the demonstration was successful.
At the other end of the spectrum, we make hundreds of grants where it’s not realistic to measure the outcome grant by grant. We’re doing work to reduce global warming, but we won’t know about its effect, if ever, until the people here are long gone, and even then we won’t know what aspect of climate change we helped offset and how much we contributed. In those cases, which are the large majority, what you need is a clear theory of change that says, for example, in order to reduce carbon dioxide emissions in coal-fired power plants in this region of the US or China, here are the steps that need to be taken. If you are taking those steps, it’s an indicator of progress even if it’s not a measure of ultimate outcome, and we expect that kind of indicator in any grant we make.
Deciding what level of reporting will be involved and whether there will be an evaluation is all part of negotiating a grant. And of course it affects the funding. If there is to be an external evaluation, we will include funding for it in the grant, or we might make a concurrent grant for the evaluation.
What can be done to get funders to include evaluations in their grants?
Funders are often so sure that the intervention they are supporting works that they don’t see any reason to put money into an evaluation. The problem is that intuitions that something works are often wrong. The only way to find out is through evaluation. Then again, even a funder who cares about evaluation can’t always afford it, so you have to choose when you’re going to do it.
To take a less charitable view, many philanthropists – though this is less true of the large foundations – really don’t care very much about impact. They care about the personal relationships that are created by giving in the community and the community’s appreciation when they are given a significant gift.
Do we have the infrastructure that’s needed to share information among funders?
There are certainly informal ways of sharing information. Some of the work done by foundations, when it’s published in peer-reviewed journals or appears in any form that Google can find, is available. But there is no systematic way of disseminating the data, and many foundations and non-profits just don’t bother – it’s not even available on their websites.
Is some sort of web-based system into which foundations would feed evaluation reports for others to use something we should try to create? And is anybody working on it?
Well, we’re making a small grant to an organization that is experimenting with putting all publications by foundations online. It’s not limited to evaluation data, but it would include that.
If foundations aren’t very willing to publish data which might reveal their failings, isn’t it even harder for non-profits because they’re looking for grants?
It is harder. On the other hand, foundations could get together and say, we are not going to make grants unless this information is available, which doesn’t seem very fair if foundations don’t do it themselves. And they are in the best position to become more open because they don’t depend on anyone for their money. I hope that a group of foundations – perhaps helped by organizations like the Center for Effective Philanthropy or the Foundation Strategy Group – will come together to try to change the culture. I haven’t seen a great deal of openness yet. Every now and then a foundation like Casey or us or Irvine will publish a report about something that went wrong – everyone will publish reports about things that went right – but I don’t think there’s enough to really affect the culture.
Do you think that the Center for Effective Philanthropy’s anonymous surveys of grantee perceptions will help to change the culture at all?
They help, but the truth is that grantee perception reports are not about impact. I don’t say there’s no relationship between grantee perceptions and impact, but it’s not, I think, a strong or obvious one. If you have bad relations with grantees, it probably isn’t conducive to effective outcomes, but you have to take the items in the Grantee Perception Report one by one.
What about Keystone’s suggestion that donors should require grantees to report on feedback from their beneficiaries?
I think that has similar benefits and similar qualifications. This is information that a funder should ask the organization to get for the sake of both of them. You learn an awful lot from whether the beneficiaries think they are benefiting or not benefiting. But it is not a substitute for hard data. It is well known, for example, that a community’s sense of whether there is a health problem can be quite distinct from professional, quantitative, rigorous epidemiology. So you can have stakeholders who believe that something has been done well or badly and they can be mistaken. The gold standard is always a good, well-defined evaluation. I would say the stakeholders’ views are valuable, and if you think you need to discount them, you can do that.
Do you see any danger that the need to measure might limit the innovativeness of the work that funders are willing to undertake?
It’s a possible danger, but not at the moment. My sense is that things would have to move a long way before it became a realistic problem. In fact, I look forward to the time when it’s a real problem rather than just a potential one!
Paul Brest is President of the William & Flora Hewlett Foundation. Email PBrest@hewlett.org