Most grantmakers don’t seem to know if they are effective

 

Caroline Fiennes


Is your foundation any good? I don’t mean: is it big, or does it give away a lot, or does it run an efficient process, or do the grantees achieve much. Rather, I mean: are you any good at being a foundation? Are you effective? Are you good at finding work that will succeed? Do your grantees think that you help them? Is progress towards your goals faster with your foundation’s involvement than it would be without you?

Few foundations seem to know. We suspect that few foundations do any analysis to answer these vital questions; certainly, few publish any such analysis. Yet it is perfectly possible to find out, and to identify where and how to improve. Giving Evidence knows this because we have looked at what foundations publish about their own effectiveness, and we have done these kinds of analyses for foundations.

Our research into what foundations publish about their effectiveness

Each year, the Foundation Practice Rating (FPR) assesses 100 UK-based charitable grantmaking foundations on their practices around diversity, accountability, and transparency. The research is done by Giving Evidence, which I run. For our ‘accountability’ criteria, we look at whether each foundation publishes any analysis of its effectiveness, what types of analysis it publishes, and whether it says what it will change as a result.

In the most recent year, only 16 of the 100 foundations were scored as publishing any analysis of their own effectiveness. This is remarkable given foundations’ enthusiasm for assessing other organisations, i.e. those which they support or might support.

For the Foundation Practice Rating, we count the following as analysis of a foundation’s effectiveness:

  • views of grantees and/or applicants, collected systematically. (We did not count ad hoc quotes or case studies published without a statement that all grantees/applicants were surveyed, because there is no way of knowing whether the foundation has cherry-picked only the most flattering examples);
  • analysis of the proportion of grants which at some level succeeded vs. those which did not; or
  • analysis of the costs created by the foundation’s funding processes and borne by grantees/applicants. Ideally this would be expressed as a proportion of the amount given, i.e. the net grant. This matters because clearly, if a foundation is a net drain on the sector it seeks to support, then it is not helping.

None of these is a perfect measure of a foundation’s effectiveness, but each gives a line of sight. By analogy, there is no single measure of the health of a nation’s economy, which is why economists use a whole raft of measures. If a grantmaker has none of the three types of analysis listed, we would argue that it cannot know whether it is doing well.

What if the foundation does this analysis but doesn’t publish it?

That’s clearly better than nothing, though there are two weaknesses. First, the foundation is not making itself accountable for its effectiveness. Second, nobody else can learn from it.

Indeed, because only 16 of 100 foundations publish anything on this, foundations can learn very little from each other’s public material about how to be effective and how and where to improve.

If foundations assess their effectiveness, does that shift even more power to funders and away from operating organisations?

No. First, as mentioned, we count systematic surveys of grantees. That doesn’t imply anything about who is in charge; indeed, a grantee survey would reveal whether grantees feel that power is stacked against them, and would show any consequent harms created by the funder. Second, we also give credit if a foundation reports the proportion of its grants that achieve whatever goals they are intended for. Those goals can be set entirely by the grantee.

The following do not count as analysis

  • A breakdown of where grants went, for example by geography or sector, is not analysis of effectiveness. Rather, it simply catalogues activities or inputs.
  • Nor is citing activities or outputs such as “76 volunteers have received training to help them provide support within their organisations”.
  • Reviews of individual programmes. People can learn from those, but they do not assess the foundation as a whole.
  • Case studies or anecdotes of feedback from grantees are not analysis because, again, there is no way of knowing whether they are representative.
  • Nor do we give credit for reports recounting grantees’ achievements – because those achievements might be despite their funders! This isn’t a rhetorical joke: there are real examples of funders whose laborious processes cost grantees more than the grant is worth. In those cases, the funder is detracting from grantees’ work.

What foundations do publish on this:

Systematic surveys of grantees are the most common type of analysis for which the FPR gave credit: around 12 foundations publish them. Examples include the survey published by The AB Charitable Trust.

Some surveys were run by external agencies; others appear to have been run by the foundations themselves. The analysis published by the Lloyds Bank Foundation for England and Wales is structured around eight lessons that it has learned through its work, including from systematically hearing from grantees, and describes how it is integrating those lessons.

None of the 100 foundations assessed published analysis of the proportion of grants which meet their goals.

Near-misses and honourable mentions

Two foundations published numbers for their ‘social return on investment’ but without any detail of the calculations, input data used, or what the numbers refer to. Disclosing those details might be helpful and insightful.

Cattanach cites a review of its “approach to evaluation and learning”, done by an external entity. We could not find that review published. However, the foundation does cite in its Annual Report some changes made as a result, for example adopting a lighter-touch approach to evaluation, focusing on ‘an evaluation approach that supports organisations in their own improvement process’ and generally using ‘evaluations as a tool to support grantee learning’.

The Co-op Foundation says that ‘95 percent of young people took an action after seeing our Lonely Not Alone campaign’. It cites no source, does not say which young people this refers to, and does not say whether the number relates to the funding provided. Without that, we do not know whether the number is unchanged from before the campaign – or even whether it has fallen!

Other examples come with no analysis of why the foundations have been able to achieve the claimed results, what the necessary conditions or resources are, or what it all cost. So the scope for other foundations to learn from them is pretty limited.

One of the strangest statements of ‘impact’ that we’ve seen for a while is this, from the Hunter Foundation: ‘It is almost satirical that despite years of [our] investment in enterprise education and entrepreneurship, Scotland, thanks to our own Centre’s research, has not moved one jot in becoming more entrepreneurial in the past decade. We only console ourselves that the millions of pounds we’ve invested in enterprise education alongside Government and others maybe stemmed a possible reduction in entrepreneurial activity rather than just helping to maintain the status quo.’

How foundations can assess their effectiveness

The three methods listed above are all perfectly possible. In order, those are:

  1. Systematic collection of feedback from grantees, covering all grantees. Examples are given above, and it is what the Grantee Perception Reports have gathered for years.
  2. Analysis of the proportion of grants which (at some level) succeeded vs. those which did not. The Shell Foundation did this in a report to mark its first 10 years. Giving Evidence has done this analysis for the ADM Capital Foundation, based in Hong Kong. Another UK-based foundation did this analysis – and compared the eventual success of work that it funded to that of work that it rejected… and actually found no difference at all (!)
  3. Analysis of the costs created by the foundation’s funding processes and borne by grantees/applicants. This is the simplest of the three to do; a minimal illustration of the arithmetic is sketched after this list. Giving Evidence knows of a UK-based foundation which has done it but never published it.
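
To make that third analysis concrete, here is a minimal sketch of the underlying arithmetic in Python. Everything in it is hypothetical and purely illustrative: the function name, the cost categories, and the figures are not drawn from any foundation’s data. A real exercise would gather grantees’ and applicants’ actual time and money costs, including the costs of unsuccessful applications, and aggregate them across a whole portfolio.

```python
# Minimal, purely illustrative sketch of a 'net grant' calculation.
# All names and figures below are hypothetical; they are not drawn from any foundation's data.

def net_grant(grant_amount: float, applicant_costs: float, grantee_costs: float) -> dict:
    """Return the net grant and the process costs as a share of the amount given.

    grant_amount    -- the sum awarded
    applicant_costs -- estimated cost to the applicant of securing the grant
    grantee_costs   -- estimated cost of meeting the funder's reporting/monitoring requirements
    """
    total_costs = applicant_costs + grantee_costs
    return {
        "net_grant": grant_amount - total_costs,     # what the grantee effectively receives
        "cost_ratio": total_costs / grant_amount,    # process costs as a proportion of the grant
    }


if __name__ == "__main__":
    # Hypothetical example: a £50,000 grant that cost £6,000 to win and £4,000 to report on.
    result = net_grant(50_000, applicant_costs=6_000, grantee_costs=4_000)
    print(f"Net grant: £{result['net_grant']:,.0f}")                   # Net grant: £40,000
    print(f"Process costs: {result['cost_ratio']:.0%} of the grant")   # Process costs: 20% of the grant
```

In practice, a funder would sum these costs across all applicants, successful and not, and compare the total with the amount actually granted; that comparison is what shows whether the funding process is a net drain on the sector it is meant to support.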

“Learn as if you were to live forever” – Gandhi

All foundations could usefully do these analyses, in order to learn where and how to improve. We are happy to discuss with any foundation how to do this.

Caroline Fiennes is the founder of Giving Evidence.

Tagged in: Funding practice

