Foundation staff and board members should embrace learning not for its own sake but as a means towards improved performance and greater positive impact. It is easy to endorse the rhetoric of the learning organization, particularly when the learning is focused on what grantees might do differently or on the nuances of the issues foundations seek to address.
It’s much tougher, though, to turn the lens inward and focus on learning how the foundation’s performance compares to others and what it can do differently, and better. As a result, this kind of learning too often simply doesn’t happen.
One trustee of a major international foundation told me that board meetings were ‘wonderful, educational opportunities – like watching Nova’ [PBS’s acclaimed science television series]. A vice president for programs of another large international grantmaker with a particularly high-powered board told me: ‘Our board meetings frequently veer from the issues that relate directly to the foundation into international issues that board members simply enjoy debating.’
In both cases, the boards probably learned a lot, but the learning did not lead to more informed decisions. These boards accessed precious little data that they could use to assess the relative performance of the foundation, or, for that matter, of the CEO.
While it is essential that foundation staff and board members be knowledgeable about their areas of focus, it is easy to see how the pursuit of learning can become an exercise in self-indulgence. With few external forces influencing them, foundations must exercise considerable discipline to commit to learning about their own performance by asking the simple question: how are we doing?
No easy answers
Part of the problem is that the question turns out not to be so simply answered.
In theory, most foundation leaders agree that they’d like to know the positive impact they created relative to the resources they expended. It’s a simple ratio. If only we could arrive at a quantification of impact created. But that’s impossible for most foundations. Individual grants often represent a fraction of a project or grantee budget and are made across initiatives and programmes for which there is no common unit of social impact measurement. At the overwhelming majority of grantmaking foundations (perhaps all of them) it is not possible to calculate overall foundation impact, or social return on investment, or whatever we choose to call it.
Moreover, in the absence of other measures, administrative expense, which is easily quantified and compared, threatens to become the de facto universal performance measure. I have seen many board members, frustrated by the staff’s inability to provide data on foundation performance, seize on the tangible and insist on the lowest possible administrative cost ratio. This can be damaging, leading to the elimination of valuable foundation work simply because no data exists to demonstrate what the administrative spending is achieving.
So, what to do? If precise and irrefutable evidence of impact relative to resources expended does not exist, should we give up altogether?
That would surely be the easiest path, because it frees staff and board from tough questions about whether they might be able to do better. Fortunately, many foundations are embracing what we at the Center for Effective Philanthropy (CEP) refer to as ‘indicators of effectiveness’. While we cannot easily measure impact, we can develop indicators that show how likely foundations are to be effective. These are drawn from a variety of comparative data sources, from grantee and expert perceptions to data on board functioning to more traditional evaluation data. Taken together, these comparative indicators allow foundations to understand their relative strengths and weaknesses. One tool we have developed, the Grantee Perception Report® (GPR), draws on the perspectives of grantees to help foundations understand how they are perceived on myriad dimensions – from the helpfulness of non-monetary assistance, to responsiveness, to impact on their fields or communities of funding – compared to other foundations.
The GPR is a powerful tool because it ensures confidentiality of individual grantee responses. Most importantly, it places grantee perceptions in a comparative context. Grantees tend to rate the foundations that fund them towards the high end of an absolute scale, no matter how cynical or critical they may be about foundations in general. As a result, comparative data is essential if a foundation is to understand that, on some dimensions, a rating of 5 on a 1-7 scale is actually quite low.
Learning and changing
Since 2003, when we launched the GPR, more than 100 foundations have commissioned the report, including seven of the ten largest US foundations. The results are typically carefully considered by boards, senior staff and programme staff, and have frequently led to significant change, including refocusing grantmaking priorities, clarifying goals, overhauling selection and reporting processes, and replacing key staff. Many foundations have been quite public about what they learned and what they are doing about it. Fifteen foundations – including the David and Lucile Packard, Charles Stewart Mott, McKnight, and George Gund Foundations – have even made public some or all of the GPR itself.
The GPR has led foundations to reconsider assumptions, address weaknesses and build on strengths. Elizabeth Smith, CEO of Hyams Foundation in Boston – to choose one of many possible examples – sent grantees a four-page letter describing what the Foundation learned from the GPR and what it planned to do about it. In one passage, she wrote:
‘Our grantees believe that Hyams has a deep understanding of the populations they serve. While they also rated the Foundation’s impact on their own organizations very highly, Hyams’ rating on this criterion was slightly below the median. Based on the survey, we learned that Hyams staff provide more assistance “beyond the grant check” than other foundations in the CEP data set. Assisting grantees in accessing other sources of funding was seen as especially valuable. Based in part on this feedback, we are interacting even more with our grantees by making fewer and larger multi-year grants in several of our program strategy areas.’
This kind of thoughtful change may not generate headlines, but it leads to more productive relationships between foundations and their grantees – and to stronger grantee organizations that are better positioned to achieve the impact their funders are seeking. Already, many foundations are repeating the process to gauge improvement, and a number have made significant strides in the eyes of their grantees.
It isn’t easy. We have seen the shock that can accompany a realization that, for example, a foundation that prided itself on the assistance it offers to grantees is providing assistance that is rated much less positively than that provided by other foundations. We have learned that foundations need strong leadership, thoughtful sequencing of responses, and time to act on results. But we have been inspired by how many foundations have risen to the challenge.
Most people want to know how they are doing, and then they want to figure out how to improve based on that knowledge. There are bumps along the road, but nearly all foundations that receive GPRs eventually make changes as a result – 97 per cent, according to a recent survey.
Our data sets have also shed light on hotly debated issues and challenged some widely held assumptions. It turns out, for example, that grantees value three key dimensions in their foundation funders more than the size or type of funding they receive: interactions with foundation staff; clarity of communication of foundation goals and strategies; and foundation expertise and external orientation. These are the best predictors of grantee satisfaction with their funders – and of good ratings for foundation impact. Having now surveyed more than 20,000 grantee organizations, we can understand the grantee viewpoint from more than just an anecdotal perspective.
No one data set, tool or organization will offer all the answers. But there is reason to be optimistic that foundations are increasingly seeing their learning efforts as opportunities to improve, drawing on comparative data to put their results in perspective, and making real changes.
Learning for its own sake is wonderful, but that’s what schools and universities are for. Foundation staff and board members have a responsibility to use their learning to inform improvement in performance.
1 In the interest of full disclosure, it should be noted that Packard and Mott are grant funders of CEP – with annual levels of support of $75,000 and $50,000, respectively.
Phil Buchanan is Executive Director of CEP. Email firstname.lastname@example.org
The Center for Effective Philanthropy
CEP is a non-profit organization that provides comparative data to enable higher foundation performance.
For more information, visit http://www.effectivephilanthropy.org and click on Publications. In particular, see Indicators of Effectiveness: Understanding and improving foundation performance (2002) and Listening to Grantees: What nonprofits value in their foundation funders (2004).