The difficulty and expense of measuring the impact of small grants is often seen as a drawback to making such grants at all. Can measurement systems be developed that do not overburden grantees and whose costs are not totally disproportionate to the size of the grants? The Global Fund for Children (GFC), a global grassroots grantmaker and one of a growing number of intermediary grantmakers, has recently developed a set of metrics to better measure and assess its impact. Their experience may begin to answer this question.
In 2004, GFC reached the landmark of awarding over $1 million in grants annually. Until then, we had focused on collecting data and indicators that related mostly to the practice and process of making grants, such as the number of grants awarded, total funding, average grant size and number of grantee partners. A small grants programme is often – pardon the pun – under the microscope. Many question the impact, let alone scale, such a programme can achieve and how that impact can be measured without undue expense. Moreover, what level of evaluation capacity can be expected of emerging community-based grantees?
We believe that small grants funders have a critical role in building and strengthening civil society and fuelling a pipeline for larger institutional grantmakers. This early support and higher-risk investment allows emerging NGOs to flourish. None of the data we were collecting demonstrated this. It was time to take monitoring and evaluation to the next level.
No numbers without a story; no story without numbers
Over an 18-month period, we developed a logic framework, mapped outcome indicators, and built a database to capture and present impact. This framework consists of seven indicators that provide a holistic evaluation of our own grantmaking effectiveness, as well as our grantee partners’ organizational and programmatic impact.
Such a metrics model needs to carefully balance the cost and scope of the framework against the maintenance of quality and rigour. GFC drew upon the best ideas and practices from the fields of philanthropy, social science, evaluation and business. The underlying premise of the framework is: no numbers without a story; no story without numbers. Qualitative and quantitative data both have well-rehearsed shortcomings. A good metrics framework will therefore include both.
Lessons drawn from implementation
In making the case for the impact of small grants, our metrics framework focuses on measuring and demonstrating the two key aspects of our work: our own effectiveness as a grantmaker and the organizational and programme effectiveness of our grantee partners. Our three key areas of inquiry are grantmaking effectiveness, capacity-building and programme effectiveness. From the first 18 months of implementation, we are seeing a number of trends.
The hidden benefits of the OCI tool
We developed the Organizational Capacity Index (OCI) tool to assess the organizational development stage of our grantee partners at the beginning of our funding and over time (in respect of governance, planning and financial management). The tool – which classifies a grantee as nascent, emerging, developing, strengthening or thriving – allows us to see that we are engaging with an organization at the appropriately early stage for our model of grantmaking. We anticipate that the OCI tool will allow us to measure whether an organization is developing along that continuum. This will in turn help to demonstrate the role and value of the grantmaking model.
As with our other metrics, the OCI tool plays a de facto capacity-building role. It was developed to be a rapid self-assessment of our grantees’ capacity. Many have evolved a more focused direction and vision for their organizations following its use. But the biggest surprise was when a grantee, Rescue Alternatives Liberia, used its OCI results to identify its areas for improvement and incorporated this insight to request and secure funding from another grantmaker in order to develop its first ever strategic plan.
The importance of leverage
Despite the best intentions, too few grantmakers document, track and analyse their efforts to secure funding and visibility for their partners. By contrast, our database includes fields for specific leverage inputs, such as a written recommendation of one of our grantees to a new funder, and for the results, such as a $10,000 grant award by that funder.
A dramatic example was our recommendation of Shidhulai Swanirvar Sangstha (Bangladesh) for the Gates Foundation Access to Learning Award, leading to a $1 million grant. Shidhulai has become a formidable player on the international development scene, and now receives funding and awards from some of the largest foundations and international actors. Our funding relationship with Shidhulai began when its budget was less than $50,000. (Pictured above and below are some of Shidhulai Swanirvar Sangstha’s boat schools.)
The truth about counting: ‘it depends’
Many of our stakeholders ask us about our reach. ‘How many children are you serving?’ they ask. The figures we cite, far too often, either astound or disappoint. And that is because the answer is simply ‘it depends’. Realistically, counting children is subjective, and a particularly vexing question for our limited-capacity grantee partners. Does one want to know, for example, the number of children served by the Ethiopian Books for Children and Education Foundation, a community library programme in Ethiopia, or the number of visits to their libraries? The latter is far more easily captured, though likely not as meaningful.
Several of our stakeholders ask us to aggregate the number of children served, but it’s not as straightforward as simply adding 25 high school scholarships, 100 after-school tutoring recipients, 300 trafficked children in shelters, etc. GFC now asks its partners to report numbers of children directly and indirectly served, though this distinction is also fraught with complexity. With the first metrics data under our belts, we are beginning to consider which is more important for our model of grantmaking – the actual reported numbers our partners served or the way in which that number is understood and determined.
Measuring programme effectiveness
At the outset, we expected our emphasis to be on collecting raw numbers and data. Increasingly we are learning that the more important questions for our grantees are: what are you trying to collect and how are you collecting it? This leads us to talking about their expectations of their programmes, the many nuances of the figures, and the accuracy and specificity of the data points.
Under our Learning portfolio, which supports education programmes, for example, improved or high exam marks are a typical metric of effectiveness for a tutoring programme for orphans. The grantee’s thinking behind both the metric and its collection is now a critical part of GFC’s metrics work, and points to an important capacity-building role for small grants funders and intermediaries.
The Global Fund for Children is keenly aware of the costs of implementing and collecting metrics for our grants. There is much pressure to demonstrate the value of small grants with a level of rigour that is commensurate neither with the grant investment nor with the recipient organization’s capacity. In order to maximize and fully leverage the overall process, we have shaped our metrics work to be a learning exercise for ourselves and our grantees. If the costs of implementing a metrics system for a small grants programme look high, the grantee’s increased capacity to collect and evaluate data must be included among the direct benefits.
While only time will tell if our framework provides accurate and actionable measurements, the insight and lessons learned from the development and implementation are proving highly beneficial. We will continue to refine the framework and data collection methodology, while actively engaging our grantees to address new insights. Longer-term collection of data will provide a fuller sense of each metric’s value as well as practical learning to guide additional refinements to the framework.
Maya Ajmera is president of the Global Fund for Children. Email email@example.com
Victoria Dunning is GFC vice president of programs. Email firstname.lastname@example.org
Global Fund for Children: a global, grassroots grantmaking model
GFC’s vision is that all children grow up to be productive, caring citizens of a global society. To achieve this, GFC makes small grants, ranging from $5,000 to $20,000, to innovative community-based organizations that serve the world’s poorest children and young people. To date, it has awarded over $15 million to 373 organizations in 72 countries. It awards nearly $4 million in grants annually, and has a complementary media ventures programme, which harnesses the power of books, films and photography to promote global understanding.
The grantmaking model is based on strategically investing early in an organization’s development. GFC typically engages with grantees over a period of three to eight years. In addition, it provides them with a final grant of $25,000 for investment in their long-term sustainability. Finally, GFC offers its former grantees $1,000 tracking grants in exchange for data on status, growth and organizational health. Tangible tracking measurements include Ashoka Fellowships, $1 million-plus budgets and successful leadership transitions.
What David could teach Goliath…
Mark Kramer and Hallie Preskill
There is much to applaud in Maya Ajmera’s description of the Global Fund for Children’s approach to evaluation. The constraints of evaluating extremely small grants have led them to discover several lessons that apply to the evaluation of grants at any size.
First, GFC’s philosophy of ‘no numbers without a story, no story without numbers’ is an elegant articulation of the need for mixed methods. Numbers quantify outputs or demonstrate differences and correlations between two or more variables, but they cannot show causation. Stories and other qualitative data, collected and analysed rigorously, do a much better job of explaining why and how a particular result occurred, but stories alone can be imprecise or biased, and therefore can mislead as well. The principle that qualitative and quantitative methods must always be used in combination, therefore, is excellent guidance for any evaluation design.
A second universal lesson is that evaluation efforts must be proportional to the size of the grant. The small size of its grants has obliged GFC to develop simple, quick methods for gathering basic information that can help assess the strength of its grantees. Again, keeping evaluation efforts in proportion is always good advice.
An even more important consequence of this sense of proportion is that it immediately removes the quest for attribution. One would never claim that a $5,000 grant was the primary cause of any significant outcome, so costly and time-consuming quasi-experimental or experimental designs are not to be considered. Freed from this burden, GFC is able to concentrate its evaluation efforts on two simple and useful questions: is the organization performing well both in delivering on its mission and in its organizational development? And is GFC making wise choices in the selection of grantees?
Shidhulai Swanirvar Sangstha is cited as an example of success, having grown from an annual budget of $60,000 to a multi-million-dollar budget. Surely GFC would not claim that its grants of $5,000 to $25,000 ‘caused’ the organization to achieve such impressive growth? GFC can learn, however, that it successfully identified an organization at an early stage that had great potential. By studying the factors that influenced its selection, GFC can learn useful lessons about how to identify other small organizations with similar potential.
GFC is also learning to improve its grantmaking in other ways. The description of its ongoing search for the most informative measures of programme effectiveness, in partnership with each of its grantees, has sharpened the awareness of both parties about the ways they achieve change and the consequences of their interventions. The never-ending exploration between funder and grantee of the best indicators of impact has proved to be as great a source of learning as the outcome measures themselves.
Finally, GFC’s methodology shows the heuristic power of evaluation. In order to measure the effectiveness of its grantees’ organizational performance, GFC had to define an Organizational Capacity Index (OCI), a set of clear and specific standards around each aspect of organizational development. In so doing, GFC set a series of implicit goals for its grantees that these early-stage organizations may not have had the time or experience to work out for themselves. GFC did not require its grantees to meet the conditions of the OCI, but by naming the evaluation indicators it intended to track, it helped its grassroots grantees to develop a roadmap to organizational effectiveness.
The field of philanthropy tends to lionize the largest foundations, in part because there are so few ways to measure funder effectiveness beyond the amount of money contributed. (In recent years, a number of organizations such as Grantmakers for Effective Organizations, FSG, Bridgespan, and the Center for Effective Philanthropy have begun to fill this void.) Our experience at FSG, however, has shown no sure correlation between the size of the foundation and the extent of its social impact. Many smaller funders have found thoughtful and innovative ways to achieve – and to evaluate – their impact. GFC’s approach offers one example where Goliath may have much to learn from David.
Mark Kramer is Managing Director of FSG Social Impact Advisors and Hallie Preskill is Executive Director of its Strategic Learning and Evaluation Center. Emails Mark.Kramer@fsg-impact.org and Hallie.Preskill@fsg-impact.org