Impact evaluation is now required for funders and funded alike. So how can woolly ideas like ‘social justice’ stand up to this new imperative? And how can social justice funders gear themselves up to measure their impact? This article attempts to set out how to measure social justice based on evaluations currently under way.
Over the past ten years, the climate around evaluation has changed markedly. Now almost all funders want to measure what they achieve. The British government recently announced that funding would be restricted to organizations that can demonstrate impact. This sent shockwaves through charities whose leaders argue that such an approach fails to capture the complex development needs of communities struggling against poverty in the long term. However, this argument is unlikely to cut much ice with New Philanthropy Capital, which has recently called for charities to be ranked according to the amount of public benefit they produce.
In our studies, we have found that foundations and their grantees commonly do good work but are poor at analysing and communicating impact. It is less that the work is too complicated to measure, and more that charities lack the necessary skills and systems.
For organizations wishing to remedy this, the first step is to move beyond faith that their approach works towards testing that it actually does.
There are four reasons for adopting such an evidence-based approach: to assess whether the intervention method works, to learn from the work, to communicate results, and to be accountable for the use of money. Of these, the first – assessing what works – is far and away the most important. The others – learning, communicating and accounting – all depend on it.
There is far too little evidence-based practice, leaving the field vulnerable to fads and fashions. For example, ten years ago philanthropy was obsessed with ‘civil society’. At the time, Caroline Hartnell and I suggested that a way of defining, developing and measuring civil society was necessary if the idea was to take root in development. This didn’t happen and, as we might have predicted, civil society is now a low priority for most foundations.
Social justice is too important an idea to suffer this fate. A recent OECD report has shown shocking growth in inequality across the world. In their bestselling book The Spirit Level, Wilkinson and Pickett demonstrate that unequal societies are ‘dysfunctional’.
Foundations have been slow to engage with social justice. A 2005 report from the Foundation Center on social justice grantmaking found that one of the reasons for holding back is that funders feel that ‘social justice’ is a vague term and it is difficult to evaluate outcomes.
In order to strengthen practice in the field and to encourage more philanthropic support for social justice, the Working Group on Philanthropy for Social Justice and Peace set itself the task of defining the term and finding ways of measuring it. The Group quickly realized that it is open to numerous, even widely divergent interpretations. For example, while a former leader of the Conservative Party in the UK set up the Centre for Social Justice, community foundations in Central and Eastern Europe associate the term with communism.
So what is social justice philanthropy? After a year’s work, the Group had failed to agree on a single form of words that captured its essence. A breakthrough occurred with the insight that social justice philanthropy is not one kind of practice but a number of practices connected by certain ‘family resemblances’. Albert Ruesga and Deborah Puntenney found eight commonly practised traditions of social justice (see Albert Ruesga’s article on p28).
To turn the eight traditions into something that could be measured, Suzanne Siskel and I applied the framework to grants in the portfolio of the Social Justice Philanthropy Unit at the Ford Foundation. Seeing overlap between some of the traditions, we reduced the number from eight to six. ‘Structural justice’ and ‘equal distribution of resources’ were combined, as were ‘shared values’ and ‘cultural relativism’.
We then developed indicators for each one:
- Structures exist that ensure equal distribution of outcomes of public and private goods.
- All people have security within a framework of rights.
- Marginal groups are protected through the rule of law.
- Individuals and groups are able to have a say on issues that affect them.
- All cultures recognize that their norms should not dominate others.
- The market operates in ways that benefit all.
I tested the relevance of these criteria by asking grantees in the Ford Foundation’s US-based Community Philanthropy and Civic Culture Portfolio to rate on a five-point scale how important each of the items was for their work. While this was a post-facto effort to characterize their work, we found that these indicators were highly relevant. The highest priority for the group was item 4.
We then tested for the extent of ‘family resemblance’ between the six items using a statistical reliability analysis. This found a strong family resemblance between all six items.
Deeper statistical analysis showed that there are two underlying dimensions, economic and social, to social justice. Items 1 and 6 in the above list were clustered together, while items 2, 3, 4 and 5 formed a second cluster. Items 1 and 6 can be thought of as primarily ‘economic’, while items 2, 3, 4 and 5 can be thought of as primarily ‘social’. The economic component consists of the market producing fair outcomes for everyone; the social component consists of rights, security, protection and participation for all.
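For readers curious about the mechanics, the ‘family resemblance’ test described above can be sketched in a few lines of Python. The sketch below computes Cronbach’s alpha, the standard measure of internal consistency among rated items; the ratings shown are invented for illustration and are not the Ford Foundation data.

```python
from statistics import variance

def cronbach_alpha(ratings):
    """Cronbach's alpha: internal consistency ('family resemblance')
    of a set of items. ratings: list of respondent rows, each a list
    of item scores."""
    k = len(ratings[0])                       # number of items
    columns = list(zip(*ratings))             # column-wise (per-item) view
    item_vars = sum(variance(col) for col in columns)
    total_var = variance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: five grantees rating the six indicators on a 1-5 scale.
ratings = [
    [5, 4, 5, 4, 5, 4],
    [4, 4, 4, 3, 4, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 3, 3, 2, 3],
    [5, 5, 4, 5, 5, 5],
]
print(round(cronbach_alpha(ratings), 2))  # a value near 1 = strong resemblance
```

A conventional rule of thumb treats alpha above 0.7 as acceptable consistency; the two-cluster (‘economic’ and ‘social’) result in the text would come from a further step, such as factor analysis, which is beyond this sketch.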
We repeated this analysis on a cohort of grants from the Global Fund for Community Foundations, which similarly funds community-based organizations, though all of its grants were outside the US. We found similar results, which increases our confidence in the relevance of our social justice indicators.
These results suggest that social justice is a family of related concepts. So, what use is this for a practically minded funder who wants to support social justice and measure the impact of their support? This takes us from the ‘what’ of evaluation to the ‘how’.
Our research on funders and impact evaluation suggests there are two common mistakes that foundations make. The first occurs when a foundation funds a programme for some years, decides to conduct an evaluation to find out whether it worked or not, and hires an evaluator to look back at it. The problem with this approach is that there has rarely been any monitoring, so the evaluator spends most of the time trying to figure out what’s happened. The evaluator’s conclusions about impact are coloured by people’s memories, which are usually incomplete and distorted.
The second common mistake is that the foundation hires an outside evaluator to work alongside the programme at an early stage so that the data is collected prospectively. While assessment of impact is more likely with this approach, a foundation may find that the results do not necessarily suit its view of itself. In this case, the foundation either fires the evaluator or disregards the conclusions.
The result of these mistakes is that negative evaluations are buried and positive ones lack good data. These are lost opportunities for learning about what works in philanthropy.
The new evaluation industry
However, with the new climate of impact measurement, we may see a more rigorous approach. Certainly, the new impact imperative has produced a massive evaluation industry. The Foundation Center has recently launched ‘Tools and Resources for Assessing Social Impact’ (TRASI). While its more than 150 tools and techniques provide a formidable array of choices, the array can be bewildering without guidance on which evaluative processes and tools are appropriate and effective for a foundation’s programmes.
To illustrate this dilemma, Lisa Jordan, director of the Bernard van Leer Foundation, and I wrote a spoof called ‘Kirsty and the Evaluators’ about a programme officer who finds herself between a demanding boss, a complex set of grants, and incomprehensible evaluators. One by one, she encounters psychological, organizational and technical barriers to setting up an evaluation system. After many false starts, she concludes that only she can develop the method to evaluate her portfolio. Since she conceived it, she must own it. She finds the conclusion liberating.
However, the real challenge is just beginning because she now has to turn to the technical issues of setting up an evaluation system. The play doesn’t deal with this, but all that is required is common sense, logical thought, and a commitment to keeping the system and the language as simple as possible. The bulk of the work should be done internally, and evaluators should be employed only where the organization requires a process to be facilitated externally, needs specialist skills (such as analysis), or requires the kind of critical examination that insiders cannot do.
Seven steps towards an evaluation system
I suggest seven steps. Under each, we need to answer a question.
Step 1 What are the quality of life conditions we want to see?
The first step is to identify the change that we want to see. To take some examples, ‘a living wage for all’ would be a desired state under indicator number 6: ‘the market operates in ways that benefit all’. ‘Women play a full part in political life’ would be a desired state for indicator number 4: ‘Individuals and groups are able to have a say on issues that affect them’.
Step 2 How would these conditions look if we could see or experience them?
It is important to write a narrative – a paragraph or two – about what would constitute success. I don’t mean abstract goals or mission statements that contain lofty ideas such as ‘the dignity of labour’ or ‘women’s empowerment’, but real-world conditions about what a living wage or women playing a full part in political life would look like in practice if you could see or experience them.
Step 3 How can we measure these conditions?
Success needs to be framed in specific terms that can be measured. In our examples, we may be interested in the proportion of employers paying a living wage or the proportion of women members of parliament. This information may be available in some cases, but in others, foundations may need to commission studies to find it.
Step 4 How are we doing on the most important measures?
It is vital to be able to track progress. This is best done visually – for instance, through a line chart in a spreadsheet or on a wall – so that you can see changes at a glance. Some trajectories may involve 20-year timeframes, since much social injustice will take years to correct. In such cases, it is important to set intermediate milestones.
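The milestone idea above can be made concrete with a minimal sketch. All figures and dates here are hypothetical, standing in for something like the proportion of employers paying a living wage over a long horizon.

```python
# Hypothetical intermediate milestones: target proportion of employers
# paying a living wage by each year.
milestones = {2015: 0.20, 2020: 0.35, 2025: 0.50}

# Hypothetical observed values from commissioned studies so far.
observed = {2015: 0.22, 2020: 0.31}

def progress_report(milestones, observed):
    """Compare observed values against each milestone target."""
    report = {}
    for year, target in milestones.items():
        if year not in observed:
            report[year] = "pending"
        elif observed[year] >= target:
            report[year] = "on track"
        else:
            report[year] = "behind"
    return report

print(progress_report(milestones, observed))
# {2015: 'on track', 2020: 'behind', 2025: 'pending'}
```

In practice the same data would feed the line chart the text recommends; the point of the sketch is simply that milestones turn a 20-year aspiration into a series of checkable comparisons.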
Step 5 What interventions are successful?
It is important to be clear about ‘what works’ to produce the desired results. Research may yield hypotheses about interventions that can be tested further in the foundation’s work, and these should be publicized for the benefit of others with comparable concerns or goals.
Step 6 Who are the partners that have a role to play in doing better?
Foundations can achieve social justice outcomes only with others. Their grantees are key partners and other organizations play a role. The success of the Living Wage Campaign in London, for example, has been the result of a university providing a statistical analysis of what constitutes a living wage; a coalition of community groups, churches and mosques providing popular support; media organizations running stories in the press; foundations writing cheques; and businesses coming to see that a living wage has benefits for their profits. Each organization provides something different. If desired, individual contributions can be assessed through the newly emerging science of social network analysis, which maps the relationships within a partnership and estimates each actor’s contribution to outcomes.
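A minimal sketch of the social-network-analysis idea follows. The partnership map is invented, loosely modelled on the Living Wage example above, and degree centrality (the share of other actors each actor is directly tied to) is only the simplest possible measure; real analyses use far richer ones.

```python
from collections import defaultdict

# Hypothetical ties between partners in a living-wage-style campaign.
ties = [
    ("university", "coalition"),
    ("coalition", "media"),
    ("coalition", "foundations"),
    ("foundations", "businesses"),
    ("coalition", "businesses"),
]

def degree_centrality(edges):
    """For each actor, the share of other actors it is directly tied to --
    the simplest network measure of an actor's position in a partnership."""
    neighbours = defaultdict(set)
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(neighbours)
    return {actor: len(adj) / (n - 1) for actor, adj in neighbours.items()}

print(degree_centrality(ties))
```

In this made-up map the coalition is tied to every other actor (centrality 1.0), which matches the intuition in the text that popular coalitions often sit at the hub of such campaigns.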
Step 7 What do we propose to do?
Impact evaluation needs to be part of a learning cycle so that results from the evaluation are applied in subsequent actions by the foundation.
Impact evaluation in practice
The Sabanci Foundation in Turkey is using the above approach. As a result, it is well ahead of most foundations in evaluation practice. In assessing the value of its programmes for youth, women and disabled people, the foundation has specified clear outcomes for each group, clarified the roles of the foundation and its grantees, developed specific methodologies for delivering programmes, and been highly successful in obtaining press coverage of the results of its work. A notable feature of the organization of the evaluation is that administrative data doubles as evaluation data and is integrated in an online database that handles all aspects of application and reporting. This means that evaluation is integrated with day-to-day work rather than being a bolt-on extra. This is the best possible design.
This article has shown that devising indicators for social justice can be done so long as the idea is seen as a family of connected but different concepts. There are no special skills necessary to set up an evaluation system. What is needed is a combination of confidence, common sense and orderly thinking. The main rule, to paraphrase Einstein, is to keep it as simple as possible but no simpler.
1 Barry Knight and Caroline Hartnell (2000) ‘Civil Society: Is it anything more than a metaphor for hope for a better world?’ Alliance, September.
2 OECD (2008) Growing Unequal? Income distribution and poverty in OECD countries. Available from http://www.oecd.org/document/4/0,3343,en_2649_33933_41460917_1_1_1_1,00.html
3 Richard Wilkinson and Kate Pickett (2009) The Spirit Level: Why equality is better for everyone, Penguin Books.
4 Adapted from the work of Mark Friedman (2005) Trying Hard is Not Good Enough, Fiscal Policy Institute, Trafford Publishing.
Barry Knight is secretary of CENTRIS. Email firstname.lastname@example.org
For more information
For a selection of the resources and tools mentioned in this article, go to