The need for greater effectiveness in the non-profit sector, and for organizations to measure it, is receiving growing attention. While no one seriously disputes that organizations should assess their work, the debate has tended to be drawn into a discussion about whether the values of civil society (the premium it puts on democracy, solidarity, social justice, dialogue, relationships, values) are in danger of being overrun by those of social capital markets (business principles, metrics, evidence, return on investment, numbers).
Discussion has thus become polarized between proponents of either civil society or markets.
The aim of this online forum is to discuss whether such a polarization is misplaced, whether the two paradigms can be reconciled, and whether the real question should rather be how assessment of an organization’s work, in whatever form it takes, can be done in such a way that it takes full account of the organization’s purpose.
The two opening articles explore the reconciliation of these apparently conflicting perspectives through two organizations working in the area. Tris Lumley, Head of Strategy at New Philanthropy Capital, describes how NPC is trying to advance the practices of impact measurement, analysis and evidence-based decision-making among non-profits and their funders. David Bonbright, founder and Chief Executive of Keystone, talks about how they are helping non-profits and foundations to measure and act on constituency voice.
New Philanthropy Capital: putting effectiveness at the heart of non-profits and philanthropy
New Philanthropy Capital (NPC) started when a group of people working in the financial sector looked for independent research, analysis and advice to help them give effectively, and found a gap. NPC thus started off providing investment research, advice, due diligence, brokerage – all pretty familiar concepts from the financial markets.
We found, over the last nine years, two major challenges to this model.
When I joined NPC, my research focused on local community organizations working in deprived areas – charities like Family Action in Rogerfield and Easterhouse (FARE) in Glasgow. In an area where territorialism and gangs are rife, they provide whatever support people and the community need, acting as classroom and mentor one day, friend and family the next, advocate and social worker the next.
The first challenge to NPC’s approach is that charities don’t measure much about their outcomes. Not necessarily because they don’t want to, but often because it’s hard, and they don’t have the resources to do it. For FARE, you can measure the impact of the mini Olympics you organize each year on young people who might otherwise be joining gangs. But how do you measure the value of friendship, belonging, and just being there for someone?
The second challenge is that, while people are talking about effective philanthropy and impact a great deal more than they were nine years ago, evidence-based philanthropy is actually very thin on the ground. Money isn’t yet flowing to charities working hardest to demonstrate their impact. Those without any evidence for their work haven’t yet found their fundraising has stopped working. Effectiveness isn’t at the heart of giving decisions. On the whole people give because of a personal connection to a cause or for reasons related to their values or because they’re asked by someone they know or trust.
So far, so good for the proponents of civil society, right? The non-profit sector, and philanthropy, seem to be driven by values, not by evidence.
Well, no. Because it turns out that great charities generally do want to measure their impact and donors generally do want to ensure that what they give makes a real difference. Because that impact is what they’re passionate about. They want to know that their values are reflected in real changes in the world, in people’s lives, in the policies and attitudes that shape society. But there are barriers preventing this, which NPC and others are now working to address.
First, we’re helping charities to measure and analyse the impact of their work, and their own effectiveness. Helping charities like FARE to measure the value of friendship and support. We’ve developed a tool for charities to measure the well-being of children they work with (www.philanthropycapital.org/publications/improving_the_sector/well-being/default.aspx). When you’re trying to make children’s lives happier, you can measure that by asking them how happy they are. And then see changes in that data over time as you work with them. And that data helps charities to refine their work, and to inspire others to fund it.
Second, we’re helping donors combine their values and passions with data and evidence to make more effective giving decisions. For instance, we have helped one UK foundation map the needs of young people across the UK and effective approaches to working with them, which has resulted in a new strategic programme, focused on urgent needs and high impact interventions. We now also provide training to philanthropy advisers.
Effectiveness certainly matters when you ask people about their giving—a recent report shows that the top two things donors think charities should do in tough times are to demonstrate that they’re spending money efficiently and to demonstrate their impact. But if charities aren’t telling them about their impact, either because they haven’t determined it yet, or because they don’t think donors want to know, then philanthropy can’t move forward.
Keystone Accountability: planning, managing and communicating impact through constituency voice
Unlike NPC, Keystone’s origins are exclusively in civil society. Collectively, the founders of Keystone in 2004 had over a hundred years of experience in non-profits and foundations, none in finance and business. We knew from experience that non-profits had poorly articulated goals and strategies and actually knew very little about their results and effectiveness, but having lived through the South African liberation movement, we also knew that social movements, though messy, work. So we sought to find how to make them work better.
At the same time we were, like others who have been critical of market-based approaches, worried about the imposition of metrics and mindsets by the rich and powerful onto the civil society sector. We believed, then and now, that who does the counting is more important in social change than what is counted. So we set out to design an approach to results measurement that was grounded in the people who were meant to benefit from the work, and came up with the idea of constituency voice, where an organization plans, assesses, learns and reports with its primary constituents. Who those constituents are will vary according to what the organization does, but it will always include the people who are meant to benefit from its work. Constituency feedback yields data about results, about performance and about relationships. Feedback can be combined with evidence from other sources to round out the picture of impact and performance.
We have worked with scores of organizations large and small over the past six years to create effective measures of constituency voice using surveys, interviews, focus groups and observational techniques. Little by little we are seeing an accumulation of good examples of constituency voice measurement practice. A group of East African foundations, for example, have used a common survey instrument to collect feedback on their performance from the 305 small community-based organizations that they fund (www.keystoneaccountability.org/resources/reports). Maybe the most complete example of constituency voice metrics is a joint project of the Gates Foundation and the Center for Effective Philanthropy, YouthTruth, which involves listening to the students in American high schools receiving Gates Foundation support.
In many ways, constituency voice in social change is analogous to customer satisfaction in consumer markets. The customer satisfaction industry grew out of a social movement, the consumer rights movement. That movement led to the consumer rights legislation promoted by the Kennedy administration and passed in 1962. Without it, there would be no customer satisfaction industry today, or at least not the one we know.
This raises a question: does philanthropy require a social movement to move forward? If so, where will it come from? Or, to put the question as we see it: how can philanthropy move forward if charities and donors are not systematically measuring and reporting what those meant to benefit think about their work?
So, beginning from very different standpoints and with very different approaches, NPC and Keystone both work to explore and improve how values and evidence interact to drive non-profit and philanthropic activity. But it is critical to recognize that we mean much more by data than simple numbers, ratios or reductive proxies. We need only look at how focusing analysis of charities on cost ratios has twisted the non-profit sector to see the danger there. We are looking for more evidence about impact not because we want to build a technocratic world of mechanistic charities and giving, but because we want non-profits to be able to do full justice to their values and aims and to ensure that the people charities aim to help are, in both senses of the term, the ones who count.
→ I think this polarisation is absolutely justified and essential. We have spent the last 300 years trying to justify everything in terms of market economics, never questioning what those market economics were for in the first place.
This, I would suggest, is a false economy. ;)
We need to rethink not the purpose of civil society but the purpose of market economics, which I would suggest is a means towards an end – that end being “civil society” in its values rather than formal sense.
Thanks for the opportunity to comment – these are my personal thoughts and reflections.
→ Ruchir, thank you for your thoughts, and for being the first person to post a comment!
Do you think social capital markets are really about market economics, or the capitalist system, in the sense in which you say we have spent the last 300 years trying to justify everything in terms of market economics?
I wonder if the very term ‘social capital markets’ is contributing to a false polarization, suggesting a much closer analogy with the capitalist economic system than is really there? As we are using the term, it really means no more than working out some way of comparing the work done by non-profits so that donors can better direct their funds – and the two examples that started off this discussion are both trying to ensure that values are fully taken into account in making comparisons.
→ I notice that the two opening articles have come to one mutually shared point: constituencies/beneficiaries should have a say in defining what action is needed and in measuring the effectiveness of that action. Funders could then be more easily convinced to fund, and feel more comfortable with outcomes. However, this can be a costly element, which needs a well-defined source of financing. Could it be financed from the internal resources of the organization, or be part of the project cost? On the other hand, the introduction of such measurement tools serves both market development and the development of civil society values, because: a) it creates demand and a market for independent assessment services; b) once involved in the project development and implementation process, constituencies acquire the potential to move from awareness of their needs to social movement action.
→ I don’t think we need a market for independent assessment – we already have that – lots of overpaid consultants ‘evaluating’ programmes or projects in often rushed, once-off retrospective evaluations that contribute little sustained actionable learning.
There will always be a place for this. But what we really need are better ways for organizations to track and monitor and report their success or lack of success in an ongoing way together with their key constituents – especially beneficiaries but also funders, partners etc. To turn social change into a genuinely collaborative enterprise in which all constituents understand how they are contributing and how they can contribute better to achieving the outcomes they share.
New methods and tools are part of this. And Keystone, NPC and many others have very exciting and practical innovations to share. But it needs investment of time and resources from all constituents.
The exciting thing is that when organizations begin to behave in this way they find that the benefits (or returns) hugely outweigh the cost. New strategic insights emerge from asking new questions and most of all from the dialogue among constituents about what success looks like and how it can be measured, understood and reported. Relationships improve as mutual confidence and trust grow.
The question is how can we create new incentives to kick start this kind of market – to the point where the intrinsic value it generates becomes the incentive that sustains it?
This is the market we need to nurture.
→ This is a very serious discussion, especially for transition societies. As everywhere in the field of public goods, it is very difficult to invent (though I personally prefer the word “discover”) criteria for measuring something “unmeasured” and “non-material”. In fact, measuring happiness by asking the beneficiaries of charities does not give an objective picture. In the Soviet Union people were happy, just ask! In Ukraine today the majority of people say that they are poor, though objective criteria of poverty do not testify to this. This is just the impact of many factors, with communication technologies leading the way.
I think that both measurements – values and market indicators – should be used integrally. As we all say in fundraising: the way to a donor’s pocket goes through his or her heart and mind, so we need to provide assessment data for both the heart and the mind.
This dichotomy is not vicious, it’s virtuous.
→ Hello all – a good discussion, and I’m particularly happy to see voices from different parts of the world, because this is an issue that’s central to the development of philanthropy globally.
I agree with many of the points made: evaluation should be formative and participatory, involving constituents and helping improve program operations as they’re going on, not just filling out a scorecard after the fact. But I’d like to get back to the question as the good folks at Alliance have framed it: “civil society vs. markets – a false dichotomy?”
Lots to think about here. Dichotomy or trichotomy? The term “third sector” reminds us that we’re talking about the government or public sector, the private sector, and the social-impact/nonprofit sector. Interestingly, the history of the term “civil society” is that it used to cover *both* the realm of market-based interaction and the realm of citizen collective action. “The state and civil society” was the relevant dichotomy: public and private, you could say. But it became clear that the realm of private, non-state action contains two types of action, governed by very different logics. The classical logic of market interaction is that collective outcomes are generated by the individual pursuit of individual self-interest – the “invisible hand.” To pursue collective outcomes intentionally is to subvert the logic of the market. But the logic of independent citizen action, the other component of civil society, is all about pursuing collective outcomes intentionally. (Another way of saying this is that the freedom of markets and the freedom of democracies are two different things, as much as they get conflated.)
So there’s an inherent tension at the heart of civil society. And I think we see that in this discussion. Tris makes the point, “Those without any evidence for their work haven’t yet found their fundraising has stopped working.” There’s a market failure that should be corrected by the application of market logic. Individuals pursuing their individual motivations for giving – whether based on effectiveness or values – determine whether an organization’s fundraising succeeds or not. But there’s a different logic, that of intentional, collective citizen action, that’s at the heart of most actors in this space – NGO staff and donors alike. And that intentional action is based on values about collective self-interest, the common good, that are constantly re-articulated and re-negotiated. That’s the space that most of us think about as civil society, and it’s important to remember that it does have an inherently different logic than that of markets. So while markets can bring needed discipline, there will always, in my view, be a tension between market-based approaches and approaches based on intentional, collective citizen action.
What this means for philanthropy, I’m not sure, but it seems like something worth continuing to explore.
→ I have no doubt about how important assessment is to the third sector. In a progressive and fast-moving market world, if we don’t prove that we are effective we simply don’t exist.
It’s really important to build a whole body of numbers and situations (cases) that will allow us to realise what is fair when we’re talking about philanthropy.
For instance, what is a fair salary in third sector?
I know young people who are really interested in the third sector, not out of altruism, but because the salaries can be higher than the wider market would pay.
From a holistic perspective, the encounter of the third sector with the capitalist world can only be studied, clarified and made understandable through the continuous use of audit techniques.
In the end, sooner or later, we will need to answer the question:
Is the third sector a real tool to change the world? Or are we just mitigating capitalism’s bad effects?
→ I am glad that events in this life have put the brakes on the momentum that was carrying us to think that everything related to the private sector, the market economy and the “rigor” of financial markets will make our sector more productive, efficient and effective. Though we would like to be productive, efficient and effective, relying on metrics alone will not be the magic recipe. We should find a balance between relying on metrics and the human element of assessing a situation (otherwise known as gut feeling). Finding the right balance will improve the way we do our work and show impact results. Our sector will contribute a technique for showing results without blindly relying on metrics alone, reintroducing the human element (and avoiding the black swan effect!).
→ Based on my experience developing the third sector in a region that has suffered greatly from dictatorships and military regimes, I have a special respect for civil society organizations, and their leaders, in combining value-based decisions with a clear vision of their role in bringing long-standing democracy and citizen participation. And, if we measure how rare coups d’état have become in the region, it is clear that civil society is making a difference.
But, on the other hand, I see a serious question in how those organizations can survive while seeking the necessary resources. And here I see the role of philanthropy. Philanthropy is a side effect of capitalism, and capitalism is based on metrics to survive. Efficiency and effectiveness must be measured for capital to make sense. Thus, if civil society organizations want to survive on philanthropic resources, economic indicators will prevail, independent of the other values that are in place.
→ Thank you Alliance friends for this unique opportunity to discuss crucial issues for civil society organizations. I see that there are already interesting thoughts posted here.
No one seriously disputes that social organizations should assess their work, that is true. Organizations, governments, donors, all will agree on its importance.
The question is who drives the process, and evaluation is too often a donor-driven process. Some donor organizations are unfortunately too enchanted with certain kinds of metrics, which are frequently not relevant to the mission of the organization, to learning from the opinions of beneficiaries, or to measuring the real impact of the organization’s work. How useful really is the logframe, or the other tools imposed on organizations, when organizations become trapped in those boxes and forget the real soul of development and evaluation?
I do not see a contradiction with markets, BUT educating donors seems to be one of the most important steps in this process.
→ Thanks for the great comments everyone – I’ll pick up on a couple of threads running through them.
Svitlana – I agree that measurement needs to capture both subjective and objective dimensions, for example by combining subjective well-being measures with objective indicators. We need this duality if we are to accurately reflect impact from a macro perspective and from the viewpoint of beneficiaries. And you’re absolutely right that we need this to speak to both the rational and emotional sides of the donor too.
But I don’t think that has to lead us to dichotomy, if we manage to construct measures that incorporate both dimensions. NPC’s working on a well-being tool that captures subjective well-being and can also be incorporated into national objective well-being indices. Keystone’s developed a methodology for capturing constituency voice in a way that’s robust and comparable between organisations. I think you can speak to the head and heart in a single integrated approach.
Anabel, Chris, Andre – through your comments runs a strand that I hope will be transformative in how civil society organisations are managed and funded. If we can support social impact organisations in their efforts to build their own frameworks and systems for measuring, analysing, managing and communicating their impact, we can harness the power of measurement to strengthen what they do. We can move away from a funder-driven paradigm of evaluation that is imposed from without. We can construct the management information a great organisation needs to refine, learn from and publicise its work. We can engage all stakeholders with that data – involving those we aim to help and reporting to those who support the work. We can share lessons so that knowledge about what works becomes a vital currency of civil society.
All of this requires investment – by funders in the fundamental capacity of charities, social enterprises and movements to measure what’s most important and meaningful, by the organisations themselves in grappling with complex and collaborative system measurement. I believe we have to come together as a sector to lobby for this investment, and then use it well, to transform what we can achieve through what we can measure.
→ Fascinating debate.
It seems to me that if there is a polarisation, it is around confidence and propensity for error. Those who favour independent analysis and those who support more self-assessment can use the same tools, but it is problematic when champions of the former regard themselves as more scientific than advocates of the latter.
One has only to look at the errors that have bedevilled investment market analysis to see that it is as prone to subjectivity and bias as any other discipline. It is certainly no gold standard, as writers like Nassim Taleb and John Kay have pointed out. Many highly respected investment analysts wrote at length about what an effective and sound institution Enron was, right up to the point when it fell apart.
Independent analysis has its place. Done well, it’s great. But it’s only affordable to some, and it’s only rarely sustainable. Equipping people with the skills so they can assess and demonstrate the difference their organisation makes can take more time, but it leaves organisations in a position where they are not only accountable, but they can also develop their own practice.
→ I take a quote from the first article: “when you are trying to make children’s lives happier, you can measure that by asking them how happy they are and then see changes in that data over time as you work with them”. At the Aga Khan Foundation we have devised a Quality of Life Index which, as you can imagine, tries to ascertain people’s perceptions of the quality of their life, and then repeats this after some years. Of course it is tricky – the questions, the questioning, and the presentation of results are all complicated – but in essence it is a simple idea, providing you have people of integrity doing the questioning and the reporting.
In the Civil Society Programme (CSP) of the Aga Khan Foundation we have another set of questions embedded in the larger Quality of Life questions. The CSP is interested to know whether it has changed things for the better in respect of the quality of CSOs or the quality of CSOs’ interface with government, business and the public. We found that a way to do this was (a) to be very clear what changes you were actually aiming for, so that you could be clear whether you had achieved them or not, and (b) to have a person of integrity and insight ask the questions of the right people. Visions of objectively verifiable data had to be given up in favour of the simple effort of people of experience and integrity asking clear questions of affected people – backed up, of course, by harder data on money, training and staffing where this was relevant. The two best measures of civil society that I know – the CIVICUS Civil Society Index and the USAID Sustainability Index – are both based on asking clear questions, getting informed people’s perceptions, and presenting them serially over time to see changes.
→ Thanks all for advancing this important discussion. I wanted to pick up on a couple of threads above and also provide a little more information about what we learned from developing YouthTruth (www.youthtruthsurvey.org).
As Andre notes, “what we really need are better ways for organizations to track and monitor and report their success or lack of success in an ongoing way together with their key constituents – especially beneficiaries but also funders, partners etc. To turn social change into a genuinely collaborative enterprise ….” Anabel further notes that “the question is who drives the process” of data collection and monitoring success.
With YouthTruth, we have sought to respond directly to these articulated needs and create a system in which performance measurement efforts simultaneously benefit all who are working to improve educational opportunities: beneficiaries are empowered through the process of data collection, while school leaders, the networks and districts that manage schools, and funders all benefit from the data we collect.
When we began YouthTruth in 2008, we identified what we saw as two distinct approaches to student survey work — shades of the market-based and civil society agendas described above. Some groups were collecting student survey data for external evaluation and accountability purposes while others were collecting student feedback as a means of primarily empowering students to help drive school improvement efforts. What we sought to do with YouthTruth was to bridge the gap and create a tool that rigorously collects student feedback and places it in a comparative context while also empowering students by validating that their voices have been heard.
Students not only hear this message initially through our YouthTruth video, co-produced with MTV, but perhaps more importantly, schools are required to “close the feedback loop” and share data back with students before the end of the school year. One consistent message from students was that the most important indicator that schools were serious about making change was that their voices were being taken seriously and considered in decision-making.
We also make sure that school leaders are a key audience for YouthTruth findings. They are among our core audiences as we seek to deliver value at multiple levels through our comparative data – informing the work and decision-making of school leaders, education intermediaries who manage portfolios of schools, and education funders.
We piloted YouthTruth in 2008 – since then more than 20,000 students representing more than 90 schools have participated. We are optimistic that by creating a shared dataset that informs the work of these multiple audiences, we can begin to shift the dynamic and advance a more collaborative learning model.
→ Thank you for joining the conversation, Valerie. I hope that everyone interested in multi-constituency measurement has a look at the YouthTruth website.
Now that you have told us a bit about how and for whom YouthTruth works, Valerie, can we draw you out a bit more: what is the data saying, what does the comparative dimension add, and how can others interested in promoting these kinds of collaborative learning learn from your dataset?