Measurement – of everything from the size of a particle to the health of economies – has become such an integral part of our approach to the world that we no longer question its value. We assume that it is a good thing, something that allows us to show in statistical form the changes taking place in more or less complex phenomena. This assumption has naturally entered the world of social change.
Not only is it assumed that the processes, outcomes and impacts of social change should be assessed; it is also assumed that they can be assessed – in other words, that the instruments of measurement at our disposal are adequate and effective. More problematically, it is assumed that measurement enhances our ability to make or accelerate positive change.
In a future issue of Alliance, I will discuss some alternative approaches to measurement. In this article, I want to examine what I believe is wrong with our assumptions. There is no question that quantitative approaches have increased our ability to assess the effectiveness of social change processes. But the push towards greater and greater quantification – particularly from donors – and the amount of activists’ time and energy it consumes, compels us to examine our assumptions in order to determine when measurement may be meaningless or even detrimental to our understanding of how change happens. I am going to illustrate my argument with examples from the arena I know best: rural and urban development projects, and particularly women’s empowerment projects, in India.
Reasons for measuring: contrasting principle and practice
Why do we believe measuring change is a worthwhile activity? In theory, at least, we measure for the following reasons: to determine if we have done what we set out to do; to learn what interventions worked and what didn’t; to intensify effective change processes or replicate them in other communities; to build new theories of change; and to ensure the accountability of social change agents receiving public resources for their work.
In reality, however, the evaluation of change processes is more likely to be done for quite other reasons: because donors need to ensure their funds have been used correctly and to demonstrate to their own constituencies (boards, contributors, governments, etc) that they are supporting effective work; and because NGOs need to sustain or obtain more funding. Donors like to bet on safe horses, which means organizations with a proven track record of work (that is, measured results!).
It is these sorts of pressures that convert measurement from an activity designed to aid learning into one that evaluates performance, and so distorts the process.
I worked in a community health project in a rural area of western India in the 1970s. One of our goals was to eradicate diarrhoeal deaths of young children and thus bring down the level of child mortality. Village health workers were required to report the number of diarrhoea cases treated at a monthly meeting. Within a year, at one of these meetings, our director berated several health workers who had no diarrhoea cases to report from their villages. This was not taken as a sign that our strategy was working but as a sign of the workers’ poor performance in failing to report cases.
Problems of interpretation
Such false conclusions from social change indicators continue to occur in more subtle forms. For instance, the impact of programmes attempting to stop violence against women is often measured through the number of cases of violence reported to or registered with the police. In countries like India, these figures have grown rapidly in the past decade. But it is extremely difficult to interpret them. Are they the result of greater awareness and openness about domestic violence, so that more women are reporting violence, or is there an actual increase in violence?
This brings us to another of the core dilemmas in our approach to social change measurement – attribution. How do we take account of the many forces that are at work – reforms in legislation and police procedures, greater media attention and exposure of violence, higher levels of education among women which make them refuse to tolerate violence, softening of family taboos, as well as the interventions of the women’s organization whose work is being measured? The social and political environments in which social change work is occurring are becoming increasingly complex, making it harder to determine exactly how positive change occurred. But rather than forcing a rethink of our approach to measurement, this difficulty has merely resulted in the creation of even more complicated measurement systems.
Most change measurement systems currently in vogue have a serious bias because they originated in western liberal democracies where the social and political environment is predictable and stable (though this may be much less true today – the Paris riots, for instance, took everyone by surprise). If you were designing an impact measurement system for a new kind of civics curriculum in high schools in Western Europe or North America, you wouldn’t have to worry that schools might be shut down in the middle of the year owing to a civil war or that the curriculum might be swept aside after a takeover of power by the military or religious fundamentalists. Most of our measurement systems assume a stable social and political context in which outside forces will not upset our change strategy. In fact, they assume that social change is predictable, whereas every activist knows that the opposite is true – change is highly volatile, unpredictable and non-linear.
The increasingly complex nature of change
These may seem somewhat extreme scenarios, but I am exaggerating to make a point: even countries that have achieved relatively high levels of political stability and continuity (South Africa, Mexico, Turkey or India, for example) must contend with an economic, social and political environment where change is occurring at a bewildering rate because of forces outside the control of social change actors. In India, for instance, many women’s empowerment interventions aimed at keeping girl children in school and encouraging young women to take employment and delay marriage have simply been swept aside in peri-urban areas, where such changes have occurred far more rapidly thanks to the unprecedented growth of the export economy (garments, watches, other electronics), which typically prefers young women workers. At the same time, thousands of young women have been mobilized into very militant groups (‘Durga Vahinis’) by fundamentalist organizations. Rather worryingly, if you were to apply the indicators commonly used to measure empowerment (work participation, income earned and political participation), all these groups would look like examples of positive change.
As social problems have become far more complex, and far more connected to forces outside the immediate social or cultural locale, change interventions must follow suit. There are no simple causal links. In intricate and often invisible ways, global forces are catalysing local change without any conscious intervention at the local level. Global agreements about intellectual property rights allow multinational drug companies to maintain very high prices for AIDS drugs. Poor countries cannot afford them, so communities have to let AIDS patients die. How shall we begin to measure the effectiveness of an NGO working in such a situation? Should we measure the extent of their involvement in international mobilizations against multinational drug company policies, or simply the number of AIDS orphans they are able to look after?
Methodologies of change assessment
The methodologies of change assessment present another set of problems. Target groups or service users or communities are rarely involved in setting goals or choosing indicators. Indeed, their involvement is actively discouraged by many donors as compromising the ‘objectivity’ of the assessment. Yet communities often offer the most sensitive indicators of their own change, and can be far more critical and objective about the distance they have travelled than outside evaluators, who can sometimes completely fail to see the significance of the shift that has occurred. I was present when members of a collective of very poor and oppressed rural women in South India told a group of ‘objective’ outside evaluators that one of their indicators of success was the failure of the upper castes in the village to break their solidarity as a group, despite repeated attempts to do so through bribes and threats. The evaluators had no way of quantifying this evidence, and were clearly uncomfortable with it. So they ignored it and kept asking the women how many cases of wife beating or dowry harassment they had taken up as a group. Since the answer was none, the group was considered to have failed.
Even drawing on the insights and learning of social change activists when building our measurement tools is rare, which makes the efforts of initiatives such as Keystone particularly valuable, since they seek to create assessment tools from precisely such insights and experiences.
Disturbing trends in donor support
Most worrying, though, is the trend of donors moving support to approaches that generate convincing-looking and simpler-to-understand data, even when it is telling us only a very small part of the story, or the wrong story altogether. The move from more complex, long-term empowerment and popular education programmes with women in poor communities towards so-called ‘rights-based’ approaches is an illustration of this. This is not a criticism of rights-based approaches, rightly understood – empowerment is a rights-based approach. My point is that ‘rights-based’ has become code for more easily countable change, in the form of cases of rights violations registered, convictions obtained, legislative reform, etc. Measuring changing levels of empowerment and awareness is a far more challenging – and challengeable – exercise.
Finally, because we have moved from a focus on learning about change to an obsession with measuring it, measurement rarely leads to genuine critical reflection on our theories of change, many of which remain untouched by the vast numbers of assessments and evaluations conducted around the world. Take the case of microenterprise programmes for poor women in developing countries. They can be of great value but they are NOT a magic bullet leading surely to women’s overall empowerment. There is increasing evidence suggesting that poor women – and their daughters – may be facing new and oppressive burdens of debt repayment, increased male resentment and violence, loss of schooling opportunities, and overwork. We don’t hear about this because what is actually counted is loan disbursal, repayment rates and income rises. So the mantra that women-focused microenterprise projects raise their status and reduce poverty remains firmly entrenched in development theory and practice.
There are, of course, many examples of assessment data changing aid policies – such as those that led to the changing of World Bank funding policies for large-scale projects like dams, and that embedded elements such as social and economic rehabilitation for ‘project affected persons’ in institutional policy. Change measurement can be a powerful tool for strengthening our effectiveness when it prioritizes learning, confronts the growing complexity of change forces, and reconstructs our theories of change and development. It is time to examine the myths and realities of social change measurement and to question why we measure. Only then can we begin to make measurement a meaningful process.
Srilatha Batliwala is an India-based Civil Society Research Fellow, Hauser Center for Nonprofit Organizations, Harvard University. She can be contacted at Srilatha_Batliwala@harvard.edu