Measurement vs action

Alliance magazine

When Deepti Goel and Nachiket Mor say that ‘REs [randomized experiments] create two groups that are identical in all respects except for the intervention itself’ (Alliance March 2009, pages 48-49), I suggest that they are assuming the very point at issue. How are you going to find or create two identical groups, and how are you going to know that ‘any differences in group outcomes can therefore be attributed to the intervention’, as the authors go on to assert?

You’re not working in laboratory conditions, and the materials (and someone is bound to find fault with my choice of words here) you’re working with are dynamic, not inert. In short, this is not a pure science, and you don’t have the pure scientist’s luxury of predictability, that is, of being able to predict exact results from like cases.

Granted, in randomized drug trials (a comparison Goel and Mor use) the materials are dynamic too, but they are at least single organisms; they are kept in similar, controlled circumstances during the trial, and there are a large number of them. I frankly don’t see how you can set up a similar experiment to test a social initiative without there being so many cracks in the frame that it’s impossible to get a reliable answer. In other words, you have to admit that what works under one set of circumstances might not work in another; on a scientific model, that’s enough to say it won’t work.

I believe the best you can get from REs is that similar initiatives have a good chance of working under what appear to be similar circumstances – but you can get that without the apparatus (and the cost) of randomized experiments. In neither case will there be a guarantee of success (by implication, Goel and Mor admit this when they say that, as matters stand, REs won’t tell you why this or that approach works).

Andrew Milner
Freelance writer and researcher and Associate Editor of Alliance


How much should charities and funders spend on measuring impact? David Bonbright gets to the heart of the matter when he highlights the goals of evaluation in ‘Proving or improving?’ (Alliance March 2009).

A good system of measurement will inform charities’ and funders’ strategy as well as help to prove and improve impact. If we have existing evidence that our intervention works, we don’t need ongoing full-scale evaluation, but we should check from time to time to learn how we can improve it. If, however, we do not know what works, we have to measure our intervention to find out if it does, or risk doing harm through our ignorance.

Exactly how much we spend on measurement depends on how much we already know about what works. If we know little, we’d better spend a lot. If we know a lot, we should spend less.

Of course, if we all get better at sharing what we learn, and ensuring others get the value from our knowledge, we’d have to spend less overall on measuring impact. How about starting by publishing and sharing the thousands of evaluations charities and funders have already spent money on?

Tris Lumley
Head of Strategy, New Philanthropy Capital


In your March issue, Deepti Goel and Nachiket Mor make the appropriate arguments for randomized experiments (REs) – as far as they go. David Bonbright counters that REs don’t go far enough. More precisely, he notes that they may lead us in the wrong direction. Bonbright argues that REs misapplied to complex development processes may yield a result but tell us little about the process that produced it. The experiment may tell us – as Dylan Thomas wrote – ‘everything about the wasp except why’.

Our work at Continuous Progress Strategic Services focuses on similarly complex processes of policy change. Rigorous controlled experiments are impossible: we have no perfectly equivalent US Congress or town council in a parallel universe to serve as a control group. But we can plan carefully and set meaningful indicators of progress towards interim objectives along the path to the desired policy goal. More important, our indicators allow us to take stock, analyse what is happening, and change strategy if necessary.

In policy change, and in many of the development contexts Bonbright cites, evaluation is best viewed as a tool for learning and improvement. Less precise than Goel and Mor would like, I suspect, but our best option.

David Devlin-Foltz and Lisa Molinaro
Continuous Progress Strategic Services, The Aspen Institute

