Another response to Caroline Fiennes and Ken Berger’s mea culpa

 

Genevieve Maitland Hudson


In a recent article for Alliance Magazine, Caroline Fiennes and Ken Berger offer a mea culpa for the impact revolution in social programmes and suggest two ways in which we ought to work in future in order to do a better job on evidence.

Their first recommendation is that social programmes should be consumers rather than producers of evidence.

Their second is that impact should be assessed by independent experts.

Fiennes and Berger come to these conclusions after analysing the ‘system’ in which social programmes operate and noting a set of perverse incentives and an absence of appropriate skills. Their response is to take the bulk of evidence gathering out of programmes altogether.

There are a couple of assumptions at work in that argument that deserve to be analysed in detail.

One assumption is that independence will get us away from the mess and muddle that comes with self-evaluation. This implies that independent evaluators are at one remove from the ‘system’ of social programmes and therefore immune to its incentives.

That ought to give us pause.

We need to ask blunt questions about who is funding these independent experts, and whether that funding really secures their immunity from influence. An academic funded by her university to produce a systematic review of, say, evaluations of interventions to reduce reoffending is perhaps unlikely to be influenced by those running the interventions on the frontline. It can be reasonably argued that she is 'looking in' on those social programmes in a disinterested way.

An evaluation consultancy retained by a provider (or a group of providers) of those same interventions is in a very different position. It is hard to see how that consultancy is outside the ‘system’ in any meaningful way at all.

More fundamentally, the idea that research of this kind is ever free from broader influences and judgements is almost certainly a false one. There is a thriving school of research in science studies that argues that all science, from quarks to Maxwell’s equations, is contingent, and might have gone in entirely different, but equally productive, directions under a different set of circumstances. This school of thought outlines a ‘robust fit’ between theories, models and apparatus that settles into an accepted narrative about a series of phenomena. This all takes place amongst “practices, bodies, places, groups, instruments, objects, nodes, networks”[1]. Call it a ‘system’, if you like.

There’s no getting out of these kinds of ‘systems’, or even standing outside them temporarily. What we can and should do is remain alert to them and to how they operate. This requires reflectiveness and transparency. On its own, independence is likely to be a spurious guarantee of the quality of evidence of social programme effectiveness.

Fiennes and Berger also diagnose an absence of the required skills and resources to produce valid evidence within social programmes themselves. This is where they make another assumption, this time overtly:

“… (our definition of good impact research) requires knowing about sample size calculations and sampling techniques that avoid ‘confounding factors’ – factors that look like causes but aren’t – and statistical knowledge regarding reliability and validity.”

Whether or not social programmes have this kind of knowledge is an empirical question; I’m prepared to accept the authors’ statement that they don’t. I don’t so readily accept that this is the only kind of evidence worth collecting.
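To give a sense of what that definition demands in practice, here is a minimal sketch (my own illustration, not the authors') of a conventional sample size calculation for a two-group comparison, using the standard normal approximation. The effect sizes, significance level and power below are illustrative assumptions, not figures from the article.

import math
from scipy.stats import norm

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm of a two-group comparison.

    effect_size is Cohen's d: the difference in means divided by the
    pooled standard deviation. Uses the standard normal approximation
        n = 2 * (z_{1 - alpha/2} + z_{power})**2 / d**2
    """
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_power = norm.ppf(power)           # quantile for the desired power
    n = 2 * (z_alpha + z_power) ** 2 / effect_size ** 2
    return math.ceil(n)

# Illustrative figures only: a 'medium' effect (d = 0.5) needs roughly
# 60-65 participants per group; a 'small' effect (d = 0.2) needs
# roughly 400 per group.
print(sample_size_per_group(0.5))
print(sample_size_per_group(0.2))

Even this back-of-the-envelope version shows why such research is specialised work: the calculation presupposes a comparison group, an expected effect size and a tolerance for error, and those choices are themselves matters of judgement.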

This assumption, much like the assumption about independence, appears to be aiming at the maximum amount of ‘purity’ in the data produced by social programmes.

The ‘purity test’ is one way of looking at measurement, but it isn’t the only one.

Measures don’t have to be perfectionist to produce worthwhile information. We may have become used to thinking of measures in perfectionist terms, as absolute and defined according to fixed convention, but this is a relatively recent development.

Measures can be, and for a very long time were, representational. They were a means, not of holding ourselves up to a conventional and ‘validated’ standard, but of helping us to achieve our ends. Instead of saying a field consisted of X number of hectares, we measured in terms of days of labour or probable yield. Agricultural measures of this kind were predictive of a desirable outcome for us as labouring human beings. There is a case for saying that representational measures of this sort would be infinitely more useful to social programmes, and for suggesting that those best placed to implement them, and collect data using them, are within those programmes, not outside them.

Statistical research methods of the kind advocated by Fiennes and Berger have status and respectability, but that doesn't necessarily make them the best tools we can use, and it certainly doesn't make them the only ones. There is nothing to stop us using representational measures to assess the effectiveness of our social programmes. Indeed, the authors hint at this in passing by approving the collection of feedback.

Using measures that are responsive to the messiness of our ‘systems’ and the reality of human aspiration to account for our impact, now that would be truly revolutionary.

Genevieve Maitland Hudson is a researcher and consultant. She works with the consultancy Osca. Email gmhudson@osca.co


Footnotes

  1. Bruno Latour, Science in Action (Harvard University Press, 1987)

Comments (4)

Gen Maitland Hudson

Ok. I get you. Without going into too much detail in BTL comments I guess I would say I do disagree, and my interests are no more vested than yours, I don't think, but obviously vested to some degree. My points of contention would be: a) the scientific method (whether it is "unbiased" and free from the kinds of problems that come with practice-based measurement), b) the transferability of the scientific method to social programmes, and c) the definition of "misleading" in this context, and whether or not experimental and quasi-experimental research does a better job of directing spending on its own. Admittedly, being argumentative, I could probably come up with some others if pushed...


Caroline Fiennes

What I'm saying is this. It's really not radical at all. Resources should be allocated to:

- problems which the intended beneficiaries (whatever we choose to call them) think are priorities. That is a role for feedback; and
- effective responses to those problems.

Discerning which interventions are effective and which just *sound effective* is hard - really hard. It needs what you call 'statistical research methods' (I would call it 'the scientific method', though I suspect we mean the same thing). That discernment - i.e. research into impact - needs to be good quality. That's basically all I'm saying. Not all impact research is good quality - that is, some is bad quality, i.e. gives misleading answers which cause us to misspend our resources. The chances of research being poor quality (i.e. dangerously misleading) are higher if (a) it's done on too small a budget, (b) it's done by people who aren't skilled at doing it, and/or (c) it's done by people who are biased, e.g. who have a strong vested interest in finding a particular answer (e.g. because the survival of their employer depends on it). I'm not saying that non-profits *in principle* can't do good quality research; rather that it's a lot to expect of them to assess themselves unbiasedly, and furthermore, if one looks at their impact research, it is often weirdly positive. A funder said to me on this: "of course we don't trust it... it all seems to be implausibly positive" {in www.giving-evidence.com/info-infrastructure}. On quality, I refer you to the fact that (the last time I looked) none of the evaluations assessed by Project Oracle had reached beyond its level three, which in social science terms isn't terribly exacting. I'm really surprised if anybody (who isn't a vested interest...) disagrees with this if they've understood it!


Gen Maitland Hudson

And a v. good thing, although I reckon still not enough to guarantee the quality of the evidence. But is it then not so much the independence as the expertise you're advocating?


Caroline Fiennes

Hey Gen. Yes, you're right: I certainly don't think that a consultant or anybody else paid by an NGO is independent of that NGO. It's for that reason that health studies (i.e. studies in the field which has thought hardest about all this) are often required to state who funded them.

