The short answer is no. At first sight, it seems that randomized controlled trials (RCTs) and Constituent Voice (CV) could be substitutes for each other because they both seek to ascertain a programme’s effect. In fact they’re not interchangeable at all. An RCT is an experimental design, a way of isolating the variable of interest, whereas CV is a ‘ruler’ – a way of gathering information that might be used in an experiment or in other ways.
Let’s look at an example of an RCT. Suppose we want to know the effect of Tostan’s human rights education programme in West Africa (which works on many things but is most famous for significant reductions in what its founder Molly Melching calls female genital cutting). The most rigorous test would be as follows. First, measure what’s going on in a load of villages. Then, choose some villages to have Tostan’s involvement and others not: choose them at random. (It’s no good to have villages opt in because maybe only the most progressive villages will opt in, meaning that we won’t know if changes result from their progressiveness (‘a selection effect’) or from the programme itself.) Finally, after the programme, measure again what’s going on in each village, and compare the change in the villages that got the programme with the change in those that didn’t.
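For readers who think in code, the four steps above can be sketched as a toy simulation. This is not Tostan's actual evaluation: the village names, baseline prevalences and the 0.15 'programme effect' are all invented for illustration.

```python
import random

random.seed(0)

# Step 1: measure baseline outcomes in 20 hypothetical villages
# (here, an invented prevalence of the practice, between 0.4 and 0.8).
villages = {f"village_{i}": random.uniform(0.4, 0.8) for i in range(20)}

# Step 2: assign villages to the programme AT RANDOM, not by opt-in,
# so that selection effects cannot drive the comparison.
names = list(villages)
random.shuffle(names)
treatment, control = names[:10], names[10:]

# Step 3: measure again after the programme. We simulate an assumed
# 0.15 reduction in treated villages, plus a little noise everywhere.
def endline(name):
    drop = 0.15 if name in treatment else 0.0
    return villages[name] - drop + random.gauss(0, 0.02)

def mean_change(group):
    return sum(endline(n) - villages[n] for n in group) / len(group)

# Step 4: compare the change in treated villages with the change in controls.
estimated_effect = mean_change(treatment) - mean_change(control)
print(round(estimated_effect, 3))  # recovers something close to the simulated -0.15
```

Because assignment is random, the comparison of changes isolates the programme's contribution from everything else going on in the villages.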
CV and RCTs can – and I’d argue should – sit alongside each other. The classic uses of CV are to understand what people want and what they think of what they’re getting. Those are obviously important – and I champion work on both – but answers to these questions may not accurately identify a programme’s impact, which a well-run RCT can.
Take, for example, two microfinance ‘village bank’ programmes that targeted poor people in north-east Thailand. It’s quite possible that people in those villages wanted to be less poor, and liked the microcredit programme that they received. So the programme would have come out well if measured using CV. It came out well on some other measures too. But it fared badly when analysed with a well-run RCT (RCTs can be run badly): people who got microloans did do better than those who didn’t, but the analysis showed that those differences were entirely due to selection effects and had nothing to do with the microloans themselves.
Distinguishing selection effects from programme effects is hard – routinely foxing even highly trained doctors and researchers – and can’t be done by the naked eye alone. It’s quite possible that ‘beneficiaries’ might think that a programme is helping because they (like everyone else) conflate selection effects with programme effects. We can’t rely on CV to identify impact.
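To make the distinction concrete, here is a toy simulation – invented numbers, not the Thai study’s data. A programme with zero true effect still ‘looks’ beneficial when the most driven households opt in, because the trait that makes them opt in also predicts their outcomes; random assignment makes the illusion disappear.

```python
import random

random.seed(1)

# 1,000 hypothetical households. 'drive' stands in for unobserved traits
# (entrepreneurialism, connections) that affect both opting in and income.
households = [random.gauss(0, 1) for _ in range(1000)]

def income(drive):
    # Income depends on the unobserved trait plus noise.
    # The programme itself adds NOTHING here: its true effect is zero.
    return 100 + 20 * drive + random.gauss(0, 5)

# Self-selection: higher-drive households are likelier to take a loan.
opted_in, opted_out = [], []
for d in households:
    p = 0.8 if d > 0 else 0.2
    (opted_in if random.random() < p else opted_out).append(d)

avg = lambda xs: sum(xs) / len(xs)

# Naked-eye comparison: loan-takers look much better off...
naive_gap = avg([income(d) for d in opted_in]) - avg([income(d) for d in opted_out])

# ...but with random assignment, the apparent 'effect' vanishes,
# because randomisation balances the unobserved trait across groups.
rct_a, rct_b = [], []
for d in households:
    (rct_a if random.random() < 0.5 else rct_b).append(d)
rct_gap = avg([income(d) for d in rct_a]) - avg([income(d) for d in rct_b])

print(round(naive_gap, 1))  # clearly positive, despite a zero programme effect
print(round(rct_gap, 1))    # close to zero
```

The naive comparison is exactly what the naked eye – or an enthusiastic participant – sees, which is why we can’t rely on CV to identify impact.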
Well then, in a world of rigorous evaluations, why do we need CV?
In a world of rigorous evaluations, why should we ask people what they want? Answer: because there are legion tales of donors plonking (say) a school in a community that really wanted a well. Rigorously evaluating the effect of the school totally misses that it wasn’t wanted, and the erosion of self-determination caused by non-consultative ‘donor plonking’. We can tell that consultation with ‘beneficiaries’ is complementary to rigorous research because they’re both used in evidence-based medicine (eg to establish what to research: see the article about the James Lind Alliance).
And in a world of rigorous evaluations, why should we ask people what they think of what they’re getting? Answer: again because they’ll tell us things that we didn’t know that could improve delivery. That staff are rude. That staff are often late. That the clinic should open half an hour earlier because that’s when the bus arrives. That the nurse giving the vaccines could be less scary.
Well-run RCTs are unparalleled in their ability to isolate a single factor and thereby identify the effect of that factor. But there are obviously instances where that approach is inappropriate. They include: when controlling for that factor would be unethical or illegal; when the available sample size is too small to yield statistically significant results; when the cost of conducting the study would outweigh the benefits; when the outcome is unmeasurable (such as measuring the effectiveness of alternative ways of honouring the dead); when a cheaper method is available (perhaps you have decent historical data and just need to analyse it). They are also inappropriate when you want to find out something apart from the effect of a particular factor, eg users’ opinions or perceptions of something. So no, CV is not a proxy for RCTs. As so often, the answer is ‘both’.
Caroline Fiennes is director of Giving Evidence and author of It Ain’t What You Give, It’s The Way That You Give It. Email firstname.lastname@example.org
This talk (17 min) rattles through the issues of quality, incentives and non-publication in charities’ research.