The latest trend to excite development economists is the randomised controlled trial. Borrowed from biology and healthcare, it works like this: a researcher assigns people to two or more groups by the roll of a die, giving each group a different drug or treatment. Because the groups differ only by chance, the scientist can be fairly sure that the best-responding group has been given the intervention that works.
In development economics the researcher tests policies or techniques instead of drugs. One group, randomly chosen, receives the policy being tested and one doesn't, with an attempt made to keep all other circumstances equal. This avoids the problem of selection bias: the economist might distort the results by choosing a group that is particularly willing to take part, or one where she knows a technique will work well. It's why the results of those pop-up surveys you see after visiting a website aren't to be trusted: the answers come only from the type of person who can be bothered to fill them in.
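The selection-bias point can be made concrete with a toy simulation. The sketch below uses purely illustrative numbers (not drawn from any real trial): the keener, already-better-off people volunteer for the programme, so a naive comparison of volunteers against everyone else inflates the measured effect, while random assignment recovers it.

```python
import random

random.seed(0)

# Hypothetical population: each person has a baseline outcome, and the
# people who would volunteer ("keen") already do better at baseline.
# All parameters here are made up for illustration.
N = 10_000
TRUE_EFFECT = 2.0

people = []
for _ in range(N):
    baseline = random.gauss(10, 3)
    keen = baseline > 11  # keenness correlates with the outcome
    people.append((baseline, keen))

def mean(xs):
    return sum(xs) / len(xs)

# Randomised trial: a coin flip, not keenness, decides who is treated.
treated, control = [], []
for baseline, _ in people:
    if random.random() < 0.5:
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)
rct_estimate = mean(treated) - mean(control)

# Self-selection: only the keen take part, compared with everyone else.
volunteers = [b + TRUE_EFFECT for b, keen in people if keen]
others = [b for b, keen in people if not keen]
biased_estimate = mean(volunteers) - mean(others)
```

With these numbers the randomised estimate lands near the true effect of 2, while the self-selected comparison roughly triples it, since the volunteers' head start is mistaken for the programme's benefit.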
In a 2007 paper Benjamin Olken tried to measure ways of reducing corruption in Indonesia. He chose 608 villages where roads were to be built, splitting them into groups: villages facing a government audit versus those without; villages invited to accountability meetings versus those not invited; and villages whose meeting invitations came with anonymous comment forms. Olken found that less money went missing in the villages with government audits than in the ones relying on grassroots participation in monitoring. The conclusion: audits cut corruption better than the other methods.
Olken works at MIT's Abdul Latif Jameel Poverty Action Lab, of which Esther Duflo and Abhijit Banerjee are directors. They're the authors of a recent bestseller, Poor Economics, which uses experimental evidence to discover which methods best reduce poverty. Over the years they have run more than 240 experiments in 40 countries.
Some of their conclusions run against received wisdom. Political corruption isn't as damaging as commonly supposed. Microfinance (tiny loans for poor people) is no magic bullet.
Fans of randomised trials are right that there’s no big answer. Lots of development fads boil down to the researcher’s arbitrary preferences rather than to hard evidence. Development sometimes seems like a Roman arena in which big, shouty men try out different ways of stabbing each other. The actual subjects — poor people — are often forgotten.
Experimentalists are also right to deny that everything's down to corruption. That's often just a sloppy byword for blaming the poor for their own problems. If you didn't tolerate kleptocrats, the argument goes, you'd be richer. But lots of corrupt countries aren't democracies, and plenty of corrupt countries have got rich (China springs to mind). Some reasonably graft-free nations remain poor all the same.
But I'm a bit wary of this sort of experimental approach, which amounts to tinkering rather than really changing underlying conditions. There's no understanding of capitalism as a system and very little about the exploitation of labour by the wealthy. You don't need to be a raving socialist to see that what's good for the poor often isn't good for the rich. We get cheap iPhones because Foxconn holds wages down.
As this review says, there's little real understanding of power in development. It's as if the world is gradually tiptoeing toward a better understanding of poverty, and by carefully applying a few well-founded scientific results we'll eventually live as one happy tribe. Call me a cynic, but I think power relations play a bigger role than most people admit. It's actually beneficial to the rich to have a multi-million-strong pool of underpaid or unemployed people clamouring for jobs, because it keeps wages down and lets us have cheap stuff.
Experiments may work in medicine, but they can be methodologically dubious in social science. The gist of philosopher Nancy Cartwright's presentation at INET is that the sample size needed for a fully objective study can be so enormous as to be impractical. Researchers often draw completely unwarranted conclusions from these studies. As Marx and others pointed out, 'empirical facts' are often distorted by their proponents: one person's common sense is another's ideology. I'd imagine that the sorts of questions asked by most middle-class academic Americans aren't always the same as the sorts of things that many poor people themselves would ask.
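The sample-size worry has a simple arithmetic core: in the standard two-sample comparison of means, the subjects needed per arm grow with the square of the noise-to-effect ratio. A minimal sketch (with the usual z-values for a two-sided 5% test at 80% power hard-coded; the numbers fed to it are illustrative):

```python
import math

def n_per_arm(effect, sd):
    """Approximate subjects per arm to detect a difference in means of
    `effect` when outcomes have standard deviation `sd`
    (two-sided alpha = 0.05, power = 0.8)."""
    z_alpha, z_beta = 1.96, 0.84  # standard normal critical values
    return math.ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# Halving the detectable effect quadruples the required sample:
n_per_arm(0.5, 1.0)  # a moderate effect needs a few dozen per arm
n_per_arm(0.1, 1.0)  # a small effect needs well over a thousand per arm
```

Field experiments typically chase small effects in very noisy outcomes, which is why credible trials can demand samples far beyond what a single study can afford.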
Most of these trials also seem to fail to take enough account of context. What works in one culture might not be generalisable elsewhere. Cash incentives don’t work as well in the communitarian cultures of the South Pacific as they do in, say, Kenya. When people have plenty of food and are generally happy, they’re less likely to work hard for more pay.
Because humans are at once the architects of their own theories and their subjects, the problem of reflexivity emerges. The results of an experiment may be valid in one situation for a short period, but they can't be taken as hard fact for eternity in the way findings in natural science can. Most findings are thus provisional and context-dependent. Such is the eternal dismalness of social science. This isn't a body-blow to experimentalism in economics, but it does reduce its importance.