
Questions about the practical utility of Randomized Control Trials

I got a “revise and resubmit” for my paper on making economics more useful on the very day the Nobel prize in economics was announced.
I have long been skeptical of the practical utility of Randomized Control Trials (RCTs) in general and of field experiments in development economics in particular. I had not gone into this in my first-round submission, but I think I’m going to risk it in the re-submission.
The argument I plan to make is as follows:
A foundational question for disciplinary economics is whether it should limit itself to being a science that privileges general propositions, or whether it should also encompass knowledge (as in engineering or medicine) that pertains to the design and use of physical or virtual artifacts (airplanes, drugs, surgical procedures).
The Deaton and Cartwright critique of RCTs correctly observes that RCTs are “optimized” to validate general principles rather than to evaluate or improve actual designs. Thus the RCT finding that textbooks alone don’t improve learning supports the general principle that complements matter; other field experiments support the general idea that incentives (including monetary incentives) matter. (The latter has been a staple of older econometric research as well: Zvi Griliches’s hybrid corn research comes to mind.)
Perhaps these are valuable contributions: earlier economists (going back to Mill) argued that their axiomatic models were “empirical” in that their axioms were grounded in direct personal experience; unlike natural scientists, therefore, they saw no need for experiments to validate their core premises. It can’t hurt that statistics and experiments support (and in some cases undermine) the core assumptions. And perhaps it’s helpful to remind policy makers that complements and incentives matter in a wide range of circumstances.
But RCTs are an impractical tool for going beyond evaluating general propositions to evaluating and optimizing specific designs: there are just too many variables that can be manipulated, too many goals and objectives that may be desired, and too many conditions under which designs are supposed to work. This is why in engineering and medicine no single testing tool does the job. Moreover, even collectively, the tools used in engineering and medicine are rarely decisive, so judgment must be exercised at the end, often through a dialectical process.
For instance, in pharma, RCTs come into play after in vitro, in vivo, and animal experiments. And the results of the RCTs themselves are debated by a panel of experts that “weighs” the evidence produced and renders a subjective judgment. Often the judgments are not unanimous.
There is, in addition, a practical question that, to my knowledge, has never been subjected to “evidence-based” research: what is the cost-benefit of validating intuitive principles through expensive field experiments? Some cost hundreds of thousands of dollars. Could that money and effort not be more productively used?
