Last month, I was on my way to speak at an IDB-sponsored conference on evaluation. Getting on the shuttle to DC, I bumped into a friend of mine who is the head of a technology-related company. On the plane I told him I was on my way to talk about the fad of doing RCTs in my field of development. He told me he had a great slide from the tech consulting company Gartner about the “Hype Cycle” in tech industries. As you see, this wonderful graphic shows a typical cycle of a tech idea or technique through five stages:
1. Technology Trigger
2. Peak of Inflated Expectations
3. Trough of Disillusionment
4. Slope of Enlightenment
5. Plateau of Productivity
Gartner tries to estimate where each new tech fad is in the cycle and how long it will take to reach the Plateau of Productivity. I have no idea whether their classifications are right (I am a tech neophyte), but they think Big Data is now over-hyped but could reach productivity in 2-5 years, whereas 3D Bio-printing is still headed up the cycle and is more than 10 years away from the Plateau of Productivity.
I immediately added this slide as the first of my presentation, as I think it is a great framing for the discussion of the use of RCT techniques in development practice.
The slide I had planned to start my talk with shows house prices in Las Vegas to illustrate the three stages of a bubble. As the bubble takes off, even rational, level-headed people cannot resist the siren call of “how can I not get in on this?” Then, one day, just as on a roller-coaster, there is a collective “uh-oh” as the clackety-clack of the climb stops and you know, even before the dive, that the screams are about to start. Then, as the collapse proceeds into the Trough of Disillusionment, the rapid disassociation from the excesses of the boom begins, and so does the finger pointing.
I think there is little question that randomized techniques were underutilized in development practice and research in 1993. I also think there is little question that in 2013 RCTs are in an overvaluation bubble and nearing the Peak of Inflated Expectations. (Of course, one of the true signs of the peak of a bubble is the increasing vehemence with which people who have invested their financial and human capital in the bubble deny that it is a bubble.)
At a World Bank Symposium on Assessment for Global Learning last week, Jishnu Das estimated that there are in the neighborhood of 500 evaluations of education interventions underway, at a cost of between $200K and $500K each. Assuming a typical cost of $300K, this is $150 million being spent on RCTs in just one field of development. It is hard to make the case that, of all the things money could be spent on to improve global education, this is the right allocation. In fact, it is impossible to make the case with evidence. One of the bemusing ironies of the RCT fad as a component of the “evidence based policy” movement is that its advocacy has been evidence free. There has never been any evidence (much less “rigorous” evidence) that RCT results would, as the implication of a validated positive theory of policy formulation, affect policy (Pritchett 2002, Pritchett 2010), or that they even produce evidence that should affect policy generally enough to be cost-effective (Pritchett and Sandefur 2013).
But as we collectively experience the “uh-oh” moment, I think the question is not so much whether we are at the peak, but how fast we can get through the Trough of Disillusionment, onto the Slope of Enlightenment, and on to the Plateau of Productivity. RCTs are one hammer in the development toolkit; previously, some protruding nails were ignored for lack of a hammer, but not every development problem is a nail.
The transformation to productivity will have to embed RCT advocacy into:
a) a reasonable “theory of change”—an articulated positive politics of policy formulation—that explains how and why RCT evidence will be incorporated into decision making,
b) a plausible model of how organizations adopt new practices at scale and how RCTs provide the kind of evidence that changes behavior, and
c) a truly scientific approach to external validity that acknowledges the hyper-dimensionality of the design space and the potentially rugged nature of the fitness function.
My modest contribution to this is “It’s all about MeE” (with Jeff Hammer and Salimah Samji), which proposes radically more randomization by using project implementation to “crawl the design space” to discover how to do what the implementing organization wants to do (as Jed Friedman suggests in his recent blog post). This is embedded in a more general theory of state organizational capability, which (re)discovers Hirschman’s idea that development projects are themselves a process of learning (see my papers with Matt Andrews and Michael Woolcock on capability traps and how to escape them).
(To see the actual resulting presentation at the IDB conference, see my PowerPoint slides or the video.)