With rigorous economic research and practical policy solutions, we focus on the issues and institutions that are critical to global development. Explore our core themes and topics to learn more about our work.
In timely and incisive analysis, our experts parse the latest development news and devise practical solutions to new and emerging challenges. Our events convene the top thinkers and doers in global development.
Each year, billions of dollars are spent on development programs, yet relatively few are subjected to rigorous studies of whether they actually work. In 2004, CGD set out to address this lack of good-quality impact evaluations, and our recommendations led to the creation of the International Initiative for Impact Evaluation (3ie) in 2009. The number and quality of impact evaluations have risen significantly, but there is still a long way to go to ensure that future development interventions are based on evidence of what works.
In recent years, a growing literature has measured the impact of education interventions in low- and middle-income countries on both access and learning outcomes. But whether those effect sizes are interpreted as large or small tends to rely on benchmarks developed by a psychologist in the United States in the 1960s. In this paper, we document the distribution of standardized effect sizes on learning and access drawn from hundreds of studies in low- and middle-income countries.
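As a rough illustration (not drawn from the paper itself), the most common standardized effect size is Cohen's d: the difference between group means expressed in units of the pooled standard deviation, which is what the 1960s benchmarks (roughly 0.2 small, 0.5 medium, 0.8 large) were defined against. A minimal sketch, with hypothetical test-score data:

```python
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference: the gap between group means
    divided by the pooled standard deviation of the two groups."""
    n_t, n_c = len(treatment), len(control)
    var_t = statistics.variance(treatment)
    var_c = statistics.variance(control)
    pooled_sd = (((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical test scores for a treated and a comparison group
treated = [62, 58, 65, 70, 61, 66]
comparison = [55, 60, 52, 58, 63, 54]
print(round(cohens_d(treated, comparison), 2))  # prints 1.6
```

The paper's point is that how a given d should be judged depends on the empirical distribution of effect sizes in comparable studies, not on these fixed benchmarks.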
In this paper, we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, a non-experimental estimate of a treatment effect comprises a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts, any claim of external validity for an experimental result must assume that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education (class size effects and gains from private schooling) which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.
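The decomposition in the first point can be sketched with a toy simulation (all numbers and variable names here are hypothetical, not from the paper): when people with higher potential outcomes are more likely to take up treatment, a naive difference in means equals the true effect plus a selection-bias term, while random assignment breaks that link and recovers the true effect.

```python
import random

random.seed(0)
TRUE_EFFECT = 2.0
N = 100_000

# Each person has a latent "ability" that raises their outcome
# whether or not they are treated.
ability = [random.gauss(0, 1) for _ in range(N)]

# Non-experimental setting: higher-ability people select into treatment,
# so selection is endogenous.
takes_up = [a > 0 for a in ability]
outcome = [a + TRUE_EFFECT * t for a, t in zip(ability, takes_up)]
treated_mean = sum(y for y, t in zip(outcome, takes_up) if t) / sum(takes_up)
control_mean = sum(y for y, t in zip(outcome, takes_up) if not t) / (N - sum(takes_up))
naive = treated_mean - control_mean  # = true effect + selection bias

# Experimental setting: a coin flip assigns treatment independently of ability.
assigned = [random.random() < 0.5 for _ in range(N)]
outcome_rct = [a + TRUE_EFFECT * t for a, t in zip(ability, assigned)]
rct_t = sum(y for y, t in zip(outcome_rct, assigned) if t) / sum(assigned)
rct_c = sum(y for y, t in zip(outcome_rct, assigned) if not t) / (N - sum(assigned))
rct = rct_t - rct_c

print(f"naive estimate: {naive:.2f}")  # roughly 3.6: true effect 2.0 plus ~1.6 of bias
print(f"RCT estimate:   {rct:.2f}")    # close to the true effect of 2.0
```

The paper's external-validity point follows directly: if the naive estimates differ across contexts, one must decide whether it is the treatment effect or the selection process that is varying, and that decision is an assumption, not a finding.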
Each year, billions of dollars are spent on thousands of programs to improve health, education, and other social sector outcomes in the developing world. But very few of these programs benefit from studies that could determine whether they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but also denies poor people crucial support to improve their lives.
The authors examine the Millennium Villages Project (MVP), an experimental and intensive package intervention to spark sustained local economic development in rural Africa, to illustrate the benefits of rigorous impact evaluation. Estimates of the project’s effects depend heavily on the evaluation method.
I never cease to be astonished by the amount of energy people put into claiming that randomized controlled trials (RCTs) are the be-all and end-all of impact evaluation methods, nor by the energy people put into claiming that RCTs are marginal, costly, and a waste of time.