Each year billions of dollars are spent on development programs, yet relatively few rigorous studies examine whether they actually work. In 2004, CGD set out to address this shortage of good-quality impact evaluations, and our recommendations led to the creation of the International Initiative for Impact Evaluation (3ie) in 2009. The number and quality of impact evaluations have risen significantly, but there is still a long way to go to ensure that future development interventions are based on evidence of what works.
This paper analyses the grades awarded in the 65 primary reviews undertaken by the UK Independent Commission for Aid Impact (ICAI) over its first eight years of operation, from 2011 to 2018. It finds that ICAI has directly evaluated £28bn of UK aid over the period. Around four-fifths of the spend assessed was graded as “satisfactory” (amber/green) or “strong” (green). The findings from ICAI reviews, and from this report, should inform the UK Government’s aid allocations between departments at the forthcoming spending review.
In this paper we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, non-experimental estimates of a treatment effect comprise a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts, any claim for external validity of an experimental result must assume that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education, class-size effects and the gains from private schooling, which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.
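A minimal sketch of the decomposition described above, in standard potential-outcomes notation (the symbols here are illustrative, not taken from the paper): for a non-experimental estimate in context c,

\mathbb{E}\big[\hat{\tau}^{\text{NX}}_{c}\big] = \tau_c + B_c,
\qquad
B_c = \mathbb{E}\big[Y_i(0)\mid D_i=1,\,c\big] - \mathbb{E}\big[Y_i(0)\mid D_i=0,\,c\big],

where \tau_c is the causal treatment effect in context c and B_c is the bias from endogenous selection into treatment. On this reading, extrapolating an experimental estimate to other contexts while attributing all cross-context variation in non-experimental estimates to selection amounts to assuming \tau_c = \tau for every c while B_c varies freely, which is precisely the assumption the authors note is rarely stated or defended.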
Each year billions of dollars are spent on thousands of programs to improve health, education and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether or not they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.
The impact evaluation world has changed dramatically through a range of initiatives at research institutions, think tanks, development agencies, and governmental policy units. It has now been seven years since CGD’s Evaluation Gap Working Group released “When Will We Ever Learn? Improving Lives Through Impact Evaluation,” and four years since the launch of 3ie.
The purpose of this conference is to reflect on what has been achieved in recent years, to consider how the environment has and has not changed, to assess existing initiatives aimed at improving the supply and use of high-quality evidence, and to provide ideas for 3ie as it considers the next stage of its strategy within this landscape. Please note that the afternoon sessions will include small-group discussions intended to generate specific and useful ideas for future action.
The authors examine the Millennium Villages Project (MVP), an intensive, experimental package intervention intended to spark sustained local economic development in rural Africa, to illustrate the benefits of rigorous impact evaluation: estimates of the project’s effects depend heavily on the evaluation method.
I never cease to be astonished by the amount of energy people put into claiming that randomized controlled trials (RCTs) are the be-all and end-all of impact evaluation methods, nor by the energy people put into claiming that RCTs are marginal, costly, and a waste of time.
The New England Journal of Medicine recently published the results of “the Oregon experiment,” based on the 2008 expansion of the US Medicaid program in Oregon. The study is one of very few randomized controlled trials on publicly subsidized health insurance that exist to guide health policy, and it found what some commentators considered a disappointing result: while health care utilization increased and households were protected from financial hardship, expanding Medicaid coverage had “no significant impact on measured physical health outcomes over a 2-year period.”