Each year billions of dollars are spent on development programs, with relatively few rigorous studies of whether they actually work. In 2004, CGD set out to address this shortage of good-quality impact evaluations, and our recommendations led to the creation of the International Initiative for Impact Evaluation (3ie) in 2009. The number and quality of impact evaluations have risen significantly, but there is still a long way to go to make sure future development interventions are based on evidence of what works.
The International Initiative for Impact Evaluation (3ie) has announced that Emmanuel (Manny) Jimenez will be the organization’s new Executive Director starting in early 2015. The selection of Jimenez represents a key transition for 3ie, which has moved quickly from start-up to maturity in just six years.
I never cease to be astonished by the amount of energy people put into claiming that randomized controlled trials (RCTs) are the be-all and end-all of impact evaluation methods; nor by the energy people put into claiming that RCTs are marginal, costly, and a waste of time.
In this paper we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, a non-experimental estimate of a treatment effect comprises a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts, any claim for external validity of an experimental result must assume that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education—class size effects and gains from private schooling—which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.
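The decomposition the abstract refers to can be sketched in notation (the symbols below are illustrative, not the paper's own):

```latex
% A non-experimental estimate in context c splits into a causal
% treatment effect and a selection-bias term:
\hat{\beta}^{\mathrm{NX}}_{c} = \tau_{c} + B_{c}
% External validity of an experimental estimate from one context
% c_0 then requires assuming \tau_{c} = \tau for all contexts c,
% so that any observed variation in \hat{\beta}^{\mathrm{NX}}_{c}
% across contexts is attributed entirely to variation in B_{c}.
```

This makes the tension explicit: the same cross-context variation that systematic reviews treat as pure selection bias could equally reflect genuine heterogeneity in the treatment effect itself.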
The impact evaluation world has changed dramatically through a range of initiatives at research institutions, think tanks, development agencies, and governmental policy units. It has now been seven years since CGD’s Evaluation Gap Working Group released “When Will We Ever Learn? Improving Lives Through Impact Evaluation,” and four years since the launch of 3ie.
The purpose of this conference is to reflect on what has been achieved in recent years, to consider how the environment has and has not changed, to assess existing initiatives aimed at improving the supply and use of high-quality evidence, and to provide ideas for 3ie as it considers the next stage of its strategy within this landscape. Please note that the afternoon sessions will be organized to include small group discussions with the intention of generating specific and useful ideas for future action.
The United Kingdom has been a stalwart funder and innovator in foreign assistance for almost 20 years. In 2011, it created the Independent Commission for Aid Impact (ICAI) to report to Parliament on the country's growing aid portfolio. ICAI is what the British call a QUANGO (a quasi-autonomous non-governmental organization) with a four-year mandate that is undergoing review this year. Recently, I took a look at the reports it has produced to see whether the organization is fulfilling its role in holding the country's overseas development aid programs accountable. I found one fascinating report that shows what ICAI could be doing, and many more reports that made me wonder whether ICAI is duplicating work already within the purview of the Department for International Development (DFID), the agency that accounts for most of the UK's foreign assistance programs.
In recent weeks, the public health world and political pundits alike have been abuzz about results from the "Oregon Experiment," a study published in the New England Journal of Medicine that finds no statistical link between expanded Medicaid coverage and health outcomes such as high cholesterol or hypertension. Limitations of the study aside, the Oregon Experiment is a good example of the importance of rigorously testing all US health programs, rather than just assuming 'more care = better health'. The Innovation Center at the United States Centers for Medicare and Medicaid Services, created under the umbrella of the Affordable Care Act, represents a new and encouraging approach to this problem, an approach that we think has important lessons for global health.
The New England Journal of Medicine recently published the results of "the Oregon experiment," based on the 2008 US Medicaid program expansion in Oregon. The study is one of very few randomized controlled trials on publicly subsidized health insurance that exist to guide health policy, and it found what some commentators considered a disappointing result: while health care utilization increased and households were protected from financial hardship, expanding Medicaid coverage had "no significant impact on measured physical health outcomes over a 2-year period."