
Publications

 

December 3, 2020

Establishing USAID as a Leader in Evidence-Based Foreign Aid

As the world’s largest bilateral donor responsible for managing around $20 billion in annual funding, the US Agency for International Development (USAID) has a particular responsibility to take an evidence-informed approach to its work. It also has a congressional mandate to do so.

April 17, 2019

The Quality of UK Aid Spending, 2011–2018: An Analysis of Evaluations by the Independent Commission for Aid Impact

This paper analyses the grades awarded in the 65 primary reviews undertaken by the UK Independent Commission for Aid Impact (ICAI) over its first eight years of operation, from 2011 to 2018. It finds that ICAI directly evaluated £28bn of UK aid over the period. Around four-fifths of the spending assessed was graded as “satisfactory” (amber/green) or “strong” (green). The findings from ICAI reviews, and from this report, should inform the UK Government’s aid allocations between departments at the forthcoming spending review.

August 7, 2013

Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix - Working Paper 336

In this paper we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, non-experimental estimates of a treatment effect comprise a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts any claim for external validity of an experimental result must make the assumption that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education—class size effects and gains from private schooling—which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.

April 18, 2011

Toward Results-Based Social Policy Design and Implementation - Working Paper 249

This paper analyzes some of the factors behind the perception, in the realm of social policy, that too little evidence is produced and used on the impact of specific policies and programs on human development. The author proposes developing Results-Based Social Policy Design and Implementation systems that focus public attention on better outcomes.

Miguel Székely
August 21, 2008

A Little Less Talk: Six Steps to Get Some Action from the Accra Agenda

In September 2008, official aid donors and recipients will meet in Accra, Ghana, to discuss how to make development assistance more effective. CGD president Nancy Birdsall and co-author Kate Vyborny suggest that advocates of better aid who want a win at Accra should stop haggling over broad conceptual issues and instead focus on securing a public commitment from donors to one or more very concrete steps to improve aid effectiveness, and on holding donors accountable for those commitments.

Kate Vyborny
May 31, 2006

Learning from Development: The Case for an International Council to Catalyze Independent Impact Evaluations of Social Sector Interventions

This brief outlines the problems that inhibit learning in social development programs, describes the characteristics of a collective international solution, and shows how the international community can accelerate progress by learning what works in social policy. It draws heavily on the work of CGD's Evaluation Gap Working Group and a year-long process of consultation with policymakers, social program managers, and evaluation experts around the world.

May 31, 2006

When Will We Ever Learn? Improving Lives Through Impact Evaluation

Each year, billions of dollars are spent on thousands of programs to improve health, education, and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.

The Evaluation Gap Working Group