As the world’s largest bilateral donor responsible for managing around $20 billion in annual funding, the US Agency for International Development (USAID) has a particular responsibility to take an evidence-informed approach to its work. It also has a congressional mandate to do so.
This paper revisits the concept of international development aid effectiveness and its measurement as part of a review of the Quality of ODA (QuODA) assessment published regularly since 2010.
The Quality of UK Aid Spending, 2011–2018: An Analysis of Evaluations by the Independent Commission for Aid Impact
This paper analyses the grades awarded in the 65 primary reviews undertaken by the UK Independent Commission for Aid Impact (ICAI) over its first eight years of operation, from 2011 to 2018. It finds that ICAI directly evaluated £28bn of UK aid over the period. Around four-fifths of the spending assessed was graded as “satisfactory” (amber/green) or “strong” (green). The findings from ICAI reviews, and from this report, should inform the UK Government’s aid allocations between departments at the forthcoming spending review.
Are USAID programs high impact and good value for money? Do they work? Do they generate more results for less cost than if the agency just gave poor people cash? We don’t always know the answers to those questions, but USAID is trying to find out.
Context Matters for Size: Why External Validity Claims and Development Practice Don't Mix - Working Paper 336
In this paper we examine how policymakers and practitioners should interpret the impact evaluation literature when presented with conflicting experimental and non-experimental estimates of the same intervention across varying contexts. We show three things. First, as is well known, non-experimental estimates of a treatment effect comprise a causal treatment effect and a bias term due to endogenous selection into treatment. When non-experimental estimates vary across contexts, any claim for external validity of an experimental result must assume that (a) treatment effects are constant across contexts, while (b) selection processes vary across contexts. This assumption is rarely stated or defended in systematic reviews of evidence. Second, as an illustration of these issues, we examine two thoroughly researched literatures in the economics of education—class size effects and gains from private schooling—which provide experimental and non-experimental estimates of causal effects from the same context and across multiple contexts.
This paper analyzes some of the elements behind the perception, in the realm of social policy, that too little evidence is produced and used on the impact of specific policies and programs on human development. The authors propose developing Results-Based Social Policy Design and Implementation systems that focus public attention on better outcomes.
When Does Rigorous Impact Evaluation Make a Difference? The Case of the Millennium Villages - Working Paper 225
The authors examine the Millennium Villages Project (MVP), an experimental and intensive package intervention to spark sustained local economic development in rural Africa, to illustrate the benefits of rigorous impact evaluation. Estimates of the project’s effects depend heavily on the evaluation method.
In September 2008 official aid donors and recipients will meet in Accra, Ghana, to discuss how to make development assistance more effective. CGD president Nancy Birdsall and co-author Kate Vyborny suggest that advocates of better aid who really want a win at Accra forget haggling over broad conceptual issues and focus instead on getting a public commitment from donors to one or more very concrete steps to improve aid effectiveness and to hold donors accountable.
Learning from Development: the Case for an International Council to Catalyze Independent Impact Evaluations of Social Sector Interventions
This brief outlines the problems that inhibit learning in social development programs, describes the characteristics of a collective international solution, and shows how the international community can accelerate progress by learning what works in social policy. It draws heavily on the work of CGD's Evaluation Gap Working Group and a year-long process of consultation with policymakers, social program managers, and evaluation experts around the world.
Each year billions of dollars are spent on thousands of programs to improve health, education, and other social sector outcomes in the developing world. But very few programs benefit from studies that could determine whether or not they actually made a difference. This absence of evidence is an urgent problem: it not only wastes money but denies poor people crucial support to improve their lives.