Psychologically, we are hard-wired to be more attuned to negative news and events. When it comes to the state of the world’s poor and vulnerable, there is no shortage of negative news these days. Over the last few years we have seen the war in Syria go on longer than World War II, the number of refugees reach its highest level ever recorded, famine take hold in four countries, and a deadly hemorrhagic fever kill thousands. Perhaps it is unsurprising then that the results of a global poll showed the vast majority of people believe absolute poverty has increased globally.
Happily, however, the opposite is true. In the last 25 years, the proportion of people living on less than $1.25 a day has dropped by two-thirds. In fact, poverty is falling almost regardless of the poverty line used (for those who think $1.25 is too low to measure well-being meaningfully). In this same time period, six formerly low-income countries have achieved middle-income status, child mortality has declined by more than half, life expectancy in low- and middle-income countries has increased by seven years, and the proportion of low- and middle-income countries considered “not free” has gone from nearly half to under a third. In short, there is good cause for optimism.
Most of this success is due to major global forces such as trade and cross-border labor mobility. And much of the credit goes to the governments and citizens of developing countries themselves for pursuing the policies that have enabled donor, private sector, and (increasingly) their own resources to translate into development outcomes. But development assistance—including US aid—has made important contributions.
As policymakers in the United States increasingly ask fundamental questions about the value of foreign aid investments—and as they struggle to ensure limited funds are put to best use—here is a brief take on what we know—and what we do not—about the effectiveness of US foreign assistance efforts. This note then offers further thoughts on priorities for the structure and programming of US foreign assistance given the balance of information.
There is evidence that foreign aid works.
While we do not have aggregate evidence of the overall effect of the US foreign assistance portfolio, there are some well-documented successes.
Health is the natural place to start. Global health spending accounts for a significant portion—about 23 percent—of the US international affairs budget and has somewhat more measurable outcomes than many other foreign assistance objectives. In its Millions Saved series, the Center for Global Development (CGD) documented case studies of health interventions that resulted in cost-effective health gains. Among these are several success stories that benefitted from US government funding, including the eradication of smallpox, a malaria control program in Zambia that saved 33,000 children’s lives from 2001 to 2010, the swift interruption of a polio outbreak in Haiti, a vitamin A program in Nepal that halved the mortality rate for children under 5 in five years, and a 99.7 percent drop in guinea worm prevalence in Asia and sub-Saharan Africa. In addition, the United States is by far the largest donor addressing the HIV epidemic. The President’s Emergency Plan for AIDS Relief (PEPFAR) is providing 11.5 million people with lifesaving antiretroviral treatment (ART), and has enabled nearly 2 million babies who otherwise would have been infected to be born HIV-free. Another major US health program, the President’s Malaria Initiative, has contributed to the global decline in malaria that has saved 6 million lives since 2000. It’s also worth noting that some of these health programs can deliver big impact at a relatively low cost. For instance, researchers found that vaccination programs in nearly 100 low- and middle-income countries yield returns 16-44 times the cost of the program.
Though health often dominates the conversation about “what works” in foreign assistance, there are notable successes in other sectors as well. One of the biggest in history is the Green Revolution: the transfer and adoption of new agricultural technologies in the mid- to late twentieth century, partly funded by the US government, is credited with saving millions of people from starvation. A few examples of more recent successes that are directly attributable to US-supported interventions include:
- The US Agency for International Development’s (USAID) food security program in the Balochistan region of Pakistan, whose crop interventions (and, to a lesser extent, water and livestock interventions) generated an estimated $2 million in direct monetary impact for beneficiary households over its three-year life, yielding a positive return on investment;
- Two basic education projects funded by the Millennium Challenge Corporation (MCC) in Burkina Faso and Niger that yielded, several years after program completion, significant positive impacts on school enrollment and test scores, especially for girls;
- USAID’s Learn to Read program in Mozambique, which improved reading competency and lowered absenteeism rates among both students and teachers; and
- Again in Mozambique, a USAID-sponsored initiative to encourage political participation via an SMS campaign that increased voter turnout.
What about growth? On a macro level, Michael Clemens and others found that increases in aid have been followed on average by modest increases in investment and growth. In fact, the majority of aid-growth studies over the last 10 years have yielded similar (caveated) findings, suggesting some academic convergence (though not complete consensus) on this issue.
But not all aid works.
Of course, first there is the question of what it means for aid to “work.” Quite a lot of US foreign assistance is given to strategic allies as a key element of our bilateral relationship. In these cases, the objective is to further the economic, security, or political foreign policy interests of the United States (much of the Economic Support Fund account, for instance), not necessarily to achieve development outcomes (e.g., reduced poverty, improved health, strengthened institutions)—even if the funds support investments in development-oriented areas. It is hard to judge whether this aid “worked” on the basis of the achievement (or not) of development outcomes where that was not the core goal to start.
Even where development outcomes are the goal, it is easy enough to find individual failed interventions (PlayPumps, which received some US funding, is a well-known example). Often, results are mixed (e.g., evaluations of three MCC farmer training programs found increases in farm income, but not household income, the goal of interest; a USAID-funded nutrition program in Nepal improved nutritional status in some target regions, but not in others).
As stewards of taxpayer dollars, US government officials should generally seek to minimize spending money on projects that do not achieve the desired results. On the other hand, failure should not be considered categorically bad.
This is not to gloss over the very real, pernicious factors that contribute to failed aid programs. For example, donors may superimpose their strategic or bureaucratic priorities with insufficient regard for the partner country’s context or the priorities of its government, citizens, and intended beneficiaries. Though donors and developing countries have coalesced around certain principles intended to mitigate some of the problems that contribute to failure (e.g., country selectivity, local ownership, managing for results), implementation of these principles by donors, including the US government, has varied in practice.
Failure, however, is not necessarily bad to the extent that it is a natural byproduct of experimentation to find out what does work. Assuming that every dollar spent will yield its anticipated results would, by definition, hamper innovation. Seen this way, failure becomes bad only when it takes place in an environment that discourages acknowledgement and discussion of poor results. Indeed, as Jonathan Glennie and Andy Sumner point out, the question really should not be “does aid work?” but rather “under what conditions does aid work, and how can it work better?”
The big issue is gaps in understanding what works (though these are narrowing).
Over 10 years ago, CGD’s When Will We Ever Learn report characterized the extent of the evidence gap in development programming. The authors said, “after decades in which development agencies have disbursed billions of dollars for social programs, and developing country governments and nongovernmental organizations (NGOs) have spent hundreds of billions more, it is deeply disappointing to recognize that we know relatively little about the net impact of most of these social programs.”
The state of evidence has gotten better since then, though big gaps remain (especially when you consider the limitations to applying learning across contexts: what we know worked in Bosnia might not work the same way in Liberia, for instance). The International Initiative for Impact Evaluation’s (3ie) recently released evidence gap maps compile what we know about what works in a variety of areas and highlight where our understanding is limited. These useful tools can (and should) help drive decisions about where and how to invest, and where to focus efforts to increase learning.
The US government has made important contributions to development learning over the last decade, as well. Since the release of its well-regarded evaluation policy in 2011, USAID has released over 1,000 evaluations (of varying quality and rigor), most of which have been used to inform programmatic decisions. MCC, with its built-in focus on results, covers almost 85 percent of its portfolio (by value) with an ex-post evaluation; roughly half of these will be impact evaluations. But there is still more we can do to prioritize rigorous evaluation of our aid programs.
In the face of budget cuts, how can evidence of what works lead to smarter funding allocation?
In the face of proposed cuts to US foreign assistance, calls to conduct an evidence-based review of the aid portfolio are gaining traction. Such a review should take into account a wide variety of factors, including country-level factors like need, as well as sector/program-level factors like the extent to which programming supports public goods. Program performance is also frequently raised as an important criterion to inform allocation decisions.
This is eminently reasonable, though not entirely straightforward. Evaluations, which typically offer specific findings about individual programs, have limited utility in informing cross-sectoral allocations. An evaluation (like the Mozambique governance evaluation mentioned above) can tell you whether people’s voting behavior is influenced more by a free newspaper than by SMS messages, but it cannot tell you whether to allocate funds to democracy and governance over basic education. Comparing performance-toward-targets across different indicators should be met with similar skepticism; it is hard to know what to make of results that suggest, for instance, that only 14 percent of targeted US government-supported energy generation transactions reached closure in FY2015 but 136 percent of targeted host country NGOs monitoring human rights received US government support. Clearly, the answer is not necessarily to fund more human rights activities and less energy generation.
The bigger question to answer is one of opportunity cost: did a project work well enough to justify spending on it rather than on an alternative? Here cost-benefit analysis (CBA) can be informative, though it bears mention that high-profile attempts to use CBA to prioritize across sectors, while lauded by some, have also drawn criticism, illustrating the inherent difficulties of such an exercise. This does not mean that data on program performance and cost effectiveness, where available, should not be considered. There should simply be no expectation that such data will be bluntly prescriptive for cross-sectoral allocations.
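To make the cost-benefit logic concrete, the sketch below computes a discounted benefit-cost ratio (BCR), the standard summary statistic behind comparisons like these. All program names, dollar streams, and the 3 percent discount rate are hypothetical placeholders chosen for illustration; they are not drawn from any actual evaluation cited here.

```python
# Minimal sketch of a discounted benefit-cost ratio calculation.
# All figures below are invented placeholders, not real program data.

def benefit_cost_ratio(costs, benefits, discount_rate=0.03):
    """Ratio of present-value benefits to present-value costs.

    costs, benefits: per-year dollar streams (year 0 first).
    discount_rate: annual social discount rate (3% is a common default).
    """
    def present_value(stream):
        return sum(x / (1 + discount_rate) ** t for t, x in enumerate(stream))
    return present_value(benefits) / present_value(costs)

# Two hypothetical five-year programs: A is front-loaded in cost with
# delayed benefits; B has flat costs and flat benefits.
program_a = benefit_cost_ratio(costs=[100, 20, 20, 20, 20],
                               benefits=[0, 60, 80, 80, 80])
program_b = benefit_cost_ratio(costs=[50, 50, 50, 50, 50],
                               benefits=[40, 40, 40, 40, 40])

print(f"Program A BCR: {program_a:.2f}")
print(f"Program B BCR: {program_b:.2f}")
```

A BCR above 1 means discounted benefits exceed discounted costs, but as the text notes, the ratio is only as good as the benefit valuations behind it, and those valuations are exactly where cross-sector comparisons become contested.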
A related but somewhat distinct question is how to think about programming in areas either with little evidence of impact or where the direct link between intervention and benefit is unclear. In these cases, it is harder to estimate the program’s value. One natural response may be to shift funding out of that area and into areas with more demonstrated connections to results. But lack of evidence pointing to success does not necessarily mean a program does not work; it often just means we do not know if it works, either because it is harder to measure or because no one has tried. Within PEPFAR, for instance, there is far more evidence supporting the link between interventions to prevent mother-to-child transmission of HIV and babies born without infection than there is supporting the link between programming for orphans and vulnerable children and improvements in their well-being; outcomes for the latter have proven somewhat harder to measure. Does this call for cutting programs for orphans and vulnerable children and increasing funding for prevention of vertical HIV transmission? There is no simple answer to that question. There are some things people may think are valuable that are hard to measure. The question then becomes: when does lack of evidence suggest an area should be cut, and when does it suggest a need to invest more in learning what works?
How can we move US foreign aid forward on the question of what works?
Ultimately, there must be a balance between experimentation and the pursuit of more tested approaches. In identifying that balance, it will be important that 1) innovative approaches are accompanied by processes to generate learning, and 2) for more tested approaches, evidence is built into project design. To do this well, US development agencies—and especially USAID, the lead development agency responsible for over half of US economic assistance—must be well placed to fulfill two important functions: continuing to build an evidence base, and disseminating lessons from the evidence that exists. Here structural questions become important, especially as agencies are being asked to consider how they could reorganize to improve efficiency. That both of these critical roles are currently played by USAID’s Bureau for Policy Planning and Learning argues strongly for its continuation and strengthening. It would be a marked efficiency loss to compromise the institutional structures and functions that promote learning.
Right now there is substantial uncertainty about the future budget and structure of US foreign assistance. As the United States’ development agencies head into what may be a new era, they should keep their sights on helping us better answer the perpetual but always important question: does foreign assistance work, under what conditions, and how can it work better?
 Vaish, Amrisha, Tobias Grossmann, and Amanda Woodward. Not All Emotions Are Created Equal: The Negativity Bias in Social-Emotional Development. Psychological Bulletin. 2008 May; 134(3): 383–403.
 Lampert, Martijn and Panos Papadongonas. 2016. Towards 2030 Without Poverty. Glocalities: Amsterdam.
 United Nations. 2016. Millennium Development Goals and Beyond 2015.
 Kenny, Charles. 2016. Really, Global Poverty *Is* Falling. Honest. Center for Global Development. Views from the Center Blog.
 United Nations. 2016. Millennium Development Goals and Beyond 2015; World Bank. 2016. World Development Indicators; Freedom House. 2017. Freedom in the World 2017. Freedom House: Washington, DC.
 Clemens, Michael and Hannah Postel. 2017. Work Visas as Aid: Fight Poverty Abroad with Economic Growth at Home. Center for Global Development. Views from the Center Blog.
 The Henry J. Kaiser Family Foundation. 2016. The U.S. Government and Global Health. Kaiser Family Foundation: Menlo Park.
 Glassman, Amanda and Miriam Temin. 2016. Millions Saved: New Cases of Proven Success in Global Health. Center for Global Development: Washington, DC; Levine, Ruth. 2004. Millions Saved: Proven Successes in Global Health. Center for Global Development: Washington, DC; Levine, Ruth. 2007. Case Studies in Global Health: Millions Saved. Center for Global Development: Washington, DC.
 PEPFAR. 2016. PEPFAR Latest Global Results.
 President’s Malaria Initiative. 2016. PMI by the Numbers.
 Ozawa, Sachiko, Samantha Clark, Allison Portnoy, Simrun Grewal, Logan Brenzel, and Damian Walker. 2016. “Return on Investment from Childhood Immunization in Low- and Middle-Income Countries, 2011–2020.” Health Affairs 35(2): 199–207.
 Blas, Javier. “Father of Green Revolution Saved Millions of Lives.” Financial Times, Sept 14, 2009.
 Management Systems International. 2012. United States Assistance to Balochistan Border Areas, Evaluation Report: Annex A – Impact Assessment. Management Systems International: Washington, DC.
 Davis, Mikal, Nick Ingwersen, Harounan Kazianga, Leigh Linden, Arif Mamun, Ali Protik, and Matt Sloan. 2016. Ten-Year Impacts of Burkina Faso’s BRIGHT Program. Mathematica Policy Research: Washington, DC; Bagby, Emilie, Anca Dumitrescu, Cara Orfield, and Matt Sloan. 2016. Long-Term Evaluation of the IMAGINE Project in Niger. Mathematica Policy Research: Washington, DC.
 Raupp, Magda, Bruce Newman, Luis Revés, and Carlos Lauchande. 2015. Impact Evaluation for the USAID/Aprender a Ler Project in Mozambique Year 2 (Midline 2) IE/RCT, Final Report. International Business & Technical Consultants, Inc.: Vienna.
 Vicente, Pedro, Macartan Humphreys, and Daniel M. Sabet. 2015. Networks and Information: An Impact Evaluation of Efforts to Increase Political Participation in Mozambique. Social Impact, Inc.: Arlington.
 Clemens, Michael, Steven Radelet, Rikhil R. Bhavnani, and Samuel Bazzi. 2011. Counting Chickens When They Hatch: Timing and the Effects of Aid on Growth. Working Paper 44. Center for Global Development: Washington, DC.
 Glennie, Jonathan and Andy Sumner. 2014. The $138.5 Billion Question: When Does Foreign Aid Work (and When Doesn’t It)? CGD Policy Paper 049. Center for Global Development: Washington, DC.
 FRONTLINE/World. 2010. Troubled Water. WGBH Educational Foundation: Boston.
 Millennium Challenge Corporation. 2012. MCC’s First Impact Evaluations: Farmer Training Activities in Five Countries. Millennium Challenge Corporation: Washington, DC; McNulty, Judiann, Jennifer Nielsen, Pooja Pandey and Nisha Sharma. 2013. Action Against Malnutrition through Agriculture Nepal Child Survival Project Kailali and Baitadi Districts, Far Western Region Bajura Expansion District – Final Evaluation Report. Helen Keller International: New York.
 OECD. 2011. Busan Partnership for Development Cooperation; Dunning, Casey, Sarah Rose, and Claire McGillem. 2017. Implementing Ownership at USAID and MCC: A US Agency-Level Perspective. Policy Paper 099. Center for Global Development: Washington, DC.
 Glennie and Sumner, 2014.
 The Evaluation Gap Working Group. 2006. When Will We Ever Learn? Improving Lives Through Impact Evaluation. Center for Global Development: Washington, DC.
 Savedoff, William. 2015. The Evaluation Gap Is Closing, But Not Closed. Center for Global Development. Views from the Center Blog.
 International Initiative for Impact Evaluation. Evidence Gap Maps. http://www.3ieimpact.org/en/evidence/gap-maps/
 United States Government Accountability Office. 2017. Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations. GAO-17-316. GAO: Washington, DC; Hageboek, Molly, Micah Frumkin, Jenna Heavenrich, Lala Kasimova, Melvin Mark, and Aníbal Pérez-Liñán. 2016. Evaluation Utilization at USAID. Management Systems International: Arlington; United States Agency for International Development. Evaluation. https://www.usaid.gov/evaluation
 Rose, Sarah and Franck Wiebe. 2015. Focus on Results: MCC’s Model in Practice. Center for Global Development. Washington, DC.
 Dunning, Casey and Ben Leo. 2015. Making USAID Fit for Purpose — A Proposal for a Top-to-Bottom Program Review. In The White House and the World 2016 Briefing Book. Center for Global Development: Washington, DC.
 United States Agency for International Development. 2016. Shared Progress, Shared Future: Agency Financial Report, Fiscal Year 2016. USAID: Washington, DC.
 Clemens, Michael. 2006. Development Goals and the Art of the Possible. Center for Global Development. Views from the Center Blog; Sachs, Jeffrey. 2004. “Seeking a Global Solution.” Nature. August 2004 430: 725-726; Burke, Tom. “This is Neither Scepticism Nor Science - Just Nonsense.” The Guardian, Oct. 22, 2004.
 Rose, Sarah. 2013. Seeking Results from PEPFAR’s Orphans and Vulnerable Children Programs. Center for Global Development. US Development Policy Blog.
 Executive Order 13781 of March 13, 2017, “Comprehensive Plan for Reorganizing the Executive Branch,” Federal Register 82, no. 50 (March 16, 2017): 13959-13960, https://www.gpo.gov/fdsys/pkg/FR-2017-03-16/pdf/FR-2017-03-16.pdf.
Currently, USAID’s Bureau for Policy Planning and Learning leads the advancement of the agency’s evaluation agenda.