Committing to Cost-Effectiveness: USAID's New Effort to Benchmark for Greater Impact

Are United States Agency for International Development (USAID) programs high impact and good value for money? Do they work? Do they generate more results for less cost than if the agency just gave poor people cash? We don’t always know the answers to those questions, but USAID is trying to find out.

Cost-effectiveness is essential for USAID. Budgets are tight, and the agency’s overseers, including members of Congress, are keen to see results and good value. The agency’s own “Journey to Self-Reliance” framework also demands a greater understanding of cost-effectiveness. The initiative’s chief aim is to help countries increasingly plan, implement, and finance their own development results. For cash-strapped low-income countries to decide when and how to transition aid-financed programs onto their own budgets, it’s critical that they understand whether those programs are worthwhile and affordable—that is, whether they work, how they work, and how much they cost.[1] Historically, USAID has invested little in understanding the impact and value for money of its investments. But this may be starting to change.

Building on its evaluation policy approved in 2011, USAID has increasingly begun to rigorously evaluate its programs. Although only a small share of programs has been subject to rigorous evaluation to date, efforts have gradually expanded over time.[2] And recently, USAID began a pioneering new effort to evaluate the cost-effectiveness of a set of its programs by comparing their per-dollar impact with that of a comparably sized cash transfer given directly to beneficiaries. The first results are in—with more to come.

This policy note explains what’s behind this new effort to assess impact and benchmark effectiveness against cash and how the initiative fits within USAID’s broader evaluation landscape. We then offer recommendations for how USAID can advance these efforts and identify important questions we should be asking as the agency moves forward.

Evidence, evaluation, and cost-effectiveness in USAID programming

Part of understanding cost-effectiveness is understanding your program’s results. USAID has renewed its focus on this question in recent years. It came out with a new evaluation policy in 2011, and since then the number and quality of evaluations of USAID programs have risen.[3]

However, impact evaluations—studies that measure changes attributable to a particular intervention—remain rare.[4] It’s not unexpected that impact evaluations would be a minority of evaluations—they’re not always feasible or appropriate for the question being asked.[5] But the fact that these rigorous studies make up such a small fraction of the agency’s evaluations suggests that the agency is overlooking opportunities to understand more about its programs’ impacts and use this information to enhance the effectiveness of future investments.

Moreover, evidence of impact is only part of the story when it comes to cost-effectiveness. Impact evaluations tell you whether a given intervention was better than doing nothing. But that is a low bar and it ignores the fundamental question of opportunity cost. An impact evaluation alone doesn’t tell USAID whether the intervention was the most efficient and effective way to provide assistance for a given objective. To answer questions like “did a project work well enough to justify expenditure on it compared to spending the money on something else?” and “is the project’s impact-per-dollar greater than that of an alternative?” the agency must go a step further.

It’s a step that USAID hasn’t often taken. While impact evaluation is rare, cost-effectiveness analysis is rarer still. USAID’s operational policy suggests staff might want to analyze cost-effectiveness as part of program development and design. But it’s not a requirement and is infrequently done.[6]

USAID certainly isn’t alone in its lack of attention to cost-effectiveness. However, a number of other donor agencies do focus more on value for money. The Millennium Challenge Corporation (MCC), for instance, conducts cost-benefit analysis (comparing an estimate of how much the project will increase local incomes to the cost of implementation) for each of its major activities;[7] PEPFAR requires costing and expenditure analysis of its programmatic interventions; the World Bank theoretically does either cost-benefit or cost-effectiveness analysis for all its projects at appraisal (though in practice, it does so only for a minority of them);[8] and the United Kingdom’s Department for International Development (DFID) has a “value for money” policy in which every program goes through a mixture of cost-benefit analysis and narrative justification. None of these efforts is perfectly implemented, nor do they yield flawless analysis. But they provide an analytical framework for thinking through value for money, which is preferable to making decisions in the absence of cost and impact considerations.

What is cash benchmarking and why is it a good idea?

There are a number of ways to evaluate cost-effectiveness. One promising option is cash benchmarking, or comparing the cost-effectiveness of a proposed program to the cost-effectiveness of a comparably sized cash transfer.[9]

Why cash transfers? With their relatively low overhead and administration costs, cash transfers are just about the lowest-cost way to help a beneficiary, and have been shown to have a positive impact on a range of development outcomes.[10] Cash is also valuable almost everywhere: it has some value in any context and for any outcome, as opposed to narrower interventions that are relevant only in some places and for some outcomes. Cash transfers can also address multiple dimensions of poverty, since individual recipients use the money to address their own priorities for achieving greater well-being, which may vary across individuals, households, or groups.

Traditional aid programs, on the other hand, prescribe a solution to a development challenge and in doing so presume to know how best to address beneficiaries’ needs. Traditional aid programs can also be expensive. Program design, procurement of goods and services, implementation, and management all contribute to the cost. Do all those extra layers add value? Are these kinds of programs worth it? In short, can a given USAID program do more for the poor with a dollar than the poor could do for themselves with that same money?[11] Sometimes the answer to that question will be yes. And sometimes it will be no. Cash benchmarking can help distinguish between the two and direct USAID toward more effective future programming strategies.

The benchmark itself comes from rigorous evaluation of cash transfer programs. Measuring the per-dollar impact of cash grants on household- or individual-level indicators helps set a “baseline” level of results-per-dollar that proposed programs should be expected to surpass to be considered cost-effective (i.e., more cost-effective than cash). To make the comparison, USAID uses impact evaluations of previous programs that had goals like those of the proposed program to estimate its expected results-per-dollar.[12]
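As a stylized illustration, the comparison reduces to simple arithmetic: divide each option’s estimated effect by its cost per household and compare the ratios. The sketch below uses entirely hypothetical effect sizes and costs—it is not drawn from any USAID study—but it shows the form the screen takes:

```python
# Stylized cash-benchmarking arithmetic. All numbers are hypothetical,
# chosen only to illustrate the comparison, not taken from any evaluation.

def impact_per_dollar(effect_size: float, cost_per_household: float) -> float:
    """Results-per-dollar: estimated effect divided by cost per household."""
    return effect_size / cost_per_household

# Hypothetical proposed program: +0.15 s.d. on the target outcome at $250/household
program = impact_per_dollar(effect_size=0.15, cost_per_household=250)

# Hypothetical cash benchmark: +0.10 s.d. on the same outcome at $140/household
cash = impact_per_dollar(effect_size=0.10, cost_per_household=140)

if program >= cash:
    print("Proposed program clears the cash benchmark")
else:
    # With these illustrative numbers, cash does more per dollar
    print("Rethink the design: cash does more per dollar")
```

In practice, of course, the effect estimates come with uncertainty and the outcomes are rarely one-dimensional, so the real exercise involves ranges and judgment rather than a single ratio. But the underlying logic of the screen is exactly this division-and-comparison.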

Cash won’t always be the clear winner.[13] If program staff can make the case that what they propose to do is better—on a per-dollar basis—than the cash benchmark, this gives a justification to continue. But if the proposed program is likely to achieve less development impact than an equivalent amount of cash, this should prompt staff to rethink the design or pursue a different type of assistance—maybe even, in some cases, a cash transfer itself (see recommendation 4 for more on that).[14]

There are limits to the use of cash benchmarking. A cash benchmark is mainly applicable to programs that seek to affect household- or individual-level indicators (e.g., school attendance, savings rates, nutrition status) since these are the kinds of outcomes cash payments have been shown to move.[15] But cash isn’t the solution to everything. Cash transfers have limited ability to improve the provision of public goods that are foundational for development. These include infrastructure like roads and bridges, underprovided community interventions like vector control to reduce malaria, and governance issues like rule of law and regulatory policy, among other things.[16] Aid programs that support public goods are not good candidates for cash benchmarking.[17]

USAID’s pioneering work in cash benchmarking

In 2015, USAID launched the first of what grew to be several new cash benchmarking evaluations, making it the first bilateral donor to develop and seek to operationalize such a benchmark.[18] The first study included simultaneous evaluations of an integrated nutrition and water, sanitation, and hygiene (WASH) program in Rwanda and a program that delivered cash grants directly to Rwandan households.[19] To make a comparison between the two, the cash transfer program was evaluated for its effect on the same outcomes the conventional nutrition and WASH program targeted (even though these are not necessarily the outcomes cash is thought to be most likely to affect). This type of head-to-head experimental comparison is a cutting-edge approach to evaluation, often used in Silicon Valley (where it is called A/B testing) but rarely in foreign assistance. (The cash grants activity was co-funded with USAID.)

The results of the first study are now available.[20] And they offer useful lessons for future programming. After one year of implementation, the USAID-supported nutrition and WASH program in Rwanda did not have an impact on any of the primary outcomes of interest—household dietary diversity, maternal or child anemia, child growth, household consumption, or wealth. The program did have a positive impact on savings, a secondary outcome. An equivalent amount of cash (about $140 per household) allowed households to pay down debt and increase their assets, but—like the traditional program—the sum did not improve the primary outcomes of child health and nutrition. Only a larger cash transfer—around $500 per household, an amount still well within the range of per-beneficiary costs of many traditional nutrition programs[21]—significantly improved the targeted health outcomes (better dietary diversity, improved height-for-age, and lower child mortality), while also positively impacting consumption, savings, assets, and house values.

The fact that this one nutrition and WASH program achieved little impact should not distract or dissuade USAID from the bigger cash benchmarking task. The potential value of the evaluation goes beyond its specific results and has to do with how the findings can feed into decision making about future programming to help the agency improve its effectiveness and efficiency. For instance, programs seeking to advance the outcomes that the nutrition/WASH program did influence (e.g., household savings) may be relatively cost-effective, since they can achieve those outcomes at a lower cost than cash. But programs seeking to improve child nutrition should weigh the null results of the interventions tested in Rwanda against other, more evidence-based interventions, including but not limited to cash transfers.

This is, of course, a single study in a single place. One cannot draw hard-and-fast conclusions about the relative impact and cost-effectiveness of nutrition programming or cash transfer programming from this paired evaluation alone. As USAID adds to the body of evidence on the relative impact and cost-effectiveness of cash transfers and various traditional interventions—and reviews the existing literature and evidence more carefully—a clearer picture will emerge. Further USAID-supported studies of cash transfers that will be used in developing a cash benchmark are underway in Liberia, Malawi, and the Democratic Republic of the Congo.[22]

Recommendations for USAID

USAID has a long way to go before impact evaluation and cash benchmarking become mainstream. Below are four interrelated suggestions for how to advance a focus on impact evaluation and cost-effectiveness.

1. Develop rules of thumb around cash benchmarks  

To make cash benchmarking data accessible to program staff to use in project design and requests for proposal, USAID will need to compile information about the impact-per-dollar of different types of interventions and synthesize this into “rules of thumb” for how these compare to the cost-effectiveness of cash. This will require strong command of the literature on the impacts of both the proposed intervention(s) and cash transfers.

Fortunately, this kind of information is becoming more accessible in the public domain. USAID’s own impact evaluations can inform the process, along with a wealth of external resources. For instance, the International Initiative for Impact Evaluation (3ie) has a database of sector-specific evidence that can serve as an easy starting point;[23] Cochrane is the gold standard for evidence in clinical medicine, and the Campbell Collaboration plays a similar role for social and economic policy; DFID has several evidence reviews;[24] and the World Bank has a library of impact evaluations. USAID should draw upon a broad set of evidence to determine a range of possible effects that the in-kind interventions used in traditional programs might have. For interventions with a weaker evidence base, USAID could commission 3ie or others to conduct a systematic evidence review, either as part of specific project preparation or more generally to advance agency knowledge. (USAID’s current support to 3ie ends this year.) The benchmarks could be developed rapidly by an external group, and adjusted and updated periodically to reflect new evidence.

Cash benchmarks can be helpful in directing USAID away from programming that is unlikely to be cost-effective (i.e., unlikely to be at least as cost-effective as cash). However, surpassing the cash hurdle does not necessarily mean a proposed program will be impactful or cost-effective in practice. A cash benchmark provides a helpful ex ante screen for cost-effectiveness, but it is not a substitute for rigorous evaluation at program completion.

To develop cash benchmarks, USAID will need to make progress on the next recommendation.

2. Assess the state of impact evaluations at USAID—and do more of them

USAID’s pipeline of cash benchmarking evaluations—along with a number of external studies—will soon create a body of high-quality evidence on the impact of cash transfers on various outcomes for different populations. This will only form part of the picture, however. Without a good understanding of the impact-per-dollar of its traditional programs, USAID cannot compare them to the cost-effectiveness of cash.

While USAID has increased its evaluation quantity and quality over the last several years, the agency has completed very few impact evaluations and their quality remains mixed.[25] Those that have been completed rarely include the kind of cost data necessary to understand impact-per-dollar. Where cost analysis has been done, methodologies have varied, limiting their comparability across studies.

USAID should assess its existing stock of impact evaluations to determine where it has evidence to apply to cash scorecards and where there are gaps. It should use this analysis to identify priorities for future impact evaluations. The agency should also encourage the inclusion of costing analysis in impact evaluations and develop a common approach to costing to enable greater comparability across studies.

Of course, impact evaluations aren’t always appropriate or feasible. USAID should make sure it chooses the right evaluation approach for the question(s) being asked; impact evaluations will not always be the right choice. But given the objectives and strategies of most of USAID’s portfolio, impact evaluations should be the norm rather than the exception.

3. Create a new evaluation and evidence unit at USAID/Washington

In a previous piece, we recommended that USAID elevate and consolidate the evidence agenda with the establishment of a new, independent unit we called Evidence, Evaluation, and Learning (EEL). EEL would combine the existing functions of the two main units focused on how the agency generates and uses evidence: the Bureau for Policy, Planning and Learning’s Office of Learning, Evaluation and Research (LER), which supports the implementation of the agency’s evaluation policy, and the Global Development Lab’s Development Innovation Ventures (DIV), which competitively funds innovative ideas to address development challenges and then rigorously tests them to see what works.[26] EEL would elevate the profile of evaluation and evidence; strengthen and streamline LER and DIV’s overlapping roles; expand efforts to create and implement a learning agenda; and support missions and technical bureaus’ efforts to develop and modify program design using rigorous existing and emerging evidence. Cost-effectiveness should be part of this learning agenda. Cash benchmarking efforts would be a natural fit for a unit like EEL, which would be well placed to advance the above recommendations.

Right now, the task of evaluation—identifying opportunities, designing scopes of work, procuring evaluators, overseeing evaluation contracts, and disseminating results—rests largely with mission staff. Shifting to a more centralized approach to evaluation would be more strategic and efficient, especially for developing cash benchmarking. A centrally managed process can help missions identify evaluation priorities that would contribute to developing cash scorecards. It would also more efficiently use staff skills. Good evaluation management requires significant time from staff with solid evaluation expertise. Mission monitoring and evaluation staff don’t necessarily possess the deep evaluation experience necessary—and they frequently have many competing priorities for their time, given extensive demands for monitoring data to fulfill reporting requirements.[27] More of USAID’s technical evaluation expertise resides in Washington. These experts spend much of their time advising, reviewing, and approving work often done by those with less technical expertise in the missions, at some cost to quality and efficiency.

USAID’s proposed organizational restructuring emphasizes strengthening Washington’s support to the field. So far, however, there’s been little emphasis on increasing support for evaluation functions. That should change. A strengthened central role for Washington in evaluation would enable USAID to more strategically identify evaluation opportunities, more effectively and efficiently manage evaluators, and make it easier for mission program staff to know where to turn for evidence—including evidence about cost-effectiveness—all of which could be used to inform how the agency chooses and designs programs.

4. Explore greater use of cash transfer programming

Cash benchmarking is not the same as cash programming. The limited number of cash transfer interventions supported through USAID’s benchmarking effort are being done expressly for the purpose of establishing a cost-effectiveness hurdle. But an important offshoot of these efforts should be increased recognition by USAID and its overseers that providing no-strings-attached grants to poor households may be—in some circumstances—the most efficient and effective way to help them. Where the evidence shows that cash transfers can achieve higher per-dollar impact than traditional programs, USAID should follow the evidence and program accordingly.

Cash transfers are a fairly common form of anti-poverty intervention, but historically, USAID has been reluctant to pursue them as part of its programming. Part of this reluctance comes from uncertainty about how cash transfers fit within the parameters placed on the agency’s funds. It is much more straightforward to demonstrate compliance with sector-specific congressional directives with sector-specific, input-based programming. Because cash transfers may impact multiple development outcomes, they may be viewed as insufficiently targeting the objectives prescribed by Congress. However, cash transfers can show—and have shown—clear effects on sector-specific targets.

Another reason for qualms about cash transfers relates to the agency’s need to demonstrate accountability. The agency may feel more protected knowing it can describe exactly how its funds are spent—especially when facing wariness that cash recipients might use the money to buy temptation goods like alcohol or otherwise put it to unproductive use. However, evidence shows that fears about cash fueling negative behavior are unwarranted. Multiple studies have shown that cash recipients do not spend more on things like drugs or alcohol, nor are they more likely to reduce their work hours.[28] Instead, poor families use cash to do things like buy productive assets, send their children to school, and spend more on health.[29]

There’s also an element of inertia. USAID has mastered awarding grants and cooperative agreements to (largely US-based) implementing partners. So when there’s money to move, it is often easiest to stick to the tried and true.

A substantial (and growing) body of evidence on the effectiveness—and cost-effectiveness—of cash transfers may help overcome resistance. After all, pursuing cost-ineffective programs when better value alternatives are known becomes a deliberate choice not to help as many poor households as possible. And there are other benefits to cash transfer programming, as well. For instance:

  • Cash aligns with USAID policy on country ownership. With a view toward greater sustainability, USAID’s operational policy now urges a more comprehensive shift toward locally-owned development throughout the program cycle. USAID staff are encouraged to “seek out and respond to the priorities and perspectives of local stakeholders,” including beneficiaries.[30] But as much as the agency cares about recipients’ priorities, it’s hard to get a precise assessment—especially since they’re often heterogeneous within a targeted group. Giving beneficiaries the autonomy and flexibility to invest in their own priorities, however, tells USAID a lot about what their true priorities are.

  • Cash is faster to deploy. Due to lengthy design and procurement procedures, it often takes several years for USAID to take a project from conception to launch. Cash transfer programs do require some design steps (e.g., identifying and verifying beneficiaries), but their simpler structure makes them much faster to get up and running.

  • Cash offers particularly high returns in disaster-affected and conflict-affected states. These also happen to be the kinds of places where USAID spends significant sums of money. During economic recovery, returns to capital are higher, and the fluidity of the situation makes local knowledge particularly important for making smart investments.[31]

  • The value of cash lies beyond the immediately measurable. Cash transfers allow poor households to decide for themselves how to invest the money to improve their well-being. The impact of this flexibility is only partly quantifiable. Changes in particular indicators (e.g., savings, school attendance) may be observed and measured, but giving beneficiaries more agency over their own life decisions is less paternalistic and may yield unquantified psychological value.

  • Programs that use digitized cash transfers may boost financial inclusion. Some have suggested that digitizing cash transfers can connect otherwise financially excluded populations with other formal financial services. While there’s limited evidence that digitized cash transfers on their own lead to greater sustained financial inclusion, certain implementation choices may increase the potential for achieving this objective.[32]

All that said, cash transfers are not a silver bullet. Giving cash selectively may sometimes negatively affect those in the same area who didn’t get a grant.[33] Furthermore, the effects of cash transfers vary and in some cases have not been sustained.[34] Of course, the same is true of traditional development programs—which often come at a higher cost. That doesn’t dismiss the concern, but it does suggest cash transfers should remain on the table. As with any promising intervention that comes with caveats, the path forward should focus on ensuring future interventions add to the body of evidence and yield learning from instances in which results are or are not achieved and sustained. Design choices may also mitigate some of these risks.[35]

Recommended oversight questions for Congress and other external stakeholders

USAID’s cash benchmarking efforts deserve support, and it will be important for Congress and other external stakeholders to exercise oversight.

The following are a sample of questions that overseers should ask of USAID, both to signal the importance of this effort and to push the agency to do it well:

  1. While USAID’s policies and operational guidance encourage impact evaluation and cost-effectiveness analysis, they are infrequently done. What have been the chief barriers to greater deployment and use of impact evaluation and cost-effectiveness criteria to date? What would make cash benchmarking more likely to be used?

  2. Evidence-based decision making is important, but there are gaps in USAID’s use of evidence to inform policies and programming. What prevents USAID from more systematically basing program design on the existing body of rigorous evidence? How will USAID encourage staff to use cash benchmarking in program design decisions?

  3. USAID staff must ensure programs respond to a wide range of criteria and requirements (e.g., alignment with administration or agency priorities, alignment with national development strategies, mobilization of external resources, progress toward local implementation targets). Given the variety of other criteria USAID must consider when choosing what to fund, to what extent is cash benchmarking likely to be influential in decision making?

  4. The cash benchmarking exercise is likely to show that some proposed programs will not achieve their desired outcomes as cost-effectively as cash transfers. If proposed programs fall short of the cash benchmark, how will staff be directed to proceed?

  5. The timing of an evaluation is central to the results it detects—results may take time to become apparent and/or they may not persist for long. Understanding timing is critical for interpreting results. How does uncertainty about the medium- or long-term persistence of impact factor into cash benchmarking? What kind of assumptions about the timeframe of programs’ benefit streams are required?

  6. While one of the benefits of cash transfers is their relatively low administration costs, it is still important to ensure funds are spent according to plan. For the cash transfer component of cash benchmarking, what safeguards are in place to ensure grants reach the intended recipients?

  7. The design of cash transfer programs—payment mechanisms, size of transfer, duration of transfer, accompanying interventions, etc.—can have implications for their results. How will USAID seek to design cash transfer programs in a way that strengthens their desired impacts and minimizes negative impacts on non-recipients?

[1] Rose, Sarah, Erin Collinson and Jared Kalow. “Working Itself Out of a Job: USAID and Smart Strategic Transitions.” CGD Policy Paper. Washington, DC: Center for Global Development. 2017.

[2] An independent study of USAID evaluations conducted between 2011 and 2014 (the years just after the release of the new evaluation policy) found just eight impact evaluations out of its sample of 609 (1 percent). (Hageboeck, Molly, Micah Frumkin, and Stephanie Monschein. “Meta-Evaluation of Quality and Coverage of USAID Evaluations 2009-2012.” Management Systems International, August 2013.) The proportion of impact evaluations has risen since then. A Government Accountability Office (GAO) report that looked at evaluations completed in FY2015 found that 9 percent were impact evaluations. (US Government Accountability Office. “Foreign Assistance: Agencies Can Improve the Quality and Dissemination of Program Evaluations.” US Government Accountability Office, March 2017.)

[3] Hageboeck et al., 2013; US Agency for International Development. “Evaluation.”

[4] Hageboeck et al., 2013; US Government Accountability Office, 2017. 

[5] Karlan, Dean, and Mary Kay Gugerty. “Ten Reasons Not to Measure Impact—and What to Do Instead.” Stanford Social Innovation Review, Summer 2018.

[6] “ADS Chapter 201 Program Cycle Operational Policy.” US Agency for International Development, August 2018.

[7] Rose, Sarah, and Franck Wiebe. “Focus on Results: MCC’s Model in Practice.” Center for Global Development, January 2015.

[8] “Cost-Benefit Analysis in World Bank Projects.” International Bank for Reconstruction and Development/World Bank, 2010.

[9] “Cash Transfers: The New Benchmark for Foreign Aid?” Center for Global Development, May 2014.

[10] Blattman, Christopher, and Paul Niehaus. “Show Them the Money: Why Giving Cash Helps Alleviate Poverty.” Foreign Affairs 93, no. 3 (2014): 117-126; GiveDirectly. “Research on Cash Transfers: Overview of the Evidence,” n.d.; Hagen-Zanker, Jessica, Francesca Bastagli, Luke Harman, Valentina Barca, Georgina Sturge, and Tanja Schmidt. “Understanding the Impact of Cash Transfers: The Evidence.” Overseas Development Institute, July 2016.

[11] Blattman and Niehaus, 2014; Fuller, Jacquelline. “Want to Help People? Just Give Them Money.” Harvard Business Review, March 2013; Kestenbaum, David, and Jacob Goldstein. “Cash, Cows and The Rise of Nerd Philanthropy.” National Public Radio (NPR), August 2013.

[12] When possible, it’s helpful to simultaneously pair two impact evaluations—one that looks at the outcomes of a traditional aid program, the other that looks at the same outcomes of an equivalently sized cash transfer to a similar population—and compare the results side by side. But separately conducted impact evaluations can also contribute useful information.

[13] For example, “Cash, Food, or Vouchers? Evidence from a Four-Country Experimental Study.” IFPRI-WFP event, International Food Policy Research Institute, October 2013; Handa, Sudhanshu. “Raising Primary School Enrolment in Developing Countries: The Relative Importance of Supply and Demand.” Journal of Development Economics, no. 69 (2002): 103–28; Gentilini, Ugo. “The Revival of the ‘Cash versus Food’ Debate: New Evidence for an Old Quandary?” Policy Research Working Paper. World Bank Group, February 2016.

[14] Though cash benchmarking is a practical tool for assessing the cost-effectiveness of proposed aid programs, it is more complex to implement than this simple description suggests. There are a number of questions about how to structure the traditional program, the cash intervention, and the corresponding evaluations. For instance, for the traditional programs, is it better to evaluate interventions separately or as a package of related interventions assumed to work together? Since budgets tend to change, staff don’t always know ex ante what programs will cost. How much cost uncertainty is tolerable? Do different kinds of cash transfers—lump sum vs. incremental—yield different impacts? What is the right window of time to expect outcomes to change; do evaluations conducted shortly after program closure mislead about longer-term impact? Given the limited applicability of results across contexts, how many studies do there need to be for one type of program to have a suitable benchmark? (Aker, Jenny. “Cash or Coupons? Testing the Impacts of Cash versus Vouchers in the Democratic Republic of Congo.” Center for Global Development, March 2013.)

[15] Hagen-Zanker et al., 2016.

[16] Indeed, a deliberative polling study in Tanzania run by CGD researchers Justin Sandefur and Nancy Birdsall found that two-thirds of poor Tanzanians would rather have gas revenues spent on public goods (security, for example) and other government-provided services than distributed directly as cash. (Sandefur, Justin, Nancy Birdsall, and Mujobu Moyo. “The Political Paradox of Cash Transfers.” Center for Global Development, September 2015. /blog/political-paradox-cash-transfers)

[17] Blattman and Niehaus, 2014; Aker, Jenny. “What Cash Payments Can’t Do: Lessons from #BringBackOurGirls.” Center for Global Development, May 2014. /blog/what-cash-payments-cant-do-lessons-bringbackourgirls

[18] The World Food Programme has undertaken studies comparing cash, food rations, and food vouchers to inform its programming decisions (Gentilini, 2016; Hidrobo, Melissa, John Hoddinott, Amber Peterman, Amy Margolies, and Vanessa Moreira. “Cash, food, or vouchers? Evidence from a randomized experiment in northern Ecuador.” Journal of Development Economics 107 (2014): 144–156.)

[19] The nutrition/WASH program was implemented by Catholic Relief Services; the household grants program was co-financed by USAID and was implemented by GiveDirectly.

[20] Zeitlin, Andrew and Craig McIntosh. “Benchmarking a Child Nutrition Program against Cash: Experimental Evidence from Rwanda.” Innovations for Poverty Action, June 2018.

[21] Menon, Purnima, Christine M. McDonald, and Suman Chakrabarti. "Estimating the cost of delivering direct nutrition interventions at scale: national and subnational level insights from India." Maternal & Child Nutrition 12 (2016): 169-185.; McMillan, Della, and Sidibe Sidikiba. “Final Evaluation Report for the Tubaramure PM2A Program.” US Agency for International Development, September 8, 2014.

[22] The studies are part of a memorandum of understanding developed between USAID’s Global Development Lab, GiveDirectly, and Good Ventures. None is designed to employ the same innovative, highly rigorous, head-to-head comparison the Rwanda evaluation used, but they will still yield valuable information for constructing a cash benchmark. In Liberia and Malawi, USAID will evaluate the impact of standalone cash transfer programs on a wide range of development outcomes. In the Democratic Republic of the Congo, the cash transfer evaluation will be paired with a separate evaluation of a workforce program and will look at workforce-related outcomes in addition to many of the development outcomes evaluated in the other studies.

[23] International Initiative for Impact Evaluation (3ie). Inform Policy.

[24] Department for International Development. Rapid evidence assessments. July 2015; Department for International Development. DFID Evidence Papers. July 2014.

[25] US Government Accountability Office, 2017; Goldberg Raifman, Julia, Felix Lam, Janeen Madan Keller, Alexander Radunsky, and William Savedoff. “Evaluating Evaluations: Assessing the Quality of Aid Agency Evaluations in Global Health.” Center for Global Development, August 2017. /publication/evaluating-evaluations-assessing-quality-aid-agency-evaluations-global-health

[26] Rose, Sarah, and Amanda Glassman. “Advancing the Evidence Agenda at USAID.” Center for Global Development, September 2017. /publication/advancing-evidence-agenda-usaid

[27] Rose and Glassman, 2017.

[28] Blattman and Niehaus, 2014; Evans, David K., and Anna Popova. “Cash transfers and temptation goods: a review of global evidence.” Policy Research Working Paper 6886; Impact Evaluation series, 127. Washington, DC: World Bank Group, 2014; Banerjee, Abhijit V., Rema Hanna, Gabriel E. Kreindler, and Benjamin A. Olken. “Debunking the stereotype of the lazy welfare recipient: Evidence from cash transfer programs.” The World Bank Research Observer 32, no. 2 (2017): 155–184; “Myth-busting? Confronting Six Common Perceptions about Unconditional Cash Transfers as a Poverty Reduction Strategy in Africa.” Innocenti Working Paper 2017-11. UNICEF Office of Research, 2017; Peterman, Amber, and Silvio Daidone. “Evidence over Ideology: Giving Unconditional Cash in Africa.” UNICEF, August 2017.

[29] Hagen-Zanker et al., 2016; GiveDirectly, n.d.; Blattman and Niehaus, 2014.

[30] US Agency for International Development, 2018 (p. 12).

[31] Blattman and Niehaus, 2014.

[32] Soursourian, Matthew. “Can Emergency Cash Transfers Lead to Financial Inclusion?” Consultative Group to Assist the Poor (CGAP), June 2017.

[33] Haushofer, Johannes, James Reisinger, and Jeremy Shapiro. "Your gain is my pain: Negative psychological externalities of cash transfers." 2016.; Baird, Sarah, Jacobus de Hoop, and Berk Ozler. “Income Shocks and Adolescent Mental Health.” Journal of Human Resources 48, no. 2 (2013): 370–403.; Filmer, Deon, Jed Friedman, Eeshani Kandpal, and Junko Onishi. "General equilibrium effects of targeted cash transfers: nutrition impacts on non-beneficiary children." World Bank Group, 2018.

[34] Examples of studies that find unsustained effects several years post-grant: Blattman, Chris, Nathan Fiala, and Sebastian Martinez. “The Long Term Impacts of Grants on Poverty: 9-Year Evidence From Uganda’s Youth Opportunities Program.” Working Paper. National Bureau of Economic Research, September 2018; Sandefur, Justin. “Cash Transfers Cure Poverty. Side-Effects Vary. Symptoms May Return When Treatment Stops.” Center for Global Development, April 2018. /blog/cash-transfers-cure-poverty-side-effects-vary-symptoms-may-return-when-treatment-stops; Haushofer, Johannes, and Jeremy Shapiro. “The long-term impact of unconditional cash transfers: Experimental evidence from Kenya.” Busara Center for Behavioral Economics, Nairobi, Kenya (2018); Fafchamps, Marcel, David McKenzie, Simon Quinn, and Christopher Woodruff. “Microenterprise growth and the flypaper effect: Evidence from a randomized experiment in Ghana.” Journal of Development Economics 106 (2014): 211–226; Brudevold-Newman, Andrew, Maddalena Honorati, Pamela Jakiela, and Owen Ozier. “A firm of one’s own: experimental evidence on credit constraints and occupational choice.” The World Bank, 2017; Blattman, Christopher, Nathan Fiala, and Sebastian Martinez. “The economic and social returns to cash transfers: evidence from a Ugandan aid program.” Columbia University, Departments of Political Science and International & Public Affairs (2013). Examples of studies that found sustained effects several years post-grant: Blattman, Fiala, and Martinez, 2013; Northern Uganda Social Action Fund – Youth Opportunities Program. Innovations for Poverty Action; De Mel, Suresh, David McKenzie, and Christopher Woodruff. “One-time transfers of cash or capital have long-lasting effects on microenterprises in Sri Lanka.” Science 335, no. 6071 (2012): 962–966.

[35] Hagen-Zanker et al., 2016.
