CGD’s Amanda Glassman and Julia Kaufman introduce this blog post, written by leaders of prominent international non-governmental organizations (INGOs) in the development space. The piece builds on the findings and recommendations of a recent CGD working group.
CGD’s recent working group on expanding the policy value and use of impact evaluation and evidence calls for global development funders to boost their impact by reinvigorating their evidence agendas. The working group’s final report offers recommendations on better evaluation funding and practice to unlock more results from public and aid dollars. In a time of tightening budgets and unprecedented global crises, limited funds need to stretch further—and deliver development impact at greater rates—than ever before. But the effects—and costs—of most international development programs remain unknown, leaving aid agencies and government officials in the dark about how to better design programs, allocate resources, and ultimately improve lives. You can read the working group’s final report for a deep dive into examples and insights on how rigorous evaluation and complementary cost analysis have evolved to be less time intensive, more affordable, and larger scale. In this blog post, INGO leaders reflect on why and how aid agencies and NGOs, in partnership with governments, can harness these developments and use cost analysis to make better decisions, as opposed to solely counting the beans for the bean counter’s sake.
In recent months, leading global aid donors such as the US Agency for International Development (USAID) and the UK Foreign, Commonwealth and Development Office (FCDO) announced their intention to reduce bureaucratic “sludge” and simplify the obstacle course of their grantmaking—and grant management—processes, with the promise of making the aid bureaucracy more efficient and the programs they fund more effective and inclusive. But if donor agencies are not careful, they risk throwing out the cost-effectiveness baby with the bureaucratic bathwater.
As senior INGO leaders who are committed to standardizing and learning from cost analyses, we believe that prioritizing cost-effectiveness is critical for delivering education, health, and economic well-being to the people we work with. Development and humanitarian donors seemingly share this belief: they commission (often expensive) evaluations and impose rigid cost controls on projects. But in the absence of sufficient data on what it actually costs to deliver outputs and outcomes across different contexts, these shared aspirations have not translated into meaningful improvements in cost-effectiveness.
The biggest gains in cost-effectiveness will not come from performance metrics and financial control, but from learning and making decisions based on which approaches allow us to create as much impact per dollar as possible. To take an example of learning-focused analysis, CARE conducted a cost-efficiency analysis of its portfolio of conditional cash for protection programs in Jordan. Program staff were able to identify several ways to increase efficiency by changing transfer frequency and adding other wrap-around activities, allowing them to achieve more impact per dollar. Another analysis showed that delivering early childhood development (ECD) materials through different delivery channels in Jordan and Lebanon caused the cost per home visit to vary from $8 to $56, a seven-fold difference in the achievable impact per dollar. This kind of transformational increase in cost-effectiveness is possible. But our experience shows that it doesn’t come from squeezing overhead rates or travel caps. It comes from better program design, built on a body of rigorous, learning-focused cost analyses.
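To make the arithmetic behind such comparisons concrete, here is a minimal sketch (in Python, with hypothetical placeholder figures rather than the actual CARE or IRC program data) of how a cost-per-output comparison across delivery channels translates into achievable outputs per dollar:

```python
# Illustrative cost-efficiency (cost per output) comparison across delivery channels.
# All figures are hypothetical placeholders, not actual program data.

channels = {
    # channel name: (total delivery cost in USD, home visits completed)
    "lower-cost channel":  (40_000, 5_000),   # hypothetical: $8 per visit
    "higher-cost channel": (280_000, 5_000),  # hypothetical: $56 per visit
}

for name, (total_cost, visits) in channels.items():
    cost_per_visit = total_cost / visits
    visits_per_dollar = visits / total_cost
    print(f"{name}: ${cost_per_visit:.2f} per visit, "
          f"{visits_per_dollar:.3f} visits per dollar")

# For a fixed budget, outputs scale inversely with unit cost:
# $56 / $8 = 7, so the cheaper channel delivers seven times as many
# visits (and, all else equal, seven times the impact) per dollar spent.
```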
Within many donor agencies, though, issues of cost-effectiveness and value-for-money are perceived as the turf of procurement or finance departments. Unfortunately, financial reporting does not provide information that is disaggregated enough for reliable cost analysis. As far back as 2014, an RTI report assessing the cost-effectiveness of early grade reading (EGR) programs noted that “if USAID is keen on examining the cost of developing evidence-based EGR programs, it must ensure that the data needed to examine the costs of these programs is readily available... [otherwise] USAID and its implementing partners will face in the future the very same set of problems we both faced in this activity.” FCDO’s “Programme Operating Framework” explains how Senior Responsible Owners should assess value-for-money in a holistic way—merging spending data with information on outputs, outcomes, and equity. But in practice, the most common assessment tool is the “Non-Project Attributable Costs budget template,” which was issued by the Commercial department and mostly captures the unit costs of inputs. Reducing this obstacle course of faux-efficiency assessments is clearly a win for the sector, but we should not lose sight of the greater promise of actual cost-effectiveness.
Donor efforts to embed “value-for-money” as a performance metric are well-intentioned, but they have stumbled on the fact that we don’t actually have enough good data on the costs per output or outcome (let alone on variation across contexts). This, in our view, is precisely why so much bureaucracy has been built up to control the price of inputs—at least input prices can be easily measured. Absent more meaningful data, attempts to set cost benchmarks for humanitarian and development activities feel a bit like telling people to count their calories without knowing the appropriate daily intake.
The promise of cost-effectiveness will not be realized by benchmarks set in dollar terms (not for another decade, anyway), but through rigorous comparative learning and high-quality program design. Within our organizations, we are investing in the tools to rigorously measure the cost-efficiency and cost-effectiveness of our programs, and then pooling results to understand the typical costs of delivering quality services and why and how much they vary across contexts. Results can be used to identify the program design decisions that will improve value-for-money. In a study of 28 basic needs cash programs, the International Rescue Committee (IRC) found that programs reaching fewer than 1,000 households incurred 2.5 times as much in delivery costs for every dollar transferred as those reaching more than 1,000 households. Program scale accounted for the greatest variation in program cost-efficiency, more than targeting method, delivery method, or any other feature examined. This finding led to an “efficiency benchmark”—set in terms of design best practices, not global cost thresholds—which recommends that IRC’s basic needs cash programs reach a minimum of 1,000 households.
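As a rough illustration of the metric behind this benchmark, the sketch below (Python, with hypothetical figures rather than IRC’s actual study data) computes delivery cost per dollar transferred and compares programs above and below the 1,000-household threshold:

```python
# Illustrative "delivery cost per dollar transferred" calculation for cash programs.
# All figures are hypothetical placeholders, not IRC's study data.

programs = [
    # (households reached, delivery cost in USD, value transferred in USD)
    (400,    66_000,   240_000),   # hypothetical small program
    (800,   132_000,   480_000),   # hypothetical small program
    (2_500, 180_000, 1_500_000),   # hypothetical large program
    (5_000, 300_000, 3_000_000),   # hypothetical large program
]

def cost_per_dollar_transferred(delivery_cost, value_transferred):
    """Delivery cost incurred for every dollar that reaches recipients."""
    return delivery_cost / value_transferred

small = [cost_per_dollar_transferred(c, v) for h, c, v in programs if h < 1_000]
large = [cost_per_dollar_transferred(c, v) for h, c, v in programs if h >= 1_000]

ratio = (sum(small) / len(small)) / (sum(large) / len(large))
print(f"Smaller programs spend {ratio:.1f}x as much to deliver each dollar")
```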
Our studies show that it is not only our program designs that can improve cost-effectiveness; the ways in which donors provide funding can also make a big difference. Studies in Somalia and Colombia comparing the cost-efficiency of programs delivering the same results with short-term (i.e., 12 months or less) versus longer-term (i.e., 24 months or more) funding showed that multi-year funding increased efficiency by 20 to 40 percent. Nor is this kind of learning restricted to INGOs: analyses we have done in partnership with local CSOs show that the same cost-efficiency patterns apply. In Nigeria, Save the Children led a study comparing four latrine construction projects and found that program scale, rather than NGO type, was the greatest predictor of cost-efficiency. There is a clear strategic lesson for the sector: national organizations should be funded at sufficient scale to maximize their impact per dollar in the same way as INGOs, which is not currently the case in many places.
Donors can play a critical role in advancing this kind of learning, but to date, this has typically taken a backseat compared to the emphasis on value-for-money as a performance monitoring issue. FCDO and USAID both encourage researchers to measure cost-effectiveness as part of impact evaluations they fund. However, a 2019 study found that only 14 percent of studies in a sample from the International Initiative for Impact Evaluation (3ie) included any kind of value-for-money analysis. In addition to more cost analysis as part of impact evaluations, further investment in other means of gathering rigorous cost evidence is needed. The recently revised ADS 201, which requires cost analysis as part of all USAID-funded impact evaluations, is an excellent step towards building up the body of cost evidence available to decision-makers. The USAID Center for Education’s Cost Measurement Initiative is unique in harnessing compliance processes on routine projects to produce quality cost evidence for learning at a potentially greater scale. However, such efforts are still nascent, and it remains to be seen how much traction they will gain within donor agencies.
To prevent “aid efficiency” from becoming bureaucratically homeless, donors should embed it in a new venue: as part of sectoral research and learning agendas, supported by consistent tools in our monitoring and evaluation (M&E) toolkits, with the power to drive resources to the places where they make the most difference. Rather than treating this as a reporting issue, donors should invest in their sectoral Monitoring, Evaluation, Accountability and Learning (MEAL) experts to develop strategic pools of cost evidence, perhaps coordinated by elevated Chief Economists. By combining cost-effectiveness research, cost-efficiency analysis, and insights from M&E, expert teams will be able to derive best practices for what “value-for-money” looks like in practice. This is a big task, one which our own agencies have been investing in for many years, and it’s much harder than simply capping indirect costs. But, as our studies have shown, the returns can be dramatic, and that means greater sustainability for our programs and more impact for people in need—at a moment when the stakes couldn’t be higher.
Co-Authors:
- Caitlin Tulloch, Dioptra Consortium Lead and Director for Best Use of Resources at International Rescue Committee
- David Leege, Senior Director, Impact, Learning, Knowledge and Accountability at CARE
- Volker Hüls, Head of Division for Effectiveness, Knowledge and Learning at Danish Refugee Council
- Jeannie Annan, Chief Research & Innovation Officer at International Rescue Committee
- Josh DeWald, Vice President, Program Performance and Quality at Mercy Corps
- Michael O’Donnell, Director of Evidence & Learning at Save the Children International
Disclaimer
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.