Reinvigorating Impact Evaluation for Global Development

There is a cost in lives and livelihoods when policymakers fly blind and make decisions with insufficient data and evidence; the staggering death toll and economic losses from the COVID-19 pandemic make the consequences of this ignorance all too clear. High-quality, timely, and context-specific evidence was and continues to be necessary for both the effectiveness and the political credibility of the response. Despite significant advances in the data and analytical methodologies at our disposal, the pandemic highlighted an unfinished agenda in the generation and use of data and evidence for policy. Across sectors, decision makers within governments, aid agencies, multilateral organizations, and NGOs have not yet fully harnessed the value of evidence for better public policies.

To address this missed opportunity, we convened a virtual working group from 2020 to 2022, bringing together 40 policymakers and experts from 20 countries, with collective experience at over 100 organizations, to review progress, identify challenges, and propose recommendations to enhance the policy value and use of data and evidence for global development, with a focus on impact evaluation. The group emphasized that impact evaluation is far from the only evidence tool available to policymakers seeking to understand what is working and why. Even so, impact evaluation remains underutilized given its potential to improve and save lives through learning and better decisions about policies and programs.

Today, we’re pleased to share the working group’s final report, which includes dozens of resources and examples of good practice that seek to ensure we are benefiting from—and not rehashing—well-developed contributions.

The final report recaps over a decade of progress and shows how far the field has come in addressing persistent critiques about the scale, generalizability, and policy utility of impact evaluation. For instance, data digitization is expanding the frontier of evidence generation and use, enabling faster, lower-cost evaluations that can inform consequential policy decisions. During COVID-19, adaptive impact evaluation informed the rapid design and improvement of a “low-tech” remote education intervention in Botswana and, subsequently, other countries.

But despite these developments and an increase in the overall number of impact evaluations, only a small share of development policies and programs are rigorously evaluated. For instance, less than 5 percent of World Bank projects have been formally evaluated for development impact since 2010. In development spending and lending, multiple studies find that less than 10 percent of evaluations are impact evaluations, the type of evaluation that attributes net impact to a project or program, despite bold public statements about impact.

Further, while the global community of researchers and organizations conducting these evaluations and related evidence activities continues to grow, too often developing country researchers are left out. Funding to sustain long-term research-to-policy partnerships, through which locally linked researchers can build trust-based relationships with policymakers and identify and develop opportunities for policy-relevant evaluations, remains limited.

To turn the page, the working group underscores the need for more and better investments to deliver on the promise of impact evaluation and bolster the broader evidence ecosystem. Systematically dedicating even a small portion of resources to evaluation- and data-related functions would help ensure that development resources are spent effectively and efficiently. As a global recession looms and government budgets come under immense pressure, doing so represents our best shot at enhancing value for money. The potential rate of return is immense: a $1 million impact evaluation could save hundreds of millions in ineffective spending and help save and improve lives.

Specifically, the working group proposes five ways to improve the funding and practice of impact evaluation and the broader evidence ecosystem:

  1. Design evaluations that start from the policy question and decision space available. Evaluations should be designed to support decision makers who are interested in using more evidence (in addition to expanding the global knowledge base). And to help make findings more relevant for real-world decisions, impact evaluations should routinely integrate a range of complementary analyses.

  2. Harness technology for timely, lower-cost evidence. Greater investments are needed in capacity strengthening to harness new data sources and analytical methods for impact evaluation, alongside resources to improve the quality, regularity, and granularity of administrative data.

  3. Advance locally grounded evidence-to-policy partnerships. The development community should increasingly focus resources on context-specific evidence-to-policy initiatives and researchers with deep contextual knowledge, enabling them to identify policy-relevant research questions and advance uptake of findings.

  4. Enact new incentives and structures to strengthen evidence use. More robust systems and incentives are needed to institutionalize the generation and use of rigorous evidence to determine whether projects have their intended impacts and whether they should be adjusted or scaled up or down.

  5. Invest in evidence leaders and communities. Development funders can shape meaningful evidence-to-policy communities and create opportunities for people to learn lasting skills through online teaching resources, civil service institutes, government training programs, and other field-building fora.

To illustrate the application of this agenda to specific development funders, the working group developed detailed recommendations for three key audiences with strong existing foundations for evaluation and evidence use to complement and leverage country government funding: philanthropies, USAID, and the World Bank.

Stay tuned for more updates in the coming weeks and months!


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.