US development efforts have long been at the forefront of evidence-based policymaking by tracking and measuring the results of programs on wellbeing and prosperity around the world. Still, many opportunities remain to translate evidence more consistently and systematically into policy and practice.
Last month, the Center for Global Development (CGD), the White House Office of Science and Technology Policy, and the Office of Management and Budget co-hosted an Evidence Forum as part of the White House Year of Evidence for Action, featuring USAID Administrator Samantha Power. The event celebrated progress in evidence-based approaches and explored how US development institutions and country partners can further expand the frontier of evidence generation and use to strengthen policies and programs and advance better outcomes.
Administrator Power opened the event with keynote remarks, followed by a panel discussion among Alicia Phillips Mandaville from the Millennium Challenge Corporation (MCC), Dafna Rand from the State Department, Michele Sumilas from USAID, and Eliya Zulu from the African Institute for Development Policy.
The discussion drew on Breakthrough to Policy Use: Reinvigorating Impact Evaluation for Global Development, the final report of a recent CGD working group on how to expand the policy value and use of rigorous evidence and impact evaluation. You can read the full report and accompanying materials here, including an interactive (or PDF) timeline of progress in the evidence ecosystem over the last two decades that features numerous US-led and -supported milestones.
Key insights and future directions
While evidence-based policymaking is a core principle across US development efforts, agencies are at different stages in their evidence journeys. For instance, MCC conducts and publishes independent evaluations for every project. USAID pioneered the development of an agency-wide evaluation policy in 2011 and has conducted over 130 impact assessments to date. And the State Department—where impact evaluation and related evidence activities are more nascent—has now mobilized 150 officers to help implement its agency learning agenda, unveiled its Enterprise Data Strategy, and staffed up its Center for Analytics. Despite these differences, speakers across the board stressed the need to move beyond evidence for compliance and accountability toward evidence for learning and use. Below we share three takeaways from the event.
1. Local immersion matters for evidence use and policy decisions
In previewing USAID’s vision for better evidence generation and use, Administrator Power highlighted the need for evidence to be informed by the needs and the desires of the communities USAID serves, accessible to those who are in the best position to analyze and use such information in their work, and transparent to policymakers in partner countries so they can benefit from emerging lessons and insights.
Zulu underscored the importance of involving locally immersed researchers and evidence organizations in this agenda, noting their unique ability to inform and improve policy decisions.
Mandaville pointed to MCC’s efforts in Morocco to help set up the Morocco Employment Lab—a partnership formed in 2020 between a Moroccan think tank, MIT, and Harvard to build research capacities and design longer-term projects focused on aligning labor market trends with job training programs. As a standalone complement to the compact’s embedded evaluation mechanism, this partnership is designed to be carried forward by the government and sustained beyond MCC’s involvement. At USAID, Sumilas highlighted the Partnership for Enhanced Engagement and Research, or PEER program, which supports researchers in partner countries with awards of up to $300,000.
Yet partnerships of this sort—and investments in partner country-based researchers overall—are still rare. Only 25 percent of social science impact evaluations focused on low- and middle-income countries include authors based in the countries of study. As part of Administrator Power’s focus on locally-led development (and commitment to ensure 50 percent of USAID awards are designed, implemented, monitored, and evaluated with input from local communities), Sumilas shared that USAID is exploring new mechanisms for missions to support local researchers, potentially to be launched later this year.
2. Institutional leadership and structures are key to mainstreaming evidence use
Systems and incentives play a critical role in institutionalizing the generation of evidence, supporting learning from it, and enabling its use in resource allocation and program design.
Both Administrator Power and Sumilas highlighted the appointment of USAID’s new chief economist, Dean Karlan. In this leadership role, Karlan will work to expand evidence use across the agency, including by creating more opportunities for country offices and those at the “last mile” to access timely data and evidence synthesis to inform real-time decisions, and by expanding the use of cash benchmarking.
On incentive structures at MCC, Mandaville discussed enhancing early use of evidence and data as part of MCC’s investment decision-making process to better understand potential impacts, risks, and trade-offs. And Rand explained that the State Department is a “learning institution” open to new insights, ways of working, and adaptation when programs do not lead to their intended results.
3. Strategic prioritization can help harness greater benefits from evidence
Amid ever-growing information gaps and finite resources to deploy towards evidence activities, agencies must strategically decide how to prioritize evaluation efforts. CGD’s working group report suggests that policymakers think of evidence generation, and impact evaluation specifically, as a development intervention in and of itself; there is a cost to developing new evidence, and there is a benefit in the form of increased or speedier impact on outcomes or cost savings. The potential rate of return is immense: lower-cost impact evaluations, including those leveraging existing management and monitoring data, could save millions in ineffective spending.
Consistent with the requirements of the Evidence Act, agencies have launched new learning agendas that lay out priority questions for the next several years, questions that can be addressed in part through researcher-policymaker collaborations. The development and public release of agency learning agendas are an important step toward prioritization.
CGD’s report also suggests that funders use a “value-of-information” approach to proactively consider and prioritize evaluation and evidence investments with the greatest potential “returns” in the form of improved outcomes, such as programs that receive a large share of resources and could easily be evaluated but have not yet been. Sumilas also recognized that metrics are at times overly focused on monitoring progress against the objectives of specific programs, as opposed to taking a bigger picture view of knock-on effects and benefits that cut across earmarks and ultimately help move global development forward.
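To make the logic of a “value-of-information” approach concrete, the sketch below ranks hypothetical candidate evaluations by their expected net value: the budget that could be redirected if an evaluation reveals a program is ineffective, weighted by the chance of that finding, minus the evaluation’s cost. All program names, budgets, probabilities, and costs here are invented for illustration; this is not CGD’s or any agency’s actual methodology.

```python
# Illustrative "value-of-information" prioritization with made-up numbers.
# Expected net value = P(program found ineffective) * budget at stake - evaluation cost.

def expected_net_value(program_budget, prob_ineffective, evaluation_cost):
    """Expected savings from running an evaluation: if it reveals the program
    is ineffective, its budget can be redirected to more effective uses."""
    return prob_ineffective * program_budget - evaluation_cost

# Hypothetical candidates (all figures invented for illustration).
candidates = [
    {"name": "Job training",   "budget": 50_000_000, "p_ineffective": 0.3, "cost": 500_000},
    {"name": "Cash transfer",  "budget": 20_000_000, "p_ineffective": 0.1, "cost": 400_000},
    {"name": "School feeding", "budget": 80_000_000, "p_ineffective": 0.2, "cost": 700_000},
]

# Rank candidates from highest to lowest expected net value.
ranked = sorted(
    candidates,
    key=lambda c: expected_net_value(c["budget"], c["p_ineffective"], c["cost"]),
    reverse=True,
)

for c in ranked:
    value = expected_net_value(c["budget"], c["p_ineffective"], c["cost"])
    print(f"{c['name']}: expected net value ${value:,.0f}")
```

Even with rough inputs, the exercise surfaces the report’s point: large, easily evaluable programs with uncertain effectiveness tend to offer the greatest expected returns from evaluation.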
You can watch the full event here and explore the other evidence forums here. Stay tuned for more events, publications, and commentary from CGD on locally-led development, better evidence and evaluation funding and practice for development policy, and researcher-policymaker partnerships.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
Image credit for social media/web: Adobe Stock