
From Commitment to Action: Structuring USAID to Drive Evidence-Based Policies and Programs

Changing organizational culture to embrace evidence and its use in decision-making is a long, hard slog. Over the last decade, USAID has made progress in that journey and—in many ways—has outperformed many federal agencies on fulfilling certain evidence requirements. But room for improvement remains.

Those who have been on the inside of efforts to elevate evidence within the agency have consistently pointed out that an important factor in driving this agenda forward is a high-level champion who provides leadership, guidance, and support. In these areas, Administrator Power has taken important steps. Even before she took office, she highlighted the need for measurable outcomes, and she has regularly asserted her commitment that USAID’s programming will be guided by evidence and that she will work to increase the rigor of USAID’s evaluations. Her efforts are both informed and reinforced by the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) and the Biden-Harris administration’s overarching commitment to evidence-informed policymaking. And this energized recommitment comes at a vital time. The pandemic has magnified needs around the world, and USAID is managing additional funding to respond. The imperative to produce and use more rigorous and systematic evidence in real time is now stronger than ever.

Part of improving USAID’s evidence orientation involves thinking about how to use structures, staffing, and incentives to integrate the use and generation of evidence throughout the agency’s work. Under the previous administration, the agency had proposed a reconfiguration of its existing Bureau for Policy, Planning and Learning (PPL) into a new Bureau for Policy, Resources and Performance (PRP). But the future of that plan is uncertain, since it hasn’t yet moved forward as the other parts of the agency reorganization have. This moment presents an opportunity for the current administration to make its mark in advancing this agenda. Building on our earlier work, and drawing inspiration from experience with the United Kingdom’s Foreign, Commonwealth and Development Office (FCDO), this blog offers four recommendations.

Invest in a strong central hub for evidence and learning

Most of the day-to-day work of collecting data, planning, and managing evaluations—as well as using research as an input into program design—happens in USAID’s missions. Mission staff manage most USAID programs and are most immersed in the country context, including, presumably, local priorities for evidence. Within USAID/Washington, PPL—especially its Office of Learning, Evaluation and Research (LER)—is the main home for evidence and learning. Evidence and evaluation-related functions also exist across the agency, including in Development Innovation Ventures, which sits in the new Bureau for Development, Democracy, and Innovation, as well as within sector and regional bureaus. This somewhat diffuse and decentralized structure has some advantages, but it risks fragmentation and, with it, missed opportunities for lessons learned in one part of the organization to be heard in others, or the evolution of divergent interpretations of evidence in different pockets of the organization.

LER’s main role is to support mission staff, chiefly through policy creation, guidance, and training. But its direct reach to missions has been more limited. Here’s where there is room for more investment. While each mission has a cadre of monitoring and evaluation (M&E) staff, specialized evaluation skills—especially those relevant to impact evaluations—aren’t spread widely throughout the missions. And this may make sense: working on impact evaluations—and interpreting impact evaluation results—will be a small part of most M&E staff’s time (a lot of which is consumed by reporting requirements), and most staff will work on only a few impact evaluations during their careers. The risks of this setup, however, are missed opportunities to generate high-quality evidence (and sometimes misdirected spending on lower-quality evidence), as well as programs that are disconnected from existing evidence. A key role for Washington support (in addition to setting standards) should be to supplement these skills, helping to identify opportunities to pursue evaluation, providing guidance on methods and sampling, and helping manage the evaluation process. Technical support is also useful for distilling existing evidence to help inform questions about program design, an exercise that takes time and expertise. As it stands, USAID’s centralized evaluation support isn’t adequately staffed to meet all these potential needs.

The agency needs a strong central team of experts who can provide consistent, long-term, and sometimes deployable embedded support to bureaus and missions to help staff identify needs and opportunities for evaluation (and data generation) and then help manage the process. Ideally, this occurs in partnership with local policymakers in a way that responds to their capacity and evidence needs. There is also a need for experts to serve as evidence “brokers,” providing tailored translation of existing evidence to help inform the evidence case for individual projects or strategies. Finally, a centralized hub can serve as a testing ground for evidence and learning innovations that cut across sectors—for example, USAID’s cash benchmarking.

These changes would, of course, require additional resources to stand up; some of the envisioned responsibilities could be covered by existing staff, but they would likely require the creation of new staff positions or structures. The United Kingdom’s FCDO offers a potential model: it employs a small cadre of evaluation experts, embedded both at the center and in bureaus and missions, with the common objective of supporting the design of evaluations where appropriate and the use and dissemination of their findings across the organization. A complementary option is to expand USAID’s partnership with the General Services Administration’s Office of Evaluation Sciences (OES), a unit whose role is to support federal agencies—often by embedding staff—to build and use evidence. USAID’s global health bureau has had a longstanding partnership with OES, but the agency could explore opportunities to expand and deepen this engagement beyond a handful of projects.

Create a central, high-level, empowered evidence lead

It is great that, under Administrator Power, support for evidence and learning is coming from the very top. But USAID also needs a strong, empowered leader to drive this vision forward across the agency. This person is ideally an intrinsically motivated evidence champion with technical expertise, but to be successful they must also be vested with meaningful policy authority, resources, and staff, perhaps as head of a unit like the one described above.

USAID has several positions with some features of an evidence lead, but all have key limitations. The head of PPL is the senior leader charged with advancing the use of evidence, but given the diversity of the bureau, the position has responsibilities and equities well beyond evidence. USAID also has a chief economist. The role has been vacant for several years but has historically been filled by technical experts. Compared to peer agencies like the United Kingdom’s FCDO and the World Bank, however, USAID’s chief economist is relatively disempowered. Sitting on its own without budget or staff, the role is more akin to a senior advisor. USAID has also designated a staff member as the agency’s chief evaluation officer, per the requirements of the Evidence Act. But that position, while technically strong, isn’t bureaucratically empowered, sitting as it does within LER.

Providing the chief economist with greater authority is one option worth exploring at USAID. At FCDO, the chief economist is supported by a small team of development economists (and a larger cohort of economists focusing more broadly on international economics). The post also oversees the Quality Assurance Unit, which is responsible for peer-reviewing larger spending proposals and for checking the quality of internal Annual Reports on program effectiveness. The role of FCDO’s chief economist takes on three dimensions. The post is partly technical, overseeing institutional structures designed to encourage (and, to some extent, police) the use of evidence. The chief economist also acts as an internal consultant, helping different country programs assess and revise their strategy and programmatic approach. Finally, the position provides thought leadership, encouraging—through think pieces, seminars, discussions with staff at all levels, and an annual economics conference focusing on development issues—approaches to problem-solving and program design that are rooted in evidence and economic theory. In sum, the chief economist role is central to how FCDO “thinks” and is given a prominence that reflects this.

Raise the profile of evidence in decision-making

USAID’s operational policy requires that the agency’s investment decisions be supported by evidence. The policy’s broad wording gives operating units a lot of independence in asserting how they meet these evidence requirements. While flexibility is appropriate, there’s limited quality control over how the policy is operationalized. Large programs are subject to senior leadership review, and one of the criteria reviewers must consider is the evidence case for the intervention, alongside local priorities, time and resource constraints, and politically based development mandates or other US foreign policy priorities. Senior staff charged with review understandably bring their own angles and interests to the process. Where there is daylight between these interests, the arbitration process for what matters is murky. That is, even though the evidence case is presented as part of the review, it is one factor among many, and it’s unclear, from a public vantage point, how heavily it gets weighed. Having a senior official clearly accountable for ensuring the evidential basis for USAID spending decisions would mean at least one player in this process advocates for evidence quality above other competing priorities.

To improve the implementation of this policy, USAID should strengthen the process of program review by requiring an independent check (at least for programs exceeding a minimum size threshold) that the best available evidence has been brought to bear on the program design. That independent review would be led by the focused, empowered, and technically strong evidence lead (and team) described above.

Of course, independent quality checks aren’t all-powerful. FCDO, which has an extremely strong reputation for the use of evidence, has not totally purged its portfolio of poorly evidenced interventions—or indeed of those with strong evidence of no effect. That said, the existence of a process for assessing the quality of evidence used in spending proposals provides a mechanism by which decision-makers can be held to account for approved program designs and empowers the chief economist and technical advisers to demand higher standards. While the potential for further progress remains, multiple independent assessments have found that FCDO programs and portfolios designed under this approach have performed well.

Set high standards across the whole evidence agenda

In addition to her general commitment to improving USAID’s use of evidence, Administrator Power has begun to use her platform to champion behavioral science, the study of why people act the way they do—and why they sometimes make decisions or behave in ways that yield less desirable outcomes. Applied to the practice of development, behavioral science uses experimental methods to identify interventions that help people or communities change their attitudes, norms, or behavior in order to produce improved development outcomes (e.g., hand washing, vaccination uptake, voting participation).

In remarks at the United Nations Behavioral Science Week, the Administrator pledged to incorporate behavioral science more fully into USAID’s work. This commitment is a welcome advance. Behavioral science is, in many cases, an important component of understanding how or why outcomes were (or were not) achieved. And some behaviorally focused interventions have shown great success. A recent study from IPA showed how a set of interventions designed to change norms and behaviors around mask wearing in Bangladesh led to an increase in masking and a decrease in the burden of COVID. And as Power noted in her UN remarks, a DIV-funded program that put stickers on buses in Kenya encouraging riders to speak up about unsafe driving practices—a small, low-cost “nudge”—significantly reduced accidents, injuries, and deaths.

But the application of behavioral science to development outcomes is a great example of why high standards for evidence are important. In a forthcoming paper, Stefano DellaVigna and Elizabeth Linos, of the University of California, Berkeley, uncover a gap between the large average effect of nudges demonstrated in the academic literature and the average effect of nudges realized at scale by “nudge units” in the policy world, including US programs. As Linos said to The Decision Lab,

“the likely impact of any given nudge is probably smaller than what policy makers would predict, if they only looked at the academic literature. This means that businesses or policymakers may need to move beyond “nudges” to achieve a larger impact. Nudge Units themselves acknowledge this — many behavioral science teams and experts are already exploring how to use insights from behavioral science to design better policies, better legislations, and rethink programs as a whole. Nudges are just one small part of the toolbox.”

As in the case of the Bangladesh masking study referenced above, behavior change may be about more than small nudges. The interventions employed to change masking behavior were extensive and overlapping. Free distribution of masks, and more importantly, paying individuals to remind people to wear them ended up being the most important contributors to the program’s overall success. More subtle behavioral nudges—text messages, cash rewards, pro-mask signage—were less impactful. And within the United States, we’ve seen that nudges in the form of incentive programs to increase COVID vaccine uptake have had limited effect.

The broader point is this: behavioral science is absolutely worth further attention from USAID, but that attention must come in the context of broader investment in experimental studies and evidence use. Insights from behavioral science will only realize their full potential in USAID’s work if the agency excels at identifying opportunities for experimental research—and getting the methods right—and is held to account for finding, listening to, and acting on good research as part of its program design. It will be critical, then, that any new focus at USAID on behavioral science—or on nudges more narrowly—be paired with broader efforts to improve how USAID incorporates experimentation into its work and uses evidence in policy and practice. The agency, despite noteworthy progress, still underinvests in high-quality experimental studies of any kind, is only starting to invest more in cost analysis, and doesn’t systematically bring evidence to bear in its programming decisions. It would be great to see USAID invest more in behavioral science—as part of a set of overall reforms that ensure USAID programs are either grounded in evidence or seek opportunities for experimentation and learning.

Thanks to Erin Collinson, Amanda Glassman, Anne Healy, and others for helpful comments and conversations.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.

