
How Do Donors “Hear” Evidence?

Researchers take for granted that more evidence is better. By and large, they expect that their efforts to generate new ideas, test them empirically, and communicate the results contribute directly or indirectly to a better world. (Not all researchers think this way: some do research just for the love or challenge of it, but many do.) In reality, for that impact to be realized, research needs to be noticed and acted upon by people who actually do stuff. While there is a growing volume of research on how policymakers update their beliefs and act when confronted with research, there is surprisingly little consideration of how and when they are confronted with research in the first place, in the ordinary run of their work.

Development organizations (bilateral, multilateral, or philanthropic) tend to be more technocratic and evidence-driven than most. Given the stakes, and the sheer inadequacy of the resources they manage relative to the size of the challenges they seek to address, getting the most bang for their buck matters enormously, so they invest in building evidence into their functioning. Yet it is striking how different their approaches are, and how little we know about how well they work and under what conditions. Here, I set out four models donors use, and what (little) we know about their respective merits. At the outset, it's important to note that these approaches are not mutually exclusive: organizations may—and often do—use elements of each.

Insourcing research capability

Research is a specialist activity. Most researchers spend the better part of a decade in full-time higher education developing the skills required; and most research is undertaken by organizations whose resources and hiring are dedicated to the pursuit of new knowledge—universities, most obviously. This creates an inevitable asymmetry of capability between research institutions and policymaking institutions, both in their familiarity with a vast research landscape and in their ability to interpret and synthesize it.

One solution, then, for policymaking institutions is to replicate this model by building strong internal research capacity. This is the approach that the World Bank has taken, in part. (Though I focus on the bank here, the IMF and some other multilateral organizations also house well-respected research departments.) The World Bank’s Development Research Group (DECRG) in the Development Economics Vice Presidency is the closest thing any development organization has to an internal university, with full-time researchers producing academic work and publishing in top economics and development journals. It even has its own—highly regarded—academic journal, the World Bank Economic Review.

The attraction of this approach is that the World Bank doesn't need to go anywhere to learn about, say, how to support firms in developing countries. Bank staff can simply take the lift to David McKenzie's office and find out from perhaps the world's leading expert on the topic. The bank can build the highest-quality learning from its normal activities, as it did when McKenzie used the YouWiN! programme in Nigeria to learn how best to identify and support high-potential entrepreneurs. This is part of the World Bank's direct offer to its clients: access to some of the best researchers on the planet. But it is also designed to improve everything else the bank does.

At least, that's the intention. In practice, the World Bank's internal university has remained, to some extent, an ivory tower. It is not fully integrated into bank operations and has no oversight role over the rest of the organization; it does not even have a formal role in evaluating the bank's routine operations—that is the domain of the Independent Evaluation Group. In some ways, the DECRG is more like a highly regarded university department than it would like: it has influence, won by its excellence, over many development organizations, but virtually no direct, baked-in institutional standing. Paul Romer, in his brief stint as World Bank chief economist, considered closing DECRG down. He felt that the principle of comparative advantage should apply: universities produce research at lower opportunity cost in terms of real-world impact, so the World Bank should specialize in doing and allow others to specialize in thinking.

I would argue this is a step too far: the discipline and cross-pollination that come from close interaction between researchers and programme staff probably help improve both, though there certainly isn't perfect transmission of evidence into action. It is also expensive. Hiring what amounts to a mid-sized university research department on competitive salaries, and funding the research budgets they work with, is a significant outlay (it's hard to pin down the exact cost, but the overall Development Economics group is the most expensive of the World Bank's institutional services, at over $100 million—see page 76 here). Given the resources this model requires, it's also not one that more modest-sized organizations are likely to emulate in full. Other donors operate very limited versions of it at a fraction of the cost, hiring a small number of part- or full-time researchers to work within the organization, but these tend to be substantially less visible than a full department.

Information interventions

If Romer was right, though, and knowledge generation should be the specialist preserve of dedicated institutions, that leaves another problem, one most development organizations have to grapple with: how to smooth the transfer of this knowledge into the organization. The naïve view is that information processing is costless, its transfer is frictionless, and there are no bounds on development officials' computing ability and no biases in how they process information. In such a world, as soon as evidence exists, it is costlessly and perfectly incorporated into the knowledge the development organization draws upon to inform its decisions.

The real world is nothing like this. The barriers to knowledge transfer are large and persistent. Four are worth highlighting. The first is monetary: most academic journals and books are ruinously expensive, the product of the kind of market failures that would make Kenneth Arrow faint clean away. Development organizations do not typically provide their staff with access to academic journals; this is a real barrier to knowledge acquisition. The second is time: there is a large literature on almost everything, and reading and comprehending even a good summary takes several hours or days (including for people trained for the job). If this doesn't sound like a long time, you have probably never worked in a government department. The third is computing capacity: understanding research is hard, and even researchers frequently argue about the correct interpretation of existing bodies of evidence. The last is bias: we are not machines; we interpret the world through frames and in the context of our previous beliefs and decisions, and all of these affect the way we process new information.

In response to these sorts of problems, some organizations have created “knowledge products” that aim to simplify, interpret, and communicate the key information required for the programmatic and policy decisions their staff make. One of the most prominent such attempts is the “Smart Buys” report on learning produced by the Global Education Evidence Advisory Panel, supported by the FCDO, World Bank, UNICEF, and USAID. This report aims to summarize a huge literature and—critically—interpret the findings so as to identify the best and worst approaches to achieving a specified policy objective: learning. It works (in part, as we will see below) by relaxing each of the four constraints identified above. And it is not the only such approach: in the UK, the National Institute for Health and Care Excellence (NICE) does much the same job (indeed, going beyond it) for health interventions. FCDO produced a number of other “Best Buys” for other sectors, too; USAID's revamped Office of the Chief Economist has taken up this charge, aggregating and summarizing findings to help staff across the organization (including in USAID missions) identify cost-effective interventions. Less prescriptively, some donors (including FCDO) have standing contracts with researchers to produce rapid evidence summaries on specific policy questions to inform decision-making.

The benefit of this approach is that it directly addresses some of the constraints to evidence use in development agencies and can be tied explicitly to the policymaking and programmatic process. It also standardizes the understanding of evidence, to the extent that the simplified evidence is itself easily interpreted and understood by policymakers. And it is substantially less expensive than maintaining a full-time group of researchers, with costs likely to run into the tens or, at most, hundreds of thousands of dollars, rather than millions.

On the other hand, when constraints to evidence use are not driven by poor understanding or knowledge of the evidence, these approaches achieve less (though not nothing, as we will see below). They also assume that the central agency or body that produces the summary information is better informed and better able to process and package information than the ultimate users. While this is very likely (indeed almost certainly the case in many examples), the more prescriptive they are about the interpretation of evidence, the more likely they are to introduce errors in those cases where local conditions do run against the grain of the prevailing evidence. Whether this represents a net improvement depends on whether these incidents are more or less common than mistakes that come from incorrect interpretation of evidence by decentralized decision-makers.
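To put that trade-off in stylized terms (the notation below is my own illustration, not something any of these organizations use): a central, prescriptive summary is a net improvement roughly when

\[
p_{\text{exception}} \cdot C_{\text{central}} \;<\; p_{\text{misread}} \cdot C_{\text{decentralized}}
\]

where \(p_{\text{exception}}\) is the probability that local conditions genuinely run against the grain of the prevailing evidence (so the central prescription errs), \(C_{\text{central}}\) is the cost of that error, \(p_{\text{misread}}\) is the probability that a decentralized decision-maker misinterprets the evidence unaided, and \(C_{\text{decentralized}}\) is the cost of that mistake.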

Building the demand for evidence

The provision of simplified information is a supply-side intervention for the use of evidence. It makes evidence easier and cheaper to access and process for those who want it. Many organizations supplement this approach with attempts to drum up the demand for evidence. This is usually achieved by the appointment of “evidence champions” in the organization. USAID tapped Dean Karlan as chief economist in November 2022, empowering his office to galvanize the whole organization's approach to evidence use. FCDO, and DFID before it, employed “evidence brokers” and an Evidence Into Action Team, full of smart professionals whose role was to match the right evidence to the right potential user and to motivate users to seek out and use evidence more often. More subtly, part of the purpose of the chief economist and chief scientific adviser-type roles in most agencies—and nearly every bilateral and multilateral agency has a chief economist, some several—is to serve as an exemplar to the organization, encouraging the use of good analysis and evidence throughout its work. This tends to be a cheap way of influencing the organizational approach to evidence, requiring only a handful of highly trained staff, usually without a substantial research or operational budget.

Demand-side approaches aim to work through two channels. The direct channel is identifying specific individuals and teams and “selling” the importance of evidence to them, so that they themselves look for and use evidence in their work. This can work by correcting misperceptions about the existence of useful evidence (many programme or policy teams may be unaware of the extent of the evidence base and its applicability to their problems) or by effecting some form of behaviour change in targeted individuals or teams by changing their preferences—the “evangelizing” channel. The indirect channel also effectively evangelizes, working through culture change and highlighting the benefits of evidence in the decision-making process (some organizations, like GiveWell, have explicitly evolved around a culture that privileges evidence in their decision-making).

The evangelizing channel addresses the deeper incentive problems that lead to the underuse of evidence, but only partly. If decision-makers are optimizing for outcomes beyond impact and the “naïve” organizational mission, then evidence that speaks only to impact and organizational outcomes is of limited value to their optimization problem. For example, if Task Team Leaders at the World Bank are focused both on maximizing the impact of a programme and on ensuring it is simple, easy to implement, and easy to finance (to maximize their promotion prospects), then evidence that addresses only the programme's impact, without considering its organizational simplicity and appeal, will play only a partial role in their decision-making. Demand-side interventions can shift this at the margin, but how far depends on the strength of the competing incentives.
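A stylized way to see this (again, the weights and notation are my own, purely illustrative): suppose a decision-maker's objective is

\[
U = \alpha \cdot \text{Impact} + (1 - \alpha) \cdot \text{CareerPayoff}, \qquad 0 \le \alpha \le 1.
\]

Evidence shifts beliefs about Impact only, so its influence on the final decision is scaled by \(\alpha\): when competing incentives dominate (\(\alpha\) is small), even perfectly communicated evidence moves choices very little, which is why demand-side persuasion tends to operate only at the margin.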

Institutional guides and guardrails

If the underlying constraint on evidence use is that decision-makers are motivated by competing incentives, for which evidence on impact is irrelevant or even actively unhelpful, then organizations face a more profound problem in incentivizing the use of evidence. They cannot simply make evidence more available or hire people to talk about its value. They need either to change their internal reward structures or to redesign the decision-making process to be less sensitive to these competing incentives. One way of doing this is through institutional checks and balances in the decision-making process.

The FCDO (following DFID before it) uses a system of third-party quality assurance (a form of peer review) for large projects, which has this effect. Because the third-party reviewers—who are still civil servants—do not have the same career incentives as the programme manager, they can assess evidence use and make recommendations unencumbered by the competing incentives affecting project-originating teams. The World Bank also uses informal and formal peer review structures to build incentives for evidence use and to protect against the undue influence of competing incentives. These structures can also provide additional information and “computing power” where those are the ultimate constraints on the uptake of evidence. Most other organizations use some form of organizational approval process, but only some explicitly build an evidence-assessment component into it. Even project approval documentation can have this effect, compelling officials to set out the evidentiary basis for their proposals. And approaches like the “Best Buys” in FCDO have a secondary purpose: to incentivize staff to use good evidence by making it more costly to disregard it (since staff will then have to justify why their reading of the evidence differs from that of the chief economist and their team).

In practice, though, these approaches are limited by the organization's capacity to review and assure projects and policies, which tends to be scarce. Their cost includes slowing down organizational processes—which may still be cost-effective if it improves impact, but is a real cost nonetheless.

What this means for evidence promotion

Ironically, few of these approaches have themselves been the subject of rigorous research scrutinizing their effect on the effectiveness and impact of the organizations that use them, and what research does exist (including my own) is inconclusive. Yet for organizations like CGD, whose mission is to shift practices in international development (especially among donor governments) on the basis of evidence, learning about the mechanisms these institutions actually use to learn, and their respective merits, strengths, and weaknesses, matters enormously. Without that understanding, engagement uses only a subset of the possible channels of influence: engaging with specific evidence champions through targeted or diffuse systems (targeted might mean contacting them directly; diffuse, through their participation in conferences, seminars, and the usual systems for disseminating research), or influencing the broader literature, in the hope of eventually being incorporated into knowledge products or demand-side structures.

Researchers and think tanks with an interest in policy should care about which of these systems different donors use, for two reasons: first, we want to influence the most effective of these systems to have the biggest impact; and second, we want to encourage the expansion and use of the best systems to improve evidence use. Resource transfers to developing countries are pitifully inadequate; a large portion of them is still allocated across competing geographies and uses through donor decision-making. Learning about how donors institutionalize their learning from evidence, so as to improve this process, is of first-order importance and remains understudied.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
