Asking the Right Questions about Evidence Use in Development Policy

Last month, Open Philanthropy published a list of open research questions they would like answers to. It’s a fascinating list, and in keeping with their mission, it focuses on some potentially high-impact and neglected problems where more evidence could make a big difference to improving social and economic well-being. One section in particular stood out to us: Science and Metascience. In it, they posed a question that many of us at CGD have grappled with too: “What forms of research do policymakers find most persuasive or useful?”

The question is a natural one for an organization that wants to apply the best evidence to make the world better in the most efficient way. If you think that research is the way we learn about making the world better, it’s a logical next step to ask which forms of research influence what the people who control the biggest spending and regulatory apparatuses actually do.

And we’d argue, in at least one crucial way, that it’s the wrong question to ask—or an insufficient one. That’s because it’s written from the researcher’s perspective. It assumes that policymaking is a decision-making and implementation process into which research is a regular input. If that is what the policymaking process looks like in practice, then as in almost all production processes, better inputs (like more skilled workers in a tech firm, faster and more powerful machines in an auto factory, better ingredients in a restaurant kitchen) will usually make the process better, churning out higher quality outputs, or producing more for less effort.

But that is a mischaracterisation of how policymaking actually happens. The kinds of research produced do matter, and we’d argue that implicit in the question is the premise that policymakers need different types of evidence from a range of sources to derive policy-relevant inferences. But it’s at least as important to ask when, and under what circumstances, policymakers are receptive to evidence.

Our experience is that from a policymaker's perspective, when is often a more fundamental question than what; and, to more fully understand the policymaking process, it needs to be answered first.

Policymaking is non-linear: it isn’t an act of constantly making things happen, nor is it continuously revisiting and reconsidering all previously made decisions to iterate and refine. It ebbs and flows, but there are typically critical inflection points or windows when evidence can be brought to bear on decision-making.

For the most part, policymakers tend to have three kinds of work: originating, maintaining, and exiting. Originating work involves doing something new: that can happen when a policymaker decides they want to shake up an old system or program (perhaps because they’re newly appointed to a policy area), it may happen when a programme or policy hits a natural break or renewal point, and it may happen—though more rarely—when they realise that their conception of the work was incomplete and a change or adjustment is warranted. When a country decides to institute a new school meals programme or a cash transfer, that’s originating work.

Maintaining work occupies the lion’s share of most policymakers' time. It’s simply letting things run and fixing minor bugs in the system. When money is allocated according to the agreed formula, teachers are allocated to schools, or exams are happening, most policymakers (most of the time) will be monitoring for unexpected results, but not plotting how to radically overhaul, or even modestly tweak, their system. Once the school meals programme has been developed and is being delivered, the system switches to maintaining: tracking to make sure that meals get to schools and that contract terms are being met.

Exiting work happens when something is wound down. This happens often where the government is carrying out or commissioning projects and more rarely where it is administering continuous programs. It also happens when contracts come up for renewal, when there is a turnover in leadership, or when policies and strategies expire (it can bleed into originating work but is distinct because, in the process of exiting, policymakers usually seek out lessons and decide whether to renew or start a new originating process). Winding down a school meals programme (as has happened in many countries) is exiting work: sometimes it will involve a switch to a new approach or policy, and sometimes just letting the work lapse.

Understanding this matters because influencing policy is ultimately about understanding policymaker attention. As Herbert Simon wrote in 1973, “Attention is the chief bottleneck in organizational activity, and the bottleneck becomes narrower and narrower as we move to the tops of organizations.” When attention is a scarce resource, the uptake of evidence depends in large part on when you can get attention for what.

Taking this perspective, whether evidence influences policy depends on more than the type and quality of the evidence provided, though these are important factors. It depends, most importantly, on the stage of the policymaking process. To start, we need to understand what kinds of problems capture vital attention at each stage, and then move on to unpacking what kind of evidence helps with those problems or, alternatively, is compelling enough to seize attention even if it doesn’t. And because the transmission of evidence is relational, the messenger can matter as much as the method. That is, the question of who is providing the evidence becomes crucial, too. And this is before we get to factors around competing incentives, the political constraints and institutional structures policymakers work within, and their understanding of how to get promoted, not to mention their propensity to read and seek out evidence that agrees with the positions they already hold.

All of these questions are under-studied. Most of the expanding literature on evidence-based policymaking examines how policymakers interpret and express preferences over different kinds of evidence, not when they take note of and act on evidence (though there are some exceptions). Studies also typically involve a high level of policymaker engagement with evidence: one-to-one engagement between an enumerator and the policymaker, with total attention given to the evidence for a substantial period of time. This is very different from drawing on evidence from a sea of available information as part of one’s day-to-day job, with competing demands and incentives.

And the most successful experiments have shown evidence take-up and action, but primarily for adopting relatively easy and straightforward policies. None of this means that the evidence is not useful: it is. It’s just that it answers questions about part of what is often a non-linear process, and there is much that we will still benefit from understanding about the rest of it.

Influencing policy is hard. And it is hard by design. A system that changed its mind every time a new paper came along would be hopelessly unstable and achieve little. But that also means understanding when and how good evidence on making the world better clears that high bar for policy influence is of substantial importance for improving human welfare.

Open Philanthropy is right that this is a big unanswered question. But it’s part of a bigger set of consequential issues that need to be examined.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
