When the COVID-19 crisis hit, policymakers the world over found themselves grappling with urgent decisions in the face of uncertainty. The pandemic had quickly turned people’s lives and livelihoods on their heads, and the data available to guide policy response was often incomplete or outdated. To remedy this, policymakers and researchers began teaming up (often with the support of external financing) to try to better understand how the pandemic was impacting different populations, and to measure the impact of investments in COVID-19 response.
Last month, CGD and Innovations for Poverty Action (IPA) hosted a conversation with policymakers (Daniel Gomez Gaviria from Colombia’s National Planning Department), researchers (Radha Rajkotia from IPA and Maryam Akmal, then at CGD), and donors (USAID’s Josh Kaufman and Eunice Yaa Brimfah Ackwerh from the World Bank’s Ghana office) to explore how the pandemic has impacted the engagement between policymakers and researchers. The discussion highlighted several key lessons to inform partnership approaches for evidence-based policymaking throughout the COVID response and beyond. With an eye toward being better prepared to meet the evidence needs that may arise in future crises, we point to five key themes below.
Collecting and analyzing data to target programs or evaluate their effectiveness can be challenging under the best of circumstances.
The crisis revealed the importance of existing policymaker-researcher partnerships.
Several panelists emphasized the value of being able to draw on preexisting partnerships when the pandemic hit. Establishing new relationships can be prohibitively onerous when time and resources are already stretched thin; a more successful approach was to build upon existing partnerships and repurpose work programs to meet new and emergent needs. Gomez Gaviria gave the example of being able to quickly pivot an ongoing partnership with the World Bank to conduct an immediate needs assessment to inform plans to expand the government of Colombia’s existing cash transfer program to meet increased needs.
To turn data into policy impact, the nature of partnerships matters.
The overarching objective of policy research is to inform policy decisions—which hopefully achieve impact. But not all research efforts are equally successful in reaching policymakers. A common theme that emerged throughout the discussion was that the structure, processes, and goals of a partnership are important in spurring policy uptake.
Akmal highlighted that a lot of development research has a representation problem. The predominant model tends to be one in which data collection happens in the global south—often by local enumerators—with researchers in the global north conducting all the analysis and delivering the final, fully baked report. While this type of research model can produce useful insights, it may get limited uptake if it fails to respond to local needs or isn’t grounded in a partnership built on mutual trust. Multiple panelists pointed out that involving policymakers and other local partners throughout the lifetime of a project—from co-creating the research plan and priority questions, establishing research methods, collecting data, and conducting analysis to planning for policy responses based on research results—can encourage critical buy-in. To strengthen this kind of partnership, Rajkotia pointed out that a key role of international research teams—and the donors that support them—should be drawing upon, supporting, and strengthening local government partners’ own data and research capacity. This can take the form of technical assistance or long-term, structured collaboration.
Co-creation and capacity building often come with tradeoffs, however, particularly in terms of speed, which can be important during a crisis. It’s important, therefore, to invest in these relationships and capacities during “normal” times, not only to meet research needs for evidence-based policymaking in a steady state environment, but also to have strong local resources and established trusted partnerships to tap into during times of crisis.
Data accessibility is critical for translating evidence into policy impact.
In addition to making partnerships more collaborative, researchers must invest in communicating their results to ensure their findings are available and accessible. Kaufman, discussing USAID’s use of data to inform its own plans to support COVID-19 recovery around the world, applauded the wide range of COVID-19 impact data already published. But time-strapped policymakers, pulled in multiple directions at once, are rarely able to take in, much less absorb, the flow of new research unless it is translated into targeted, useable information.
Regional, country-based, or sector-specific evidence portals can be a valuable way to store and share data and evaluation results. Donors or other organizations with connections across countries could consider brokering such a service during a crisis to facilitate knowledge sharing around key questions. Portals like this can help enable rapid systematic reviews of crisis studies and aid researchers in mapping out evidence gaps so that they can more effectively prioritize new research.
To make evidence usable for policymakers, tailoring communication to their specific needs and questions can be particularly helpful. Ackwerh noted the success of Ghana’s quarterly education evidence summits, which provide a forum for researchers and policymakers to discuss new research and its implications for strategy and implementation. And, as Rajkotia discussed, researchers can work hand-in-hand with policymakers at the outset of a research project to develop a policy response plan—a set of options to pursue based on the range of outcomes that might emerge.
Times of crisis can motivate the use of new or improved data collection methods—and shine a light on ways to improve the utility of existing data sources.
While the pandemic created an urgent demand for new data, it also led to restrictions in access and movement, rendering many forms of data collection impractical or even unsafe. This required changes in data collection systems.
One shift, described by Rajkotia, was IPA’s rapid adoption of phone surveys in lieu of typical in-person collection methods. With enumerators working from home, IPA had to identify where they needed to adjust their frameworks for things like privacy (which looks different for a phone call) and data security—and then quickly implement, test, and refine those new processes.
In many ways, the pandemic merely accelerated a shift toward phone-based data collection that was already underway. In fact, we’re likely to see more of it, even as the crisis subsides. But it won’t be a wholesale shift. As Rajkotia pointed out, there are some questions better broached through face-to-face interactions (those about sensitive subjects in particular, like intimate partner violence). And researchers may need to contact subsets of the population not reliably reachable by phone, considering issues like mobile signal coverage or gender disparities in phone access.
In addition to advancing new methodologies for specialized data collection, Ackwerh called for greater use of administrative data—including records of government service delivery (e.g., in health, education, social safety net distribution) that are readily available without additional outlays of time or money. Where the quality of administrative data is weak, donors and researchers can play a valuable role in building government capacity to improve it.
Beyond questions of administrative data quality, issues around its usability also emerged. Gomez Gaviria pointed out that limited interoperability across various data systems made it difficult for the government of Colombia to bring together the data it needed to inform plans to expand an existing cash transfer scheme. Data existed in databases across government ministries and even within the private sector (finance, telecoms). A uniform recognition of the urgency of the situation helped align the incentives of agencies and firms to cooperate. This, combined with government-issued emergency decrees that accelerated administrative steps, helped enable the various government and non-government actors to work more quickly to link databases than they otherwise might have. But this example points to the need for attention in “normal” times to building interoperability, as well.
Flexible financing is critical when the direction of policy response—and the data needed to inform it—is uncertain.
During normal times, policymaker-research partnerships—and the external resources that fund them—are typically planned in advance and oriented around specific policy questions. During COVID, however, a range of new priorities emerged. Sources of external funding to support these new priorities—and the new data needed to inform their targeting and measure their impact—were slower to respond. Many traditional donors, including bilateral donors like USAID, simply lack the budget flexibility to react when new needs arise. Some private donors, like foundations, can be more agile.
If donors are serious about mounting as effective a crisis response as they can, it would seem that a small amount of flexible funding to meet evidence needs would be advantageous. But with flexible funding in short supply and competing demands upon it, the evidence community may need to do more to make its case. Seeking to measure the impact of data and evidence generation and use could help.
That all said, realistically speaking, it’s rare for donor budgets to become significantly more flexible. The question, then, is how better to leverage planned, steady-state research funding to enable timely and relevant data collection and analysis during a crisis. Drawing together some of the themes above, donors or organizations focused on development and social protection could regularly incorporate support for long-term research partnerships with governments, country-based capacity development, and improving administrative data quality and usability across government. Supportive partnerships that advance governments’ and local researchers’ efforts to use data to improve intervention targeting and results measurement will both serve policymakers when we return to “normal” and set a stronger foundation for emergency response in the future.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
Image credit for social media/web: IMF Photo/Ebun Akinbo