Recent CGD events have shown several bright spots for the evidence-informed policy agenda, demonstrating leadership commitment to evidence generation and use. The new World Bank president, Ajay Banga, called for the institution to deepen its role as a knowledge bank in a recent speech at CGD. Newly elected IDB president Ilan Goldfajn has also elevated the importance of development effectiveness, including at CGD. And at a panel discussion on going beyond the dollars in MDB reform, colleagues with expertise from USAID, IDB, Gavi, Norad, and the World Bank emphasized the critical role of evaluation in improving the impact of development policies and programs.
As Banga prepares to put into action his ambitious strategy for the World Bank to lead global development policy, we hope to see evidence prioritized even further. In Banga’s words, “we won’t win the fight ahead of us without taking some risks. The trick is: don’t make the same mistake twice.” But what will this mean in practice?
Turning broad support for knowledge and effectiveness into actual evidence use requires concrete plans to revisit operational processes, decision-making, and incentives across the institution. And landing these plans will require strong, committed internal champions, as discussed in the final report of CGD’s Working Group on New Evidence Tools for Policy Impact. Scaling evidence generation and use to enhance impact is all the more important as the World Bank advances new reforms to address both global challenges and development priorities.
Evaluation and evidence enhance development effectiveness and value; they are not mere hoops or delays. Moreover, development finance from multilateral development banks will be marginal in the long run, especially in middle-income countries, making a pivot towards more systematic knowledge generation and use all the more strategic. Knowledge generation and technical expertise are a clear domain in which the World Bank can continue to add and expand its value.
Evidence and evaluation should be core to the design of operations, and structured to be agile, modest in cost, and relevant to policymakers.
In a new note, we outline five operational brass tacks for World Bank leadership to develop such an evidence function and then translate evidence leadership into practice. Here’s an overview:
1. Shareholders must place greater value on evidence—and fund it accordingly
The World Bank’s shareholders must demand greater accountability and more rigorous evidence. This means going beyond uninformative “scorecards” and instead taking into account the proportion of projects, by volume and value, with rigorous evaluation built in, as well as the use of existing evidence to inform project selection and design to assess and improve impact.
But one reason so few World Bank projects are rigorously evaluated is the current funding approach and trust fund model, which fragments evaluation activities across the World Bank and leaves them vulnerable to insufficient and volatile aid flows. An off-the-top allocation for proper project evaluation, one that leverages additional IDA funds for the project itself and applies to all projects, could help address these funding limitations. One way to scale and systematize learning from implementation would be to commit funds towards the evaluation of a large proportion of board-approved World Bank projects.
2. Use multiple evidence-generating approaches to inform individual projects and thematic or sectoral areas
At the individual project level, adaptive, real-time evaluation approaches should be used to inform adjustments to ongoing projects. But not every project can or should have a large-scale, rigorous impact evaluation attached to it.
As one option, World Bank teams could implement smaller-scale evaluations or one-off assessments to inform adjustments to the design parameters of ongoing projects. Another alternative may be to build evaluations into programmatic support rather than to think of them as informing individual projects. A third idea is “batch” project preparation: workshops organized with project teams, researchers, and counterparts around a particular sector or theme to help embed evidence into project design and inform thematic and/or sectoral areas.
3. Incentivize operational staff to follow the evidence
Professional success is still too often measured by project approval and disbursements, as opposed to learning from, acting on, and sharing evidence. Operational staff are currently rewarded for projects that are approved by the World Bank’s board of directors. This can lead to a fraught relationship between operational staff and evaluators. Instead, could team leaders be rewarded for embedding an assessment into project design, incorporating lessons from the assessment into the project scale-up, and using iterative evaluation to inform real-time program improvement throughout implementation? Equally, on the evaluative side, research staff incentives must be aligned to help operational teams answer questions that are relevant to them and to partner country policymakers, not just of interest to academic journals. Management must also be incentivized to oversee rigorous evaluations that cover their full portfolios.
Importantly, interest from country partners to use rigorous evidence must also be prioritized and generated, including by designing evaluations that respond to their policy questions and available decision space, discussed more here. The experiences of various evaluation initiatives within the World Bank show that when development partners help officials use evidence to solve pressing problems and questions, governments are more likely to demand more evidence going forward. But this requires upfront demand generation.
As one example, given that limited fiscal space and implementation capacity often result in staggered program rollout, the World Bank’s financing could be leveraged creatively to tie concessional lending to the rigorous evaluation of alternatives to business-as-usual approaches.
4. Research should be quality controlled to ensure independence and credibility
As discussed in our earlier CGD piece on knowledge generation recommendations for the World Bank, the institution needs a range of evaluation structures that can inform policy and lending operations in time for decisions. At the same time, it must protect the integrity and independence of the research to remain credible and to guard against capture of the evaluation or assessment process.
Research is generally credentialled through publication in a peer-reviewed journal. Where feasible, this could be encouraged and operational staff’s contributions to academic publication—not just working papers—could be recognized as part of the World Bank’s formal staff evaluation process. But many policy-relevant assessments may not be of interest to academic journals; such incentive misalignment is a key finding of CGD’s working group.
Considering this, the bank’s own peer-review process could be empowered as an alternative. For instance, external reviewers could be brought in at both the concept note and final report stages. Involving researchers at the design stage and moving the concept note review earlier in the process would further increase entry points for evidence into decision-making.
5. Results should be reproducible
Another dimension of accountability in the research process is the reproducibility of results. We welcome recent efforts by the World Bank’s researchers to invest in the replicability of their analytical work. Posting all the data and programs used for analysis on the World Bank’s microdata catalog once an evaluation is approved or published is one way of providing greater accountability. Many academic journals now have data editors who review the data and programs submitted for all accepted papers and verify that the results are indeed reproducible. The World Bank could employ full-time data editorial staff to similarly ensure that the posted data and code reproduce the results presented in reports.
The return on investment
The World Bank has a long history of engaging in rigorous assessments of its work, yet only 5 percent of its projects have been subject to formal impact evaluations since 2010. Devoting just 1 percent of IDA loans and grants to impact evaluation would amount to approximately $400 million per year (IDA is around $40 billion per year). While this would be an almost sixfold increase over the $70 million currently allocated to the bank’s research activities, the returns could be sizeable. One percent is not very hard to “earn back” in terms of greater impact: on average, the investment would break even if the effectiveness of a project’s spending improved by just 1 percent. That is a compelling and feasible target given the scope for improvement that many evaluations reveal, including those discussed in our note.
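The back-of-envelope arithmetic above can be made explicit. This is a minimal illustration using only the round figures quoted in this post ($40 billion in annual IDA lending, $70 million in current research spending); it is not based on official budget data.

```python
# Illustrative break-even arithmetic using the post's round numbers
# (assumed figures, not official World Bank budget data).
ida_annual = 40e9        # approximate annual IDA loans and grants, USD
evaluation_share = 0.01  # proposed off-the-top share for impact evaluation

evaluation_budget = evaluation_share * ida_annual
print(f"Annual evaluation budget: ${evaluation_budget / 1e9:.1f} billion")

current_research_spend = 70e6
print(f"Increase over current spend: {evaluation_budget / current_research_spend:.1f}x")

# The evaluation set-aside pays for itself if it improves the effectiveness
# of project spending by the same share that was set aside.
breakeven_gain = evaluation_budget / ida_annual
print(f"Break-even effectiveness gain: {breakeven_gain:.0%}")
```

Running this confirms the post's figures: a roughly $0.4 billion annual budget, a roughly 5.7x (almost sixfold) increase over current research spending, and a 1 percent break-even effectiveness gain.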
Read our full note and related analysis on climate and beyond for further insights on how more evidence can result in greater impact from the World Bank, if strategically integrated and incentivized across its operational structure.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.