[00:00:22] Rachel Glennerster: Good morning and welcome everyone. I'm Rachel Glennerster, President of the Center for Global Development, and it's my pleasure to welcome you on behalf of CGD, but also our partners and co-hosts here, the Development Innovation Lab at the University of Chicago. So thank you for joining, those of you online as well as in person. This last year has been a tough one for those of us in the development community, with aid cuts in the US, the UK, Germany, France, and many other countries. But what's really important to hang on to in this period is remembering that the vast majority of funding for anti-poverty programs and development in general comes from low- and middle-income countries themselves: both the poor in those countries and the governments there. So while aid matters, one of the most effective things we can do is support low- and middle-income country governments to scale up things that have been found to be highly cost-effective and that can work at scale. So how do we do that? That's what today is all about. But there are many steps to getting to that goal of scaling up highly cost-effective interventions. One is understanding the critical needs on the ground. Another is identifying innovative solutions that have the potential to scale. Then designing and rigorously testing new approaches, adapting those approaches to scale, comparing the relative cost-effectiveness of different approaches and what is best suited to different conditions, and then supporting countries to actually scale. At CGD, we're involved in many of those different steps. We are running RCTs, for example, on reducing lead poisoning. We're synthesizing evidence to identify the most cost-effective interventions, for example in education, and helping countries prioritize their budgets to make sure they focus on the most cost-effective interventions.
We're one part of many different groups working on this important agenda, and today we're bringing together people from governments, researchers, NGOs, and international agencies who have worked on this agenda to ask the question, and to learn, about how we can design, fund, and institutionalize evidence-based policy innovations so that they reach many more people. We'll reflect on what has worked, what has failed, and where the bottlenecks are, which could be incentives or capacity or bureaucratic constraints, and I'm very pleased that we have members of government who've actually struggled to do this themselves. To help us think through these issues, we're fortunate to be joined by a really outstanding set of speakers, and we start with Michael Kremer, who will deliver the keynote. He is a professor at the University of Chicago and a Nobel Prize winner. He has not only helped scale many evidence-based policies to millions of people, but has designed the institutions that can support others who are attempting to scale interventions. We'll then transition to a panel discussion featuring, as I say, policymakers and researchers who have experience collaborating to bring things to scale. So I encourage you to actively engage, ask tough questions, and push us towards practical strategies. Thank you for being here, and I'll now hand over to Michael.
[00:04:41] Michael Kremer: Thank you so much for this introduction, and thank you to CGD for inviting us to talk about evidence-based innovation. Innovation is a key driver of economic growth, as we've seen for some time. And I think it's also critical for improving social indicators, and even environmental outcomes. There are some types of innovations that attract a lot of commercial investment. Take the cell phone in my pocket, for example: there are profits to be made, and investors can capture them. But there are other types of innovations for which there is great social need, but the commercial incentives are smaller. For example, figuring out how to improve pedagogy in mathematics in public schools in low-income countries.
Probably that's not going to attract as much investment, but it's also very important. And I think that creates an arbitrage opportunity for evidence-based innovation funds to support that type of innovation. There are many possible ways to structure these funds, and the appropriate way will vary depending on the aims of the funder and the nature of the host institution; I think there's probably complementarity among the different approaches. I'm going to very briefly touch on one very important approach, which is government innovation units. I think these share some similarities with things that are happening in Indonesia as well, although I won't talk about the Indonesian case. But I'll devote most of my talk to discussing the track record of a tiered, evidence-based, open social innovation fund, drawing on the evidence from USAID's Development Innovation Ventures. Let me start by saying a little bit about government innovation units. These are often located in a finance ministry or another central government department that crosses ministries. There can also be important units within sectors, within an education ministry or health ministry, but let me focus on the broader case. Central government can work with departments or regions, and also with external researchers, to look for potential policy innovations; here I'm focusing on the subcategory of policy innovations. They collaborate with researchers within and outside government, potentially within the country and externally, to conduct some sort of experimental or non-experimental evaluation, and then the most successful innovations can be scaled up. Because this is all done within a government, it's relatively natural that the things selected will be things of interest to the government in the first place, and there's a very natural path to scaling.
So there are many examples of this. Let me discuss one category, which is nudge units, which were set up in the UK, the US, and India. The US one, I think, transformed and became a bit broader, and the UK one similarly evolved. But that's an example many people will be familiar with. Those focused on behavioral interventions; of course, there are many other innovations that can be considered. An example of something broader, and we're lucky to have Ryan Cooper here with us, is the Experimental Policy Initiative in Chile. It identifies and rigorously tests new and existing programs, policies, and ideas developed by units within the Chilean government. And it's designed with two complementary elements. There's an internal evaluation team that collaborates with national and international scholars to design and implement controlled and natural experiments, focusing on large-scale structural programs and policies. And then there's an impact evaluation fund, which releases requests for proposals to attract innovative ideas for new or existing Chilean government interventions. Ryan led this effort in Chile, and he's now at the Development Innovation Lab, working with governments to support scaling this up. We will hear a little later about what's happening in Peru, but there's also a unit in Fortaleza, Brazil, and a program internal to the Ministry of Education in the Dominican Republic. If people here are interested in trying to establish a similar unit, I'm sure Ryan would be happy to talk. Let me turn to another type: open-tiered, evidence-based social innovation funds. These are broader; for example, they're open to things that would scale commercially rather than through governments. Whereas government innovation units are naturally focused on a particular country, these often cover many countries. On the other hand, they could potentially be focused on a particular sector, depending on the interest of the organization setting them up.
They complement that openness with a structured evaluation process, because if you're really open, how are you going to evaluate things? How are you going to make decisions and make sure you're not throwing good money after bad? In DIV, there's quite a rigorous process. There were small amounts of funds for piloting new ideas; I'm going to discuss some evidence from the early portfolio, and these were actually very small amounts of funds. There were somewhat larger amounts of funds for rigorous testing. But the large-scale funds are reserved for things that have evidence of impact and cost-effectiveness from rigorous testing. That testing doesn't need to be done within DIV or the fund; innovations could reach that stage through other routes. But you can't get large-scale funding without some rigorous evidence, or, in the case of private innovations that would scale commercially, without quite a rigorous assessment of whether this is really commercially viable in the places where it already exists. There's also a hybrid category, but I'm not going to go through all of that. A while back, we looked at the early portfolio, from 2010 through 2012; DIV was set up in 2010. We previously analyzed outcomes through 2019, and we're now in the midst of updating that analysis through 2024. We're sort of halfway through that, so I wasn't quite sure whether to present it today, but I'd love feedback and thoughts. I won't be able to go into detail, but I'm happy to talk afterwards. Partly, we're just going to ask the question: is this type of social innovation fund a good investment? Obviously, funds can go to immediate needs right now, and that's very important, and clearly there are a lot of them. So does this pass a basic benefit-cost test? Are the benefits greater than the costs?
And then, also to shed light on the design of these funds, I think it's useful to ask which innovations scale and what the predictors of that are, because that can give guidance on optimal investment strategy. Out of that will come some implications for design at a number of levels, which I'll discuss if we have time. So here's a preliminary updated analysis, as of 2024, of the number of beneficiaries of the various innovations supported by DIV. Let me note one thing, which didn't make it onto the slide; it's in my notes pages and I forgot to say it, so let me say it as if it were on the slide. Generally, there are two routes to impact that these innovations can have. One route is by influencing the larger organization of which they're part. If you're thinking about government innovation units, clearly that's a key thing they can bring to the table. If you're thinking about open-tiered, evidence-based social innovation funds, there's at least the potential to influence the larger policies of the organization. If you think about the example of conditional cash transfers, this is something the Inter-American Development Bank was involved in, and it really influenced the larger policies of the Inter-American Development Bank and the World Bank; it had a very big impact in that way. Having successes like that is one route. Another route is directly funding innovations, which might then scale in a variety of ways. And one of the things about open-tiered, evidence-based social innovation funds is that, because they're not part of a government, they may be able to take somewhat bigger risks; they typically have open calls for proposals. So now let me break it down by category. If you look at the scaling of innovations, in blue I've put an innovation that scaled outside of direct DIV funding, through DIV's influence on USAID as a whole.
I'm happy to discuss this afterwards if people want. It probably could have been funded within DIV, but it was done through other parts of the agency. That one, you'll notice the break in the line, has reached a truly enormous number of people because it scaled up in India, which has a national program. Not just India, but really the innovation was bringing this existing innovation to India. The ones in red scaled without major involvement by the rest of the agency, through other mechanisms. For those of you who've seen the analysis as of 2019, roughly 100 million people had been reached by then. That figure didn't include deworming, so the comparison isn't quite apples to apples, but you can see there's been quite substantial growth since. I believe it's 8% a year growth overall; deworming in India grew quickly and then plateaued, and excluding that, I think it's 25% growth. I might be wrong about that. You'll see some of these have asterisks by them. Yes, 8% and 25%, those are the right numbers. I'm going to move on. There are various definitions you can use of how many people were reached. What really counts is the return. For that, you multiply the number of people reached times the benefit per person reached, and the final answer will be the same, independent of definitions: you can count individuals or households, but the total benefits will be the same. I'll move on to that in a minute. But we weren't able to do this for all the innovations, because some of them are very hard to put a dollar value on, or we didn't have sufficient data. The ones with asterisks are the ones where we thought it was at least worth trying to put a dollar value on them.
And those were generally innovations that delivered financial benefits or health benefits. Here is an analysis of the benefit-cost ratio, and this is very preliminary; I'm sure it's going to change. Throughout, we conservatively assume there would be zero benefits after 2030, so the programs would fall off a cliff and end then. As I noted, they've been growing quite quickly up until now. We break it into categories. If we just look at things that scaled through direct grants, and we assume a 5% growth rate through 2030, you get a 39 to 1 benefit-cost ratio. Sorry, I should clarify this: it's taking the benefits of the innovations with asterisks, not against the cost of those innovations alone, but against the cost of the entire portfolio. There were roughly 45 innovations supported, so we're taking the cost for all of those. The benefits of the subset are roughly 40 times as big as the cost for all of them. Even if we did this more conservatively, it's still going to be huge. So is it worth investing? The answer is yes. What about the benefits that come through influence on the institution? There we've only got one data point, deworming, but the benefits of that are truly enormous. It's a single data point, so it's hard to generalize, but I think it does suggest there is an advantage for institutions like multilateral development banks or aid agencies in trying to maximize opportunities for that type of flow as well. Okay, the overall portfolio, again, looks phenomenal. Sort of embarrassingly high numbers; you could try to rig them to be smaller, and they would still be large. Okay, let me just say a few things about the patterns of scaling. There are some things that held in the previous analysis; I'll go to those in a minute. We haven't done all the analysis we'd like to, but let me point out one thing that I think will be obvious if you look at this.
The three largest innovations there were all in India, and India accounts for a lot of the number of people reached. So one lesson, without naming a country, is that investing in innovations in large countries seems to have a particularly big impact in terms of the number of people reached, and presumably in terms of the dollar value of benefits as well; or at least it's worth considering that. Obviously, different organizations will have different objectives. Some other implications: there's a saying in development that pilots never scale. Well, if you look at the rate of scaling of pilots, it was definitely lower. These figures are, by the way, from the previous analysis, because we haven't updated them. But if you look at the dollars spent per person reached, it actually looked best for the pilots. Now, these weren't random pilots; they were selected through a process. We were looking for things that had the potential to scale, and we also screened on some other things, which I'll point out in a minute. But it suggests that appropriately chosen pilots can in fact be an effective route to scale. Other factors: cost was a key determinant. This is very simple, and I think people often neglect it; there are complicated ways people try to judge things, but this one is very strong. Then there's rigorous evidence from randomized controlled trials, which was very collinear with external researcher involvement, so it's hard to separate the two. Sometimes people think of that as slowing things down, the enemy of innovation. It's not easy to interpret this causally, but there's certainly no evidence that it gets in the way. If anything, at least among this set of selected innovations, it seems to be quite strongly positively associated with scaling.
I don't think there's time to go through all the implications of this, but to come back to the theory: there is a question of why you get such high returns when the private sector is nimbler; this was, after all, part of the U.S. government. I think it's largely the reason I mentioned at the beginning of my remarks. There are some innovations that are both privately profitable and socially useful; those are in the upper quadrant, and the private sector will invest in them. There are other things that are profitable but not socially desirable. There are things that just don't work on either dimension. And then there's a quadrant of things that are not privately profitable, or not privately profitable without at least some initial investment, but turn out to be very socially valuable. That's where the arbitrage opportunity exists, and I think that's also the area where innovation funds are most likely to be additional. So thinking systematically about what those areas are is useful. I think one of them is innovations in the delivery of government services. Another is innovations where there's not what Warren Buffett calls a moat, where it's very easy for subsequent entrants to come in. That's bad for a commercial investor, but it's great for a social investor, because it helps things scale. Thanks very much.
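The benefit-cost arithmetic described in the keynote can be made concrete with a minimal sketch: multiply people reached by the benefit per person for the subset of innovations you can value, then divide by the cost of the entire portfolio. All figures and the function name below are hypothetical stand-ins for illustration, not actual DIV portfolio data.

```python
def portfolio_benefit_cost_ratio(valued_innovations, total_portfolio_cost):
    """Benefits of the innovations we can put a dollar value on, divided by
    the cost of the ENTIRE portfolio (innovations we cannot value are
    conservatively treated as contributing zero benefit)."""
    total_benefits = sum(
        people_reached * benefit_per_person
        for people_reached, benefit_per_person in valued_innovations
    )
    return total_benefits / total_portfolio_cost

# Hypothetical numbers: three innovations with a defensible dollar value,
# given as (people reached, dollar benefit per person).
valued = [
    (10_000_000, 4.0),   # e.g. a health innovation: 10M people, $4/person
    (2_000_000, 15.0),   # a financial-benefit innovation
    (500_000, 40.0),
]
# Denominator covers all supported innovations, not just the valued subset.
ratio = portfolio_benefit_cost_ratio(valued, total_portfolio_cost=2_000_000)
print(f"benefit-cost ratio: {ratio:.0f} to 1")  # prints: benefit-cost ratio: 45 to 1
```

The conservative variants described in the talk, such as truncating benefits after 2030 or assuming a lower growth rate, simply shrink the benefit stream fed into the same calculation.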
[00:26:14] Rachel Glennerster: Thank you. Thank you.
[00:26:26] Amanda Glassman: This music is so relaxing. Okay. Thank you, and thanks for such an interesting update of the Development Innovation Ventures portfolio. We're joined today by a great panel; I think you all have their bios. Right now we have a sub sitting at the end, but he's far from a sub: he's the person who has been leading the Development Innovation Lab at the University of Chicago's efforts to work with governments to institutionalize some of these things. He'll later be replaced by Jose Carlos, who is formerly of the Ministry of Economy and Finance of Peru. But let's get started. You each bring a different lens to these issues. We're really honored to have [inaudible], a former Minister of Finance of Indonesia, a wonderful country that is extremely large, so it has that advantage of scale Michael was talking about, and is heterogeneous and decentralized. So tell us a little bit about how you've thought about these issues around innovation and testing inside government. What have you learned? What would you recommend to others?
[00:27:44] Muhamad Chatib Basri: Well, thank you very much, Amanda. Very good morning to all of you, and thanks for having me here. Well, you already stole some of my response, because you mentioned that Indonesia is a large, very diverse country. I think the main reason why scaling up evidence-based policy is so important for a country like Indonesia is that, for those of you not familiar with Indonesia, we are a country of more than 270 million people and 17,000 islands, very diverse. So there is no way we can implement one-size-fits-all policy. We need to develop pilot projects before implementing. And we were lucky, we are lucky, actually, because we got support from people like Rema and J-PAL, who helped us design these pilots. So let me give an example, Amanda, of what to me is really the big success of scaling up evidence-based policy: community-driven development. We call it the Kecamatan Development Program, the sub-district development program. The idea at that time was to alleviate poverty, but at the same time to strengthen local government by empowering people to participate in designing policy. And the result was quite amazing, in the sense that when we transferred the money directly to the community, it cut costs by about 25 to 30 percent. So when Rachel mentioned highly cost-effective interventions, this is one of the examples. We started with these sub-districts back after the Asian financial crisis. And during my time as finance minister, I integrated it into the so-called PNPM Mandiri, the community-driven development program, covering around 70,000 villages. This is one of the examples. I can talk about this issue later on.
The other thing we are doing now, together with Rema: when we are talking about how to scale up evidence-based policy, the most important issue is whether the government will adopt the policy or not, and that will be determined by institutions and politics. I always like to quote what Jean-Claude Juncker said: we all know what to do, we just don't know how to get re-elected after we've done it. That's the real issue, right? So how do you make sure the policymaker will adopt the idea? The thing we are doing now, to give an example, because it's still being piloted, is improving the targeting of cash transfers for social protection, in one district in Java. The narrative matters: if you tell the president we need to improve targeting, the politician will probably not be interested. So designing the narrative is very important. The way we do it, because I sit on the National Economic Council, like the Council of Economic Advisers to the President, is to convince the policymaker that if we can improve targeting, the government can save money, and if you save money, you can reallocate some budget to your own program. That's why the politician becomes interested. Then they say, oh, that sounds like a good idea, why don't you pilot it? The narrative is very important. Okay, Amanda, I'll stop here. I'll certainly be happy to talk more about it with you.
[00:31:43] Amanda Glassman: Absolutely. You have to make your benefits tangible to the decision maker, right? Well, Rema, you've worked in the same context among other places. Tell us a little bit about your view on these issues.
[00:31:54] Rema Hanna: Oh, no, thanks. It's very exciting to be here. I think these topics are so important, so I'm glad everybody's taking time out of their day to discuss them with us. I've been working in Indonesia for over 20 years, working hand-in-hand with the Indonesian government, in collaboration as well with the research institute, the J-PAL Southeast Asia office we built at the University of Indonesia 12 years ago. And the idea is that we try to think holistically about evidence use in the state. So we do a lot of training and teaching to demystify what evaluation is, what data is, and how they can be useful for you, because, again, the narrative is very important. The second thing we do is try to get results out there: we do a lot of conferences, a lot of one-on-ones with policymakers on what we know about the things they care about right now. And the third is evidence generation. The exciting thing about working in Indonesia is that there is huge scale. So let me give you a few examples of how the research has influenced the policy space. For example, on national health insurance, we worked with the government to test out different pricing models for demand. That led to the health insurance rates not being increased. A second thing we worked on was a national reform moving from in-kind transfers to basically food stamps. We did the evaluation on that. And as [inaudible] said, we were able to show that, for the same amount of money the government was putting out in transfers, just changing what you give actually produced a large reduction in poverty. As a result, this is another thing that has scaled nationally. We also worked on the targeting of cash transfers for many years.
And, in fact, some of the work we did on community targeting became the basis of the emergency transfer program during COVID, which provided cash transfers for 8 million people who were newly vulnerable but missing from the government rolls at that point. I'm losing track of all the things Pak Dede and I have worked on. Not only do you need to deliver programs, you need to fund them. We've been working in collaboration with the Ministry of Finance; we've done work on how to reorganize the Ministry of Finance and showed that a previous reform had very large results. That led to a recent reorganization of how firms are assigned to different tax offices. A lot of this work has been done hand-in-hand between researchers and the government, setting questions together and thinking through what could be done to make the world better, but also aligning with the interests of people in government in terms of their agenda.
[00:34:43] Amanda Glassman: Can you tell us what the institutional setup looks like within the Ministry of Finance, or within the government of Indonesia? Do you work with your counterparts at the university because you personally champion it, or is there an entity whose job it is to do this?
[00:35:02] Muhamad Chatib Basri: Well, in the case of our experience with PNPM, the community-driven development program, we worked with the University of Indonesia and also with J-PAL at that time. The cost-efficiency gain of about 25 to 30 percent is based on their evaluation. But your question, Amanda, about how to institutionalize this is a very important one, because we don't want to just run an experiment without it eventually being adopted by the government. The most important thing is how to institutionalize. So let me give an example with PNPM. When communities asked for additional budget, we required that they fulfill the criteria based on the evaluation, a study conducted at the time by LPEM with J-PAL. We used selection criteria based on that, so each part is embedded in our budget. Rema mentioned some of our projects related to tax as well. We found that if we did the tax administrative reform, it would increase revenue, and the Ministry of Finance adopted this policy by adding around 30 medium-sized tax offices. This is just an example: based on the pilot, we moved into policy. We put the evaluation into the process and made it part of government policy. So I hope I answered your question.
[00:36:48] Amanda Glassman: Sorry to insist, but have subsequent ministers of finance embraced that same kind of approach to budgeting or to policymaking?
[00:36:57] Muhamad Chatib Basri: Yeah, because our approach is that when we work with them, with colleagues from J-PAL, it's not only someone coming from outside; we work together with them and train them, as part of institutional development. That is why they have ownership. Let me be frank and honest with you: many of the policy recommendations made by multilateral agencies are first-rate, but the problem is they can only be implemented 25 years from now, because there is no ownership. In this case, the story is very different, because they embraced it and realized it is part of their own program.
[00:37:38] Amanda Glassman: Thank you. And welcome, Jose Carlos. We were just talking about how to institutionalize these kinds of innovation approaches, and units, into governments. So I'm wondering if you can tell us a little bit about the MEF lab. Not meth, MEF. Ministerio de Economía y Finanzas. How did the government decide to set up this model in Peru?
[00:38:11] Jose Carlos Chavez Cuentas: The main interest of the Ministry of Economy and Finance is to have information, external evidence, on its own programs and policies as implemented in Peru. In that regard, the MEF lab is an opportunity to create partnerships on the basis of that need for external evidence and the possibility of having a high-quality team of experts producing quality experimental evaluations. This is precisely how we set up a partnership, which then turned into the MEF lab. We started right away with an agenda, within a context where, as you will know, after the pandemic our country has done really badly in terms of poverty. Poverty increased by 10 points, and it is now 8 points above what it was before the pandemic. In Lima, the capital of Peru, the poverty rate went from 14% to 28%. Our conditional cash transfer spending is 2% of GDP; in a country that collects between 17% and 18% of GDP in revenue, that is a very significant program. The purpose was to extend conditional cash transfers to urban areas, because in Peru these transfers have traditionally been focused on rural areas, where we see the highest levels of severe poverty. Since circumstances changed, there was a need to define a program and a policy for urban areas. That is how we established the TPI, transfers for early childhood, in urban areas for women with children under 3 years of age. It's a significant investment that was needed. We received support through the MEF lab, where we work with the University of Chicago and their excellent team of researchers. This has been a very successful program, and the relevance of the program within this context is optimal.
As we have heard from academia, the role of the MEF is very important, because we have been investing in expanding the program, and it has been very easy to expand it randomly. This has been joint, synchronized work that allowed us to reach a large population with $10 million. We had an impact like we've never seen throughout Latin America. Is the lab still in existence? Yes, it's still working, because from the very beginning we made sure of it. Obviously, our culture has something to do with it, but we also wrote it into our legislation to make it a permanent tool.
[00:42:09] Amanda Glassman: I think that was also the case in India. In other cases, there are also sectoral labs that maybe we can come back to later. Now, let's turn to the global level. Sasha, it's so great to see you again. Of course, scaling could be public or it could be private. When we think about government innovation, how does an open-tiered external fund work for that purpose?
[00:42:38] Sasha Gallant: Sure. Michael described it quite thoroughly in his opening remarks, so I won't dive into it, but ultimately it served, within an aid agency, as a kind of research and innovation arm. Really trying to find promising ideas, find those with more potential, rigorously test those, and ultimately find those that are highly cost-effective and have potential to scale, and try to push those out the door to scale. We've talked a bit about the focus there on evidence of impact and cost-effectiveness, but I really do think it's important to focus on the potential-for-scale side of things and on sustainability at scale. For interventions that were designed to scale through the public sector, we spent a lot of time as funders really deeply engaging on the relationship between the researchers or implementers and the government partner themselves. I think that's a key indicator of success, especially when thinking about really bringing something to scale. It's one thing to have a good or interesting idea. It's a totally different thing, and you all have spoken about this as well, to design an intervention that can actually and realistically be taken up by public partners, and to think about that in terms of feasibility, likelihood of implementation, fidelity, and cost, as well as these questions of buy-in. How do you make sure this isn't a one-time exercise where you've brought evidence in and checked a box, but that you're actually integrating it meaningfully with an institution? I think as a funder, as much as we were interested in new and compelling ideas, there was always at least as much weight on policy relevance. How are we making sure that the question being posed and the research being designed are actually helping solve a problem that the people charged with addressing social problems are actually trying to solve? It seems like common sense, but that hasn't always been the case.
Absolutely, and so in 15 years, they've supported over 300 awards, including over 150 randomized controlled trials, many of which were asking not just does something work, which is a critical question, but how does it work? And what happens when it worked somewhere, but you're trying to bring it somewhere else? What are the different kinds of nuances or contextualization that are required? Really having time, not just for the big question of what is the idea, but for what can make it stick, and really understanding what has impact in people's lives. And so in our portfolio, we had some tremendous examples of that, including some of Rema's work, certainly in Indonesia. But then thinking about, as I'm sure we'll talk about, what it means not just to do this in... Michael touched on this, right? The direct grant exercise of doing specific, targeted evidence generation or evidence use in a country context, but also thinking about how you help mainstream the process of evidence use into an institution, right? Be that a ministry or a bilateral, and really make that part of the water.
[00:45:47] Amanda Glassman: Yeah. Say more about that. How do you influence institutions that work with countries, such as a multilateral development bank, for example?
[00:45:58] Sasha Gallant: So I think a lot of us have spent a lot of time trying to crack that nut. I don't think there's a straightforward answer by any means. I mean, some of it is just getting really clear about what we know works, right? And finding ways to figure out what works in different contexts. And then some of it is figuring out what any particular institution is well placed to push forward, right? This came up in the conditional cash transfers that Michael raised. There certainly was targeted research, but there's also the question of what the role of a multilateral is, right? The IDB or the World Bank, not just in understanding the evidence, but in technical assistance provision, or in thinking about where else something can work, right? The replication exercise. In terms of influencing an institution, I think some of that is the same as with government. It's about buy-in in a real way, right? There's a real need for leadership investment. But I think also, as is coming out here, there's a need for savvy people throughout the institution who aren't just bought in conceptually, but who understand how these goals meet the core goals of the institution, and why it makes sense, again, not just to bring a researcher in, but to really make them part of the work that's going forward. And I think there's also a real need to understand seemingly boring things like procurement.
[00:47:33] Amanda Glassman: That is not boring, Sasha.
[00:47:37] Sasha Gallant: Like, how do you actually design a solicitation or design an award in a way that allows a partner, like a strong researcher, to be part of it? That brings the right implementer into the room. And I think, like, a lot of our lessons, which I'm happy to dig into, we certainly offered some, like, funding incentives and other ways to bring people in. But a lot of the value that I think we saw was about time. Really investing in the same way as you've been going to Indonesia for 20 years, right? Like, really sitting with the people who are doing, who are thinking about the scope of an award, doing the award design, building a tech panel, figuring out who, like, actually gets in the room, right? And really kind of getting into the weeds of reshaping what works. Yeah.
[00:48:24] Amanda Glassman: Let me ask you, you gave some great examples of using the financial incentives that you hold as a ministry of finance. How did you finance Rema's work? Or how did Rema finance her own work? How did that work? You've probably worked together on a number of areas and projects. But tell us a little bit about the financial setup.
[00:48:47] Muhamad Chatib Basri: Well, in the case of the PNPM, the Kecamatan Development Program, it was initially funded by the World Bank. And later on, when we saw the results, of course, I believe that some other institutions... I don't know whether maybe you were also involved in this Kecamatan Development Program? I think a long time ago. It was started by the World Bank. And then later on, when we saw that the results were quite satisfactory from our side, we adopted it into our budget. Excellent.
[00:49:20] Amanda Glassman: I think I was getting to that point. Thank you so much. I think that's powerful. You know, and I guess another to you, Jose Carlos, how was it financed within the MEF? Did the University of Chicago bring their own money? Did you allocate budget money? Did an external partner help with it? How did it work?
[00:49:44] Jose Carlos Chavez Cuentas: Both. The University of Chicago financed their own researchers, and MEF financed its own interventions, such as data collection and so on. The more significant expenditures were about $10 million, related to the type of program and its magnitude. MEF is doing more as well. For example, we are doing assessments in education, and, given that women are participating more in the labor market, we are looking at exchanges of labor for care. So we have researchers who help us leverage all the resources. It helps that everything is institutionalized at the MEF lab, and this helps us leverage resources and opportunities through cooperation. The cooperation was already defined. In Peru, we have been working a lot on gender issues, and this has been giving us good results. We also need to make efforts to work together within cooperation opportunities. There are big opportunities in Peru. The IDB and the World Bank, through loans and technical assistance, have been working on modernizing the state. The MEF lab is a good instrument that deserves to be supported through loans, to help us have better resources so that we don't fall into a situation where we have budgeting gaps. This hasn't happened yet, but if it were to happen, we could react to that risk. With experimental evidence, we have taken significant steps, because, in general, solutions in Peru, and in Latin America in particular, require a significant push. And this push can come from five or six large programs per country over the course of five years, through external cooperation. That would be a great opportunity, and there is a concept already written at the ministry. I think this is a great opportunity. I also believe that the MEF lab, specifically, and this is a political issue, can work with policy. When a president or Congress is committed to a program and its growth, we can tell them, well, this is unavoidable.
And if they're asking for the growth to be from one to ten, we can say, sure. We can negotiate over time, over the course of four years, and say, well, you can make it grow by five, five can be turned into 15, and so on. This is when there is an imminent expenditure politically. That 5% in the next generation, with the next government, can allow us to say, well, we have high-quality data on what is actually happening. At the same time, there are programs that do not have political commitment because they are only three- to five-year programs. The MEF lab can tell them this is working on its own. It has inertia, so let's expand it. We have to have strong evidence so that after three to five years we can test whether it should continue or not.
[00:53:45] Amanda Glassman: [inaudible] evidence, so you scale when you know if it works and how much it costs. I think that's a great example. Rima, tell us a little bit about how you see this institutionalization issue from the perspective of a researcher, what challenges are you facing?
[00:54:02] Rema Hanna: I think this is very important. It's funny, I was just thinking, when I first started my career, I was told that I should stop what I was doing. You're too policy, you're not economics, and you can't write interesting economics papers if you invest this much. On the flip side, I think that was actually really wrong advice, and I'm glad I completely disregarded it. I do think that if you really want to make a difference in the policy space, it really is about building relationships. It's about listening, it's about hearing what are the right questions people really need answered. How can you be helpful? It's about building trust. Some of that is spending a lot of time, like Pak Dede talked about: not only do we work with our partners, we do training on the methodology so people understand what we're doing, we demystify it. It becomes something that we can discuss and debate. I think this kind of institution building really matters. The institutions help because people in the government change. I'm on one project where the person in charge of the agency we're working with has changed five times over the course of two years. People change. The institution allows the work to outlast any given person and builds it into the workflow of the agency. I should say I'm also glad I disregarded the advice because, for the academics out there, I think I actually studied much more interesting, much more relevant research questions by listening, by thinking about the policy and not just the theory. I think it can be a win-win for all. Absolutely.
[00:55:41] Amanda Glassman: We're now at the almost time of Q&A with the audience, so prepare yourselves. Maybe I'll do one more round. Tell us about a recent interesting project where you've worked together with others to take a look at how evidence could feed into a scaling decision, for example.
[00:56:00] Muhamad Chatib Basri: The project that we are working on with Rema now is an RCT on tax reform. We try to look at the allocation of labor: how the allocation of labor, and also salary incentives, will improve government revenue. We try to look at the rate of return from putting additional staff in the tax office and also providing an incentive. This project is ongoing. The other thing is that J-PAL is also helping us. We are doing a pilot project on improving targeting, especially for the cash transfer, through a digital portal. This is the thing that we are working on. But if I may, Amanda, I was talking about the success stories, but not all of them are successful. That's very important. Allow me to talk a little bit about what went wrong. I have to confess, we made a mistake. It is stated in our constitution that 20% of the budget should be allocated to education. We allocated this budget. We gave the salary to teachers. One day, I read in the Quarterly Journal of Economics that this policy doubled the salary for teachers but ended up with nothing, because we didn't pilot the project. We just provided the salary without conditionality. So we also learned from our mistakes. Not only the successful ones; this is also very important.
[00:57:38] Amanda Glassman: Absolutely. The cautionary tale is actually just as powerful as an example of what works. Sasha, did you want to say something? You mentioned procurement. Do you want to talk about that more? I love procurement.
[00:57:54] Sasha Gallant: We could spend a lot of time on procurement and what it takes to design it well. One thing I would say on procurement is that in a lot of institutions, be those ministries or funding institutions, once procurement rules are written, they are treated as the law of the land. But actually, they were written by people, and they can be assessed and reassessed. There often are different and interesting and creative ways to work with available procurement tools, or to raise new ideas and see how to get to the solution that we want. I think often form and function feel quite bound, and trying to spend time thinking about what it is we're trying to achieve, what the critical inputs are in order to get there, and then how we design a solicitation or an award or a contract with the critical shared outcomes in mind is quite central. That, I think, also means buy-in, not just from the people who are bought in on RCTs overall as a way of moving forward, but really having the right people. Some of that is your contract officers and some of that is legal. Really having thoughtful advocates alongside who understand what is possible when we're all quite aligned on what the institution is trying to do. Then these kinds of things, which seem quite different from the typical way of doing business, actually are quite feasible, again, when there's alignment on the point, which is figuring out what works, making sure that we have an open enough aperture to not just have ideas from within, but make sure that we're getting ideas from the outside, from the right partners, and then being able to build that, both conceptually and really tactically. I will say USAID's DIV was not the only tiered evidence fund, and we helped see this replicated in the Fund for Innovation and Development, which is now operating in France. We've seen this in multiple other countries.
We invite anyone who's interested to talk to us; we'd be excited to help anybody who's thinking about doing this. It absolutely takes buy-in, and it takes these different kinds of people, but I will say I've seen funds like this set up in a matter of months with the right people involved, and I've seen them start with a million-dollar budget, pulled together from a bunch of different places across an agency, and ultimately build to a $40-plus million annual fund, as it becomes clearer and clearer that bringing in evidence is not a niche objective, but actually meaningfully informs the larger work that an institution is trying to do. And hopefully, to your first point, right, how do people get re-elected afterwards? Maybe because they're able to show some results in a meaningful way that we can really hang our hat on.
[01:00:54] Amanda Glassman: Yes. I certainly hope so. Okay. So, do you want to add a last point? Or should we go to the audience?
[01:01:05] Jose Carlos Chavez Cuentas: I simply want to say that I fully agree that all of this has to come with administrative capacity development, not only for the people who know about evaluations, but also for those in charge of the leadership, who know the topic very well but have to handle many other things.
[01:01:30] Amanda Glassman: I want to ask you how you work, given that you have a lot of decentralization in Peru and in Indonesia. How do you use that state power and how do you, as the central government or the federal government, do this kind of collaborative testing or get different states to try different things? But we'll come back to that. Okay. To the audience. We'll take three questions and then go back. So, we'll start here. Please say who you are. I think there are people listening online, so.
[01:02:08] David Alsate: Thanks so much for this discussion. Super interesting. I'm David Alsate. I work at GiveDirectly. My question has to do with innovation and scale in places with even less fiscal capacity than governments such as Indonesia and Peru. I'm thinking of a lot of places in sub-Saharan Africa where the need is just as big or greater, but the governments just don't have the administrative or fiscal bandwidth to take up an innovation and deliver it nationally. And now, with the collapse of aid and a lot of dwindling resources, how should we think about it? Should we think of it just as philanthropy stepping in and trying to give billions of dollars in these places to scale things up? Or should it be more than just that?
[01:02:52] Amanda Glassman: The answer is yes to that. But we'll come back to the rest of your question. Okay. Right there in the middle.
[01:03:00] Tania Alfonso: Good morning. Buenos dias. Tania Alfonso, formerly of USAID. And before that, I was the country director for Peru for Innovations for Poverty Action. I'm very curious about the transition between something being a pilot and something being taken to scale. How does that change? You mentioned a little bit in Indonesia, there's a little bit of targeting. There was a case in Peru of the transition from rural to urban cash transfers. Sasha, you mentioned things about contextualization. What actually happens? How do you do that? How do you think through it? What is the evidence showing that this needs to change and this shouldn't change? What is the process that something goes through as it changes from a pilot to a scale model?
[01:03:48] Amanda Glassman: Out there in the back.
[01:03:57] Gonzalo Meda: Thank you for the panel and for the introductory remarks. My question is similar but a little bit different. My name is Gonzalo Meda. I'm currently an education consultant working for the IDB. My question has to do with what role you see for implementation science in your labs and in your roles as researchers and policymakers. By this I mean the intention and the commitment to continue assessing, to continue generating high-quality data, and to continue generating new evidence as you scale, to see whether there are voltage effects or not. And, assuming you believe that's important, how can we better align the incentives, both from the policymaker perspective and from the researcher perspective? Because it's not the same to publish a paper with an impact evaluation as to publish a paper with implementation variables, and that makes a huge difference for generating evidence. So that's my question. Thank you.
[01:04:58] Amanda Glassman: Great, great question. Okay, that's a good collection for you. So the first... Do you want to start?
[01:05:07] Jose Carlos Chavez Cuentas: Well, in a context of scarce capacity, I would begin by doing a MEF lab. I would work with the finance ministries. The challenge we have in Peru, and in countries like it, is that there are 1,800 local governments, 1,000 of which have fewer than 5,000 inhabitants. And the people in the territories always say that they have to start from some starting point, from the finance ministry, which might help estimate, at a minimum, the profitability of the initiative and so on. So I would do the same: I would start with the existing capacity at the finance ministries and then build upon that. That will ensure sustainability, as in the case of the MEF lab. MEF is not going to invest $10 million and then abandon the scaling up. That wouldn't be logical. So it's a very different context. Now, the process of how we came to this point, starting from rural poverty: it's more a collective understanding among the influencers and decision makers at finance ministries. The Ministry of Finance in Peru created these results-based budgeting frameworks starting in 2007. And I'm not going to get into any polemic about things like what happened in Chile. But it didn't start from measurement or management or assessment or the art of evaluating. It started more from the point of view of developers, because of the reality in Peru. We had health experts, not just poverty experts. And from that point on, working within the Ministry of Economy, knowledge arose regarding infant development and poverty. We followed a whole journey: things happened and then they got stuck. Then we had anemia, then the pandemic. And then we had the urban situation, where people have extremely low rates of prenatal checkups, for example, and healthy-child checkups are very low even though they have been mandatory since 2007, and a lot of resources have been invested. So we talk about that and we dialogue about it as a group.
And why the need for scaling up? Urban poverty went up 10 points. In the presumably richer cities, poverty is 28%. In Callao, the port, it's 37%. And it's not even concentrated only in Lima; it's distributed all over the place. So we have a severe problem. So in the designs, we began to see that we needed a response to all of that, for which we needed to randomize everything to answer all the questions. That's as far as the scaling up. As for how to align with the politicians or the policymakers, I think it requires a political game, because they have their political commitments as well, and I think that we have to work with that. We cannot reject those. We just offer them the information related to their commitments, showing them with scientific evidence that things are not working, that we could all believe we're doing things to help change, but nothing is happening. So there's no re-election. That's the case in Peru. And I think that's what we should be doing: give them quality evidence so that they can make the right decisions. Politicians are interested in some things and less interested in others. So we let the things they might not be interested in go by, and we stress the things they are interested in. Thank you.
[01:09:09] Rema Hanna: I think these were really great questions, and particularly what I wanted to touch on was the science of scaling, because I think it's completely, completely understudied. And I think it's incredibly important, because these programs at scale, as my colleague Ben Olken likes to say, are not Xerox copies. A program is not necessarily going to work the same way. You learn things from the pilot. You have to adapt to the larger scale. There are many things going on. I think it goes back to the fact that we don't want to just do the program evaluation; we want to understand why. So let me give you a quick example. I'll be very fast. Maybe about 15 years ago, we worked with the government of Indonesia on a pilot to look at community-based targeting. Basically: do you use data, or do you use community input, to decide who gets transfer programs? So in the first project we did, we created a very tiny little transfer program, and we showed the trade-offs between the proxy means test, the data-driven approach, and the community approach. We were discussing the results with the government, and the next thing they said was, what happens if you do this for a real program with much more money on the line? So we did a second experiment, which was the first scale-up, actually, where we incorporated a randomized experiment to compare the data-driven approach versus the community-driven approach in the expansion decision for the conditional cash transfer program. Lots of money on the line. And then, based on these previous pilots, during COVID the government of Indonesia expanded the program nationwide to all rural areas, using community targeting to fill in the gaps, the people who were missing from the transfer system, and so it was given to 8 million people over a couple of months, very quickly.
And so we've actually been analyzing this data, and maybe we should have done this before we started all of this, but we hadn't really thought so much about the science of scaling. We're analyzing the data now, and in the scale-up, we're seeing quite similar results to the original two pilots. We're trying to dig into why, and I think one thing that was very useful is that when we did the original studies, we had spent a lot of time trying to understand why. We looked at how community preferences got aggregated up. We looked at governance issues: what is the number of people you need in the room to make sure that there is good enough governance? And we looked at heterogeneity across regions with more and less institutional capacity. We tried to really dig into why, and so it wasn't only the program evaluation that helped with the scale-up. It was some of the lessons learned from the whys in the original pilot that helped drive the design of the final scale-up, even though it looked different from the original pilot, just because of the nature of scaling up. And so I think we actually don't do enough to understand the implementation science around scale-ups. I'm so excited about these ideas of development labs, and as we think about them, I think thinking about how we learn how things work, how they can apply to other settings, and how they can scale up more broadly is going to be an important component.
[01:12:17] Muhamad Chatib Basri: Perhaps let me try to respond to the question on the stages of the scale-up. Back again to the story of this PNPM, the community-driven development program. It was back in 1998 to 2001, which was the piloting period at the time. And it turned out that this program was very effective, in the sense that it would cut costs by about 25 to 30 percent. And then we found out that it is more efficient to transfer the money directly to the community rather than to the local government, because if you give money to the local government, there is a risk of corruption, et cetera. So if you send the money directly, that's more effective. After that, when we realized this project was quite satisfactory, we did the horizontal scale-up from 2002 to 2006. We introduced the so-called urban poverty project. The most challenging thing at that time was training the trainers, the facilitators, because they needed to support people in the rural villages. Once we saw that the results were quite satisfactory, we adopted it into the national budget. We fund it with the national budget. But this could happen because of the enabling conditions as well. Why? Because of the strong commitment from the political leaders. Sometimes we economists make a mistake: if we try to come up with ideas and the politicians don't buy in, we always blame them. I think it's supposed to be vice versa. How do we incentivize politicians to support our idea? The question is, what's in it for them? For them, it's very important, because if we can alleviate poverty, then it becomes very popular. So this is the thing that we probably need to address. Adaptive design is very important. Evaluate, adapt, and then scale up. But the challenge is, unfortunately, this program ended back in 2015 when the new government came in and introduced the so-called village fund. I have to say, honestly, that this village fund program is okay, but it's more bureaucratic now.
And then the possibility of innovation is relatively slower compared to what we had. But this is always the challenge of policy: the political cycle, the turnover of the bureaucracy, new governments coming up with new ideas. So I hope I answered your question.
[01:15:07] Amanda Glassman: Iteration. And what about the question of publication for researchers working on these sort of implementation experimentation or adjustments during implementation?
[01:15:20] Rema Hanna: I think, again, it goes back to this thing. I think the world has changed. As I said when I started off, I was told so many times, you're just a policy person, you're not a real economist, and you're never going to publish in an economics journal. Now I'm an editor. So I do think how we think about academic publications has also changed. I think people want to understand why things work and how they work, and understanding the implementation is an important part of that. As we're thinking about the academic field and how we give incentives to people who are coming up in the field right now, I think it's on us to say that if we want work that really makes a difference, we need to judge work not just on whether or not it's a pie-in-the-sky big idea, but on whether or not it is something that could be implemented, and try to understand when and where. But, you know, the problem is, I must say, just to put it out there again, and this is why I like the idea of a lot of these development and innovation labs, it's much more expensive to run a pilot where you're trying to understand why. You need a bigger sample, and you need to collect more variables than if you just run a program impact evaluation. But there are huge benefits, because not only do the partners learn, you learn a lot about the implementation, you learn about possibilities for scale and scope, you learn about external validity. And so I think it's both the academic side of understanding that this is okay, it's rewarded nowadays, and people want to know if these things actually work, and also the funding side: sometimes a little bit more investment in the original pilot. I know we're all going for cost-effectiveness.
I'm really trying to think about cost-effectiveness on pilots, but actually making them a little bit richer in terms of data collection, in terms of not just trying out one thing, but trying out a few things, we actually learn a lot more in terms of, you know, as you do these adaptive things and learn in scale. And so I'll put it out there for the funders.
[01:17:20] Amanda Glassman: I'm going to go to... do you want to make a quick point, and then we'll go to Sasha?
[01:17:24] Muhamad Chatib Basri: I just want to add, Rema, you know, publication is one thing that's very important, but the policy implication is also very important. That is why we published the paper in the AER, but I said to Rema, we probably need to disseminate this idea in human language, right? That's why we wrote an op-ed, et cetera, so the policymakers will buy in, will buy our idea. It goes beyond the academic publication to something that will be implemented.
[01:17:58] Amanda Glassman: I hope you'll respond. And then also, can you say something about, like, how fast can a pilot be? How fast can you go through the different gears? How does that happen?
[01:18:08] Sasha Gallant: It depends. Is that a helpful or super unhelpful answer? But I think there are a lot of questions that can be answered quickly, and there are some that just take time, right? Sometimes they take time for a lot of reasons. They take time because of the academic calendar or harvest seasons, but they also take time because you don't just want to know if it worked two months later; you want to know if it still works two years later. There are reasons that certain things take time, but there are also, I think, much more implementation-related questions where you actually can get some meaningful data quite quickly. And I will join you in your charge to funders here and say, you know, even in a lot of what is funded at the Stage 3 level, which is about catalyzing scale, there often was really deep evidence generation, because part of what was happening was replication. And I haven't seen a Xerox copy yet that works precisely, right, where you just pick it up and plop it over there. It's about contextualization, but it's also about optimization, right? How are we making sure the thing works as effectively as possible? And so I think, as you said, Pak Dede, yes, the academic output matters, but funders should both incentivize that, recognizing the incentives within the research structure, the academic structure, and also really put energy and funding behind the policy outcome, right? And then having the grants structured in a way that's really thinking about engagement with policy actors, along with understanding what happened at midline, has a lot of value, and I think those incentives can really carry. I also think that thinking about these things at the same time is usually less expensive than thinking about them after you've done something.
And so thinking about, like, what is the right-sized amount of learning to be done at these different stages is also really important, right? Like, you can't learn absolutely everything, typically, in any one go, but figuring out what the right level of rigor is, and having that not be an isolated question that the PIs are asking independently but, like, building that with policy partners, and also talking to other implementers, right, who have done this work and have learned from it, and talking to other funders or the Bank, who have seen things happen in multiple country contexts, and trying to figure out how do we build from different contexts, right, as we're trying to learn as much as we can.
[01:20:22] Amanda Glassman: In fact, there's an event on Friday at the World Bank that will explore some of these same kinds of issues, and I'll be sharing a little bit about what the IDB is doing in this space at that time as well. And I do recall a working group here at the Center for Global Development that looked at how to, you know, create incentives for operating agencies and partners to take these kinds of actions. So, do you want to say a final word?
[01:20:51] Sasha Gallant: Let me just add to David's point; there's a lot there in your question about the role of philanthropy, which I think we'll all be contending with for a long time. I do think one of the questions is also, like, what is the kind of engagement that a researcher or an implementing partner has with a government partner? And I think there had been a narrative for a long time that was kind of like, the implementing partner does it, and then it shows the government how to do it, and then, you know, hand-off, et voilà, right? But much more often, the success stories that I have seen have meant a partnership in a real way, right, and figuring out what the right level of partnership is: not owning it for the government in any way, but also not disappearing. I think it's the role of implementers, it's the role of researchers, but I do think there's something on the funder responsibility side there as well, of recognizing the value of that kind of ongoing relationship, of how much further you're able to go because you actually have been working with similar institutions for 20 years, right, and what that means in terms of trust, what it means to be somebody who's able to answer the call when a new question comes up. Like, these things are not irrelevant, and innovation can be understood independently to a degree, but there's also some human element there.
[01:22:02] Amanda Glassman: Yeah, absolutely. Okay, we're now at 11:30. I'll take three. Okay, how about a rapid round, two-second interventions, and then we'll do your last two seconds in response. In the back.
[01:22:20] Michael Kremer: Thank you. Yeah, I just wanted to return to this question of the role of relationships and of ongoing engagement. I think that's all very important, and I agree with it, but I think there's also a role for pilots as well, and these are really complementary. One thing I'll note: I mentioned in our data there was a 25% annual growth rate between 2019 and 2024, and many of these programs came from things that were initially pilots. There wasn't initially all of that government buy-in, but that developed later. Now, of course, that does require the buy-in and the engagement later, and that's very important, but there may be a role for pilot funding as well. It's a little bit like the medical analogy: there's an initial trial to see, does this work in ideal conditions, and then later it scales. And if I think about the thing that was actually by far the largest scaling, that was deworming. The grant that DIV gave at that time was in response to interest from Indian state governments, but the initial program was an NGO program with 75 schools; the government gave permission, but there was no level of government buy-in saying they wanted to scale it up. So I think there's complementarity between funding at different stages.
[01:24:06] Speaker: Hi, I just wanted to first echo what Michael said. I think that there's an important complementarity between these two schemes. Having worked a lot with governments and inside governments, you realize that there may be many ideas that simply aren't possible, or are very difficult, to start from within the government, and having a pilot first outside, where there's more flexibility, can help a lot. I've seen it, and I've also benefited from funds like FID and DIV; we've been able to work on new things that probably wouldn't have worked if they had had to start from within the government. And I wanted to mention one more thing regarding scalability, and that's why I think there's a complementarity, because on the other end, just the fact of starting a pilot in the government will by itself give you a bit more scalability. It won't solve everything; there might be issues with general equilibrium effects, there might be some other issues, but just being in the context of the government, working with the staff of the government, with the rules of the government, that helps scalability by itself. So I think that's also important; that's why I think both are good complements.
[01:25:38] Amanda Glassman: Absolutely, last comment, please.
[01:25:45] Chris McCray: Hi, Chris McCray. So I was wondering, do you have any cases of scaling up with teenage youth? I mean, I know women's empowerment is really important, and most of the RCTs I've studied have to do with women building communities, but have you got anything scaling with teenage youth?
[01:26:05] Amanda Glassman: Many, many things. I think maybe you can mention...
[01:26:07] Rema Hanna: Michael, are there projects that scaled up focusing on youth unemployment and such?
[01:26:16] Sasha Gallant: Employment specifically, but certainly, I mean, looking at different ways of engaging youth, I mean, if you're thinking about education programs, certainly, and not just thinking about... I think there's certain adaptations that have also been looked at of programs that are oriented toward younger students, but also, are there ways that youth can be involved in actually helping to scale and engagement in terms of implementation? There are interesting mental health programs that are underway quite focused on this population, and a series of others. I'd be happy to walk through a portfolio of stuff that we can probably connect to the people looking deeper at this.
[01:26:50] Amanda Glassman: I've also been thinking about the work of Chris Blattman, who's working in Mexico, looking at how young people get involved in crime, for example, and how to prevent that, what might be successful. I think one area that we didn't touch on in this conversation is that lower-right quadrant from Michael's framework: private sector innovation that has high social benefits but is not necessarily profitable. Actually, I'd say in the MDBs there's a lot of that, and DIV had a window for this. There is financing for private sector innovation for development as well. So I think that's an interesting area; maybe we'll do another panel at another time. But thank you so much to our panelists for an interesting discussion, and to you as an audience. Thanks.
[01:27:41] Rachel Glennerster: Thank you.