Bill Savedoff is a senior fellow at the Center for Global Development where he works on issues of aid effectiveness and health policy. His current research focuses on the use of performance payments in aid programs and problems posed by corruption. At the Center, Savedoff played a leading role in the Evaluation Gap Initiative and co-authored Cash on Delivery Aid with Nancy Birdsall. Before joining the Center, Savedoff prepared, coordinated, and advised development projects in Latin America, Africa and Asia for the Inter-American Development Bank and the World Health Organization. As a Senior Partner at Social Insight, Savedoff worked for clients including the National Institutes of Health, Transparency International, and the World Bank. He has published books and articles on labor markets, health, education, water, and housing including “What Should a Country Spend on Health?,” Governing Mandatory Health Insurance, and Diagnosis Corruption.
On reading “Global HIV/AIDS Policy in Transition” in the June 11 issue of Science, I was reminded of Al Gore’s catchphrase for global warming (“An Inconvenient Truth”) because the authors – John Bongaarts and CGD Senior Fellow Mead Over – openly confront a very uncomfortable fact: money spent on treating AIDS patients saves far fewer lives than money spent on a wide range of other urgent health interventions.
This is not news. I remember over a decade ago when I first saw cost-effectiveness calculations for anti-retroviral treatment compared with bednets, beta-blockers, tobacco cessation, or promoting use of condoms. While cost-effectiveness calculations are definitely not the sole basis on which to allocate resources across the health sector, they do give us critical information about the tradeoffs we face when managing constrained resources. And in most low-income countries, despite the past decade’s increase in foreign aid, health spending is still highly constrained.
My reaction was to play the ostrich and pretend it was someone else’s problem. I imagined that resources would go to prevention as well as treatment and we would muddle through. Besides, there were a number of comforting arguments to qualify the cost-effectiveness calculations – HIV/AIDS advocacy was helping to increase the total resources going to global health initiatives, negotiations were bringing down the prices of anti-retroviral drugs, the ability to offer treatment provided an incentive for testing and counseling, and so on.
But I never imagined the magnitude of the HIV/AIDS response (analyzed and documented by CGD’s HIV/AIDS Monitor, among others) – still too small to fully address the problem and yet, at the same time, overshadowing and putting stresses on other parts of the developing world’s health systems.
But Bongaarts and Over neither flinch at the inconvenient truth nor use it simplistically to argue against treatment. Instead, they call for a more balanced perspective that considers the ethical imperative of applying funds where they can do the most good alongside the need to uphold commitments to those who are already receiving treatment. This is a realistic call to “protect and expand resources for the most cost-effective health interventions, focusing on HIV prevention, childhood immunization, malaria, tuberculosis, maternal mortality, and family planning,” as well as a proposal for a coherent HIV/AIDS approach that “preserves recently achieved mortality reductions while lowering the annual number of new infections to less than the annual number of AIDS deaths.” (This latter phrase is what Over has termed “The Global AIDS Transition,” showing that the epidemic cannot be understood without simultaneously considering the relationship between new infections, treatment, and the mortality rate.)
Putting our heads in the sand may be an effective way to avoid inconvenient truths, but it’s an ineffective way to save lives.
Shout-out to Duncan Green and Oxfam for commenting on our new book and calling, like Nicholas Kristof, for pilots of COD Aid. Best of all, Duncan noted (as have several others, including Owen Barder in this note) that many of the usual concerns about COD Aid (see our FAQs for some) apply as much or more to other forms of aid.
But on one big point we disagree: It’s not true that COD Aid has been tried before.
We like other incentive-based programs such as Output-Based Aid (OBA) and the European Commission’s MDG contracts, but they differ from COD Aid in important ways. For example, OBA is paid to specific service providers, while COD Aid addresses broader policy issues at the country level; and MDG contracts involve high-stakes pass/fail conditions, while COD Aid is based on an incremental indicator that is independently verified. In developing COD Aid, we built on lessons learned from these approaches (for more on evaluations of these approaches, check out this report on EC MDGs, this book on Output-Based Aid, and this book by Levine and Eichler), and we discuss how our approach differs on pages 36 to 38 of our book.
We thank Duncan and his colleagues and hope they continue to ask tough questions about the merits and impact of this and other aid approaches.
Lawrence Haddad is the Director of the Institute of Development Studies at the University of Sussex in the UK. In a recent blog post, he poses several challenges for the new UK government on development.
Here’s my take on how Cash on Delivery Aid (COD Aid), an approach the UK Conservatives endorsed in their international development green paper, might address some of Lawrence’s challenges to the new government (using his numbering):
2. How to reconcile the learning and the accountability sides of the new emphasis on impact and value for money--they do not often work hand in hand?
With COD Aid, funder and recipient agree on a measurable outcome, and COD funding is disbursed only after the recipient demonstrates measurable progress. The recipient has full discretion in the use of its own domestic budget and the COD funds to maximize progress on that agreed outcome (impact) at minimum cost (value for money – for the recipient, and by extension for the outside funder’s fungible contribution). Over five years, it is the recipient who “learns” how to maximize outcomes and minimize costs.
3. How to make sure that the greater accountability of aid-dependent countries to donors does not detract from the accountability of those countries to their citizens?
This is the key dilemma for all outside funders. Cumbersome reporting and implementation requirements by donors have the potential to detract from the accountability of countries to their citizens. COD Aid is designed to resolve this dilemma. With COD Aid the recipient agrees to make public to its own citizens the COD contract, its report of annual progress against the agreed outcome, and the independent audit. There is no additional reporting to the funder.
5. How to fix the broken feedback loop in development (citizens in aid-receiving countries cannot hold donors to account) --are there practical ways of doing this?
See the answer to #3. With COD Aid the citizens have simple information about what their government has agreed to accomplish with foreign aid (e.g. educate children, reduce mortality) and information about progress with which to hold their own government to account for its use of outside funds. With such an arrangement, the accountability of donor governments becomes secondary, limited to the outcome they have agreed to pay for.
7. How to communicate the case for aid in a more authentic and grown-up way?
Tell rich country taxpayers what their aid agency is paying for. Hypothetically it might sound like this: “In 2012 the Government of Malawi reported to us that 30,000 additional children completed primary school and billed us $6 million in line with our agreement to pay them $200 for every additional child that finished primary school and took a competency test. Our independent audit verified their reported outcome. By the way, the average test score among children who finished primary school rose slightly.”
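The arithmetic behind that hypothetical invoice is simple enough to sketch in a few lines of Python. (The per-child price and the completion count are the invented figures from the example above, not real program data.)

```python
# Figures from the hypothetical Malawi example above (not real data).
price_per_completer = 200       # dollars per additional child who finishes
                                # primary school and takes a competency test
additional_completers = 30_000  # outcome reported and independently audited

# The funder's total obligation is just price times verified progress.
payment = price_per_completer * additional_completers
print(f"COD Aid disbursement: ${payment:,}")  # prints: COD Aid disbursement: $6,000,000
```

The simplicity is the point: with a single published price per unit of outcome, any citizen or taxpayer can check the bill.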
In a recent blog post, Evaluation Gap Working Group co-chair and CGD senior fellow William Savedoff explained the motivation for the workshop, highlighting the heightened need to address the ways in which evidence and policymaking interact, as well as the importance of continued improvements in evaluation systems in determining the efficacy of social interventions. We were thrilled with the positive responses from the workshop’s participants and audience members, and invigorated by the discussion and recommendations that arose as a result of the event.
The importance of well-designed and delivered research on individual social programs was a central theme of the day-long seminar. 3ie deputy director Marie Gaarder called for researchers to evaluate the overall causal chain for CCTs and health outcomes, and introduced a special issue of the newly-launched Journal of Development Effectiveness that focuses on CCT programs. Exploring ways in which researchers seek to “open the black box” by going beyond measurement of average effects, Gaarder noted the importance of analyzing heterogeneity (for example, how contraceptive use changed more among extremely poor women than among those who were better off) and distinguishing mechanisms (for example, assessing whether birth weight was higher for program participants because of higher income, better nutrition, or empowerment).
The seminar outlined the different channels through which financial incentives can influence changes in behavior, and the capacity for these incentives (especially when conditioned) to have a profound effect on indicators such as the utilization of health and education services by the poor. Participants also discussed the limitations of CCT programs, noting that emerging evidence suggests that these interventions may have little effect on overall health and education outcomes such as coverage of basic health interventions and school achievement among urban children. Speakers and panelists emphasized the need for a deeper understanding of the relative cost-effectiveness of investing in the supply side versus the demand side of the health and education systems, as well as the potential negative implications of encouraging utilization of services without a corresponding effort to increase their quality.
Over lunch, CGD president Nancy Birdsall recognized the successes of 3ie’s first year of operation, and congratulated the organizations that have supported it. She highlighted the importance of 3ie’s role as a provider of public goods that should be financed by all major actors, going on to challenge participants from institutions like the World Bank and Inter-American Development Bank to ask why their organizations have not yet joined. She reminded those present that the initial idea of financing 3ie with a small levy, perhaps 0.05% of all disbursements by bilateral and multilateral agencies, has still not come to fruition.
3ie executive director Howard White also discussed 3ie’s successes, but concentrated his remarks on what it takes to conduct quality impact evaluations. He also noted how mistakes can occur when researchers fail to consider how projects are being implemented or to investigate anomalies reported from the field.
In the event’s final session, a group of panelists—moderated by Lyn Squire, former president of the Global Development Network—discussed how researchers can more effectively bring evidence-based results into the policy process. Ruth Levine, director of evaluation, policy analysis and learning at USAID, emphasized the importance of identifying and creating long-term “durable” incentives for health- and education-promoting behaviors if programs are to have lasting impact. Levine also recommended that researchers adopt a strategy dubbed “The 3Ps”—Predicting likely problems to inform research, Prescribing clear core messages, and drawing Pictures (such as GIS maps)—to provide compelling visual demonstrations of information. Miguel Szekely, former undersecretary of education in Mexico, and Squire stressed that “bridging the gap” between research and policy will require radical thinking, underscoring the importance of devising incentives for policymakers themselves to demand good evidence for developing more effective programs.
This final topic—bridging the gap between researchers and policymakers—is one of our central concerns as we move forward. To follow our progress and be part of the debate, please sign up to receive CGD’s Evaluation Gap newsletter.
First, Kristof wrote: “The basic truth of foreign aid is that helping people is far, far harder than it looks.” And he’s right. But a big part of the difficulty is with us, not them.
Kristof highlights how our proposal for Cash on Delivery Aid would create incentives for countries receiving aid to improve education outcomes. Yes, true. But a key aspect of our proposal is first and foremost to help aid agencies and philanthropies focus their money on outcomes – more learning, lower mortality – and less on inputs and budget execution – also known as obsessive tracking of aid agency money. In our research on the foreign aid community, we found that everyone involved wants money to flow more effectively toward results, but aid agencies find it incredibly difficult to change the institutional machinery to focus on results and measure them, rather than on proving that funds were used for inputs.
Second, what Kristof didn’t say: the idea of COD Aid is to help aid agencies find a way for their aid to make recipient governments accountable to their own citizens rather than to the aid agencies – because accountability to aid agencies takes us back to tracking the money and not worrying about results or outcomes.
The Center for Global Development (CGD) and the International Initiative for Impact Evaluation (3ie) will host a workshop at CGD on Tuesday, May 4 to discuss the health and education benefits of conditional cash transfer (CCT) programs. The event, Closing the Evaluation Gap: 3ie One Year On, will also include an introduction to a special issue of 3ie’s new Journal of Development Effectiveness.
CGD president Nancy Birdsall and 3ie executive director Howard White will deliver keynote speeches at the workshop, and high-level policymakers and researchers will discuss the implications of existing evidence and propose recommendations to improve impact evaluations of CCT programs in the future. Panelists and speakers include Ruth Levine, former Evaluation Gap Working Group co-chair and current director of evaluation, policy analysis and learning at USAID; William Savedoff, a Working Group co-chair and CGD senior fellow; Marie Gaarder, 3ie deputy director; Lyn Squire, former president of the Global Development Network; and Laura Rawlings, lead specialist for the World Bank’s Human Development Network.
The link between evidence and policymaking is not a simple one, according to Savedoff. “Impact evaluations often provide a lot of useful nuanced information but policymaking thrives on big messages while trying to accommodate political, social, and cultural pressures,” he wrote in a recent blog post that cautioned against the possible misuse of impact evaluations.
The participants will explore strategies for increasing the role of evidence in development policies. “We need development and policy leaders to call for reforms that focus development interventions in a more effective way,” said White. “This event provides an opportunity to learn what is the impact of conditional cash transfer programs on people’s health and education – when they have worked and why – and what are the lessons learned,” he said.
The recent surge of interest in impact evaluations and the creation of 3ie were spurred in part by the activities of the CGD Evaluation Gap Working Group, which began in 2004 and drew upon input from 100 development policymakers and practitioners. In 2006, the group’s final report recommended strengthening evaluation efforts within major development organizations, as well as establishing an independent organization to push for more and better evaluations designed to attribute development outcomes to specific interventions. After two years of preparatory work, and with support from more than a dozen members and more than a score of associate members, 3ie was formally launched in 2009.
The May 4 workshop will run from 9:00 a.m. to 5:00 p.m. and will be held at the Center for Global Development, 1800 Massachusetts Avenue NW, in Room 1004/1006 on the lobby level. Reservations required.
By Ben Edwards
Focusing on the role of rigorous evaluation in policy interventions and highlighting lessons learned from conditional cash transfer programs in the health and education sectors, the event will be an opportunity for dialogue between high-level policymakers and researchers to discuss the implications of available evidence and propose recommendations for moving forward. The event will include an introduction to a special issue of the newly-launched Journal of Development Effectiveness focusing on CCT programs, along with a series of panel discussions focused on the capacity for social interventions to improve outcomes such as educational achievement, prenatal care, and newborn health in developing countries.
The World Bank announced this week that it will provide “free, open and easy access to World Bank statistics and indicators about development.” It is an important step for the Bank. First and foremost because it will facilitate more research and better-informed writing about development issues; but also because it recognizes that this kind of information is exactly the kind of public good that the World Bank should be producing. (You can learn more about this effort from the video embedded on this page, and access the Bank’s data catalogue here.)
In terms of the first issue – promoting more research – I recall how, when I was in graduate school many years ago, the amount of research on income inequality in Brazil was huge relative to the amount of similar research on Mexico. The reason, as best I could tell, was that Brazil made its census data available in the 1960s and 1970s to all kinds of researchers, while Mexico restricted data access to a select few. Data access, particularly in this age of computers and websites, is fodder for advancing ideas about development. This is great.
On the second issue – producing public goods – I recall a presentation by Hans Rosling (probably the most entertaining statistician alive) who asked why we pay taxes to support organizations like the World Bank and then have to pay them again to get data out of them! The point resonated with me. When I was working at the Inter-American Development Bank (IADB) in the mid-1990s, our office asked the U.N. Economic Commission for Latin America and the Caribbean (ECLAC) for access to household survey data it had collected and was told we would have to pay for it. Did I mention that ECLAC collected the data with financial assistance from (here’s your chance to guess) … the IADB?
Rosling points out in a recent video that “Statistics means bookkeeping of the state, serving the decisionmakers within the state. Times have changed. We need to get them [statistics] to the public.” Congratulations to the World Bank for doing the right thing and getting this data to the public.
A working paper distributed this month by NBER and covered in the New York Times not only contributes to the growing number of rigorous studies on public policy questions but also epitomizes changing research norms that are crucial to improving the quality of such studies.
The study, “The Oregon Health Insurance Experiment: Evidence from the First Year,” used a natural experiment to answer questions about the impact of having health insurance on participants’ health care utilization, health status, and financial stress. A team of researchers learned that Oregon, with insufficient funds to cover all 90,000 people who applied for subsidized health insurance, had chosen to enroll people by lottery. They recognized that they could use administrative and survey data from this “natural experiment” to measure effects that are otherwise extremely difficult to disentangle from other influences.
While the content of the study is important for health care debates around the world, the most striking thing to me about the paper was its attention to addressing bias in research (an issue that has concerned me before). I suspect the authors were keenly aware that anything they wrote would be subjected to enormous scrutiny in the polarized political climate of the United States, especially with regard to health policy. Whatever the reason, the authors should be celebrated for following a number of practices which should be standard for policy research.
First, they created a public archive for their research design regarding data to be collected and hypotheses to be tested before looking at the outcomes in the dataset. This is common in controlled medical trials as a way to reduce the chances that researchers will comb the data for significant correlations and justify the results post facto. This doesn’t keep the authors from extending their analysis and research but when they do so, they explicitly alert the reader that those extensions were not in the pre-specified research design.
Second, they appropriately qualify their results by noting the limits of generalizing from this particular population to other dissimilar groups. More importantly, they acknowledge that these are partial equilibrium results and cannot be used to do a simplistic extrapolation for a large scale program that might induce significant supply responses or other general equilibrium effects.
Finally, they provide a lengthy appendix that can be downloaded from the NBER website and provides the full questionnaire, more details on the research design, and alternative estimations that were excluded from the paper. All of this makes it easier for readers to judge the kinds of statements that occur frequently in research papers such as “the alternative specification was excluded for reasons of space but largely confirmed the findings presented here.”
There are two additional ways this paper can establish itself firmly as a model for more open and less biased research: first, by making the primary data available for downloading and second, by encouraging other researchers to replicate the results. Given the care of this study, I expect the authors are already planning for this. In this regard, it is encouraging to see social science journals adopting the requirement that supporting data and programs be made publicly available (e.g. see the American Economic Review’s policy). The issue of replication has been addressed elsewhere, including in a blog by Michael Clemens and another by David Roodman.
Yes, I have a lot to say about the content of the study, what it means for health care debates in the U.S. as well as developing countries. But for now, I just want to celebrate what I see as an important maturation of public policy research. Way to go.
Michael Clemens recently wrote me, saying that he gets asked this question a lot. I do, too. So I was interested when he brought my attention to a 2007 article in Forbes that discusses a number of companies that do use randomized studies. I wasn’t surprised to see Google in the list, but I never imagined that all the junk mail that I receive from Capital One might be guided by sophisticated research (though it hasn’t convinced me to sign up yet!). Progressive Insurance apparently discovered profitable lines of business (middle-aged motorcycle drivers) by randomly accepting a portion of applicants who would normally be rejected and studying their claims behavior. According to Hunt Allcott, other companies that have used randomized studies include H&R Block, PNC Bank, Amazon, Subway and Harrah’s Casino.
Businesses have an advantage in the evaluation arena because they get very effective feedback from sales data – but they still have the basic problem of comparing their performance against an appropriate counterfactual. If ice cream sales go up after a change in strategy, was it due to the new approach or to an especially hot summer? The key to answering such questions usefully is thinking in terms of testing assumptions – with information that can prove your hypotheses wrong as well as confirm when you’re right. This kind of “testing mind-set,” as argued by Thomas Davenport in the Harvard Business Review (2009), is used by companies all the time. A lot of Davenport’s points could apply equally well to improving the way aid agencies or public policies are assessed and improved.
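The ice cream example can be made concrete with a small simulation (every number here is invented for illustration): a naive before-and-after comparison bundles the strategy’s effect together with the hot summer, while randomly assigning the new strategy to half the stores within the same season lets the weather cancel out.

```python
import random
from statistics import mean

random.seed(0)
N = 1000            # stores in the chain
TRUE_EFFECT = 10    # assumed lift in sales from the new strategy
SUMMER_BOOST = 20   # confounder: a hot summer lifts sales everywhere

# Period 1: all stores on the old strategy, normal weather.
period1 = [100 + random.gauss(0, 2) for _ in range(N)]

# Naive rollout: every store switches to the new strategy during a hot summer.
period2 = [100 + SUMMER_BOOST + TRUE_EFFECT + random.gauss(0, 2) for _ in range(N)]
naive_estimate = mean(period2) - mean(period1)   # ~30: strategy + weather, confounded

# Randomized test: only a random half of stores switches, same hot summer.
treated = [100 + SUMMER_BOOST + TRUE_EFFECT + random.gauss(0, 2) for _ in range(N // 2)]
control = [100 + SUMMER_BOOST + random.gauss(0, 2) for _ in range(N // 2)]
rct_estimate = mean(treated) - mean(control)     # ~10: weather hits both groups equally
```

The before-and-after number triples the strategy’s true effect because the summer rode along with it; the randomized comparison recovers roughly the true lift of 10, which is exactly the counterfactual logic the paragraph above describes.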
So if companies are doing so much sophisticated evaluation work, why don’t we hear about it more often? As Michael pointed out, they have incentives to keep what they learn secret. Companies that have found a new profitable niche or effective marketing strategy don’t want to share the news – which is why you’ll never know when the iPhone 5 is coming out even if your best friend works at Apple. This is where public policy really diverges from the for-profit world. Companies are accountable to their shareholders; governments and non-profits are supposed to be accountable to the public through transparency and by widely disseminating knowledge from good evaluations.
Despite a host of challenges, hundreds of millions of people across the world have benefited from programs that have been rigorously evaluated and scaled up. Impact evaluation has generated knowledge about poverty and public policy leading to better programs.
At the event, policymakers and evaluators will discuss examples of how evaluation has helped enhance effectiveness, and a panel of evaluation funders will reflect on lessons learned and the way forward. In a time of political transition, we seek to re-energize the movement for increased evidence and value for money in public and aid spending.
"Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime."
This Chinese proverb has been the mantra of sustainability in the development business. It makes sense. But, oh, it can go so wrong. A recent paper by Ann Swidler and Susan Cotts Watkins looks at 10 years of research on development aid to community-based organizations dealing with HIV/AIDS in Malawi and shows how following this approach has led to some rather dismaying outcomes. In particular, they note that Malawians "know how to fish." What they lack (and this is my view after reading the paper) is access to fishing poles or fish. But this doesn't keep foreign donors from insisting on paying for training.
Swidler and Watkins follow the logic of the "sustainability" mantra to show how incentives at every level "from the international donors to the national elites, interstitial elites and local population" make funding for training (and training of trainers) the dysfunctional outcome of an otherwise well-meaning effort. The donors can claim they are "teaching the population to fish," the national elites get income and status from managing and negotiating the programs, the interstitial elites (usually young high-school-educated volunteers) get contacts and opportunities to rise socially and economically, and the local population gets … well, relatively little.
(By the way, one of the particular joys of reading this paper was seeing good qualitative research that is interpretive and gives meaning to the analysis of incentives. And it isn't anecdotal, because the authors show how their illustrations are grounded in a larger body of systematic interviews.)
Swidler and Watkins make the case for what is really needed quite clearly in their final remarks:
"It is hard to say precisely what constructive recommendations follow from the perspective we have offered here, but we do have several suggestions. First, the ideal of sustainability is a convenient self-delusion for funders and they would do much better if they could systematically and rigorously determine what projects are effective and then sustain them by paying local workers to actually do good--provide health care, sell discounted seeds and fertilizers, treat STIs, provide ARVs, supply children with books and school uniforms, or care for the ill and elderly (Kremer & Miguel, 2007). Second, since few of the approaches to AIDS prevention currently in vogue have shown any measurable effect (Potts et al, 2008), we encourage funding that responds to Malawians' desire to take care of the vulnerable in their communities, provide for their children's futures, and build economic security, independent of the issue of HIV and AIDS. Indeed, reading the proposals that Malawian villagers submitted in their usually vain attempts to gain access to AIDS funding convinces us that villagers do know what they want, but little of it is training in how to prevent, mitigate, or treat AIDS. The first two they already know how to do as well as the experts who try to advise them (Watkins, 2004), and treating AIDS has to be done through the health-care system.
"Finally, we suggest that donors consider the "hidden curriculum" their procedures teach. Requirements for elaborate proposals, bank accounts, and monitoring and evaluation might better be replaced by simple procedures that would funnel more resources to villagers and less to monitors. Such resources would create continuing projects that both villagers and employees (perhaps the brighter, more successful of the villagers' children) might rely upon. Rather than projecting a social imaginary that they find morally gratifying, donors and NGOs might provide opportunities that could sustain the realistic aspirations of those they claim to help."
Are pay-for-performance aid programs such as Cash on Delivery Aid more vulnerable to corruption than traditional input-focused programs? My guests this week, senior fellows William Savedoff and Charles Kenny, argue in a new working paper and brief that the opposite is true.
One of the exciting things about the Cash on Delivery Initiative is that once people understand the concept, they frequently come up with all kinds of new ideas for applying it. This happened most recently at the CGD-hosted book launch for Cash on Delivery: A New Approach to Aid this week. Within the course of an hour, the conversation shifted from skeptical questions to prospective applications of COD Aid. While the book outlines a proposal for channeling aid to countries that accelerate their progress toward accomplishing the Millennium Development Goal of universal primary completion, people have asked about applying it to water, deforestation, malaria and to another Millennium Development Goal: reducing maternal mortality.
This last suggestion has struck a chord with many of us. Every year, more than half a million women die from complications in pregnancy and childbirth, and 99% of these deaths occur in poor countries. What’s more, as Karen Grepin recently discussed (citing the Disease Control Priorities Project), counting stillbirths among infant deaths would mean that roughly half of all child mortality occurs in the first year of life. These deaths are largely preventable. Compelling evidence from Sri Lanka, Tunisia and Malaysia reveals that maternal and infant mortality can be drastically reduced in low-income settings by increasing access to skilled attendants and emergency obstetric care at birth. And if this isn’t reason enough to support the idea, consider this: interventions aimed at expanding coverage of skilled birth attendance demand basic reforms to strengthen health systems: improving health training, assuring availability of medical supplies, and addressing problems in management and contracting. Julio Frenk, Mexico’s former Health Minister, made this point at a recent Woodrow Wilson Center event, arguing that setting priorities grounded in women’s health drove improvements in Mexico’s health system.
So what would happen if a group of funders offered to pay $25 for each unit of a proxy indicator closely related to reducing maternal mortality – such as each birth attended by a skilled health worker? (As we emphasize in the book, defining the right indicator is critical. It must be clear, measurable and verifiable at reasonable cost. An initial step would be to confirm whether skilled birth attendance is the right measure.)
To make a credible COD Aid agreement, this indicator would be reported by the recipient government and then verified by an independent agent – perhaps through a combination of auditing the reporting process and conducting a separate survey. One of the key advantages of such an agreement is that it would let the government decide the course of action it thinks would best achieve progress. The agreement would also align incentives at the national level toward the goal, involving the Finance Ministry as much as the Health Ministry in the process. And it would give a strong boost to improving vital registration and data on births and maternal mortality.
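The mechanics of such an agreement can be sketched in a few lines. This is a hypothetical illustration only — the payment rate, baseline rule, and verification adjustment are assumptions for the sake of the example, not terms from the book:

```python
# Hypothetical sketch of a COD Aid payout calculation.
# Assumes a fixed rate per verified unit of progress above a baseline;
# all figures and the function itself are illustrative.

RATE_PER_BIRTH = 25  # USD per additional skilled-attended birth (illustrative)

def cod_payout(reported_births, baseline_births, verification_factor):
    """Return the payout for one reporting period.

    reported_births:     births attended by a skilled worker, as reported
                         by the recipient government
    baseline_births:     pre-agreement level used as the benchmark
    verification_factor: share of reported births confirmed by the
                         independent audit or survey (0.0 to 1.0)
    """
    verified = reported_births * verification_factor
    progress = max(0, verified - baseline_births)  # no payment below baseline
    return progress * RATE_PER_BIRTH

# Example: 120,000 births reported, 100,000 baseline, 95% confirmed on audit
payout = cod_payout(120_000, 100_000, 0.95)
print(payout)  # 14,000 verified incremental births x $25 = $350,000
```

Note how the verification factor discounts the reported figure before any payment is computed — which is one reason over-reporting buys the recipient little under this kind of contract.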
In Governance of New Global Partnerships, a new CGD Policy Paper, Keith Bezanson and Paul Isenman shine a light on an important feature of the international aid landscape – a cohort of international organizations established in the last two decades that tend to have focused mandates and large, complex governing boards. The paper is a welcome cautionary note to those who want to start new international initiatives, counseling them to ask hard questions about the added value of starting a new organization and the downsides of inclusiveness. The authors also provide a good set of lessons and positive messages for designing new initiatives when that makes sense.
Bilateral agencies are run by their governments. UN agencies and development banks tend to have countries as members and respond to other interests by creating consultative mechanisms. The new global partnerships, however, seek to include as many voices as possible in their governing bodies. While increasing “voice” in this way has clear benefits, the downsides in terms of complexity and conflicts of interest are frequently ignored.
For example, Bezanson and Isenman explain that the Global Alliance for Vaccines and Immunization (GAVI) was created as a partnership of donors and recipients, private and public sectors, and state and non-state actors, who initially served in different ways on four different boards. This complex arrangement was eventually simplified, in 2008, by creating a single board – though it still has 28 members, two-thirds of whom are elected by constituent partners (e.g., representing a particular category like donors, recipients, or NGOs). The Fast Track Initiative (now the Global Partnership for Education) has a membership of 46 developing countries and over 30 bilateral, regional, and international agencies, along with development banks, private sector groups, teachers, and civil society organizations. Like GAVI, its 19-member board is seated on the basis of election by different constituencies. Perhaps the most dramatic story involves the Consultative Group for International Agricultural Research (CGIAR), which was governed exclusively by donors until 2002, when it sought to “become a 21st century partnership organization” by introducing NGO and private sector representatives. The new arrangements collapsed that same year, when the partners were unable to reconcile tradeoffs among the demands of research, attention to poverty reduction, and commercial interests.
The paper also discusses how these partnerships can generate conflicts of interest. The effort at inclusion often seats countries and organizations on boards that are making decisions that stand to benefit those very same countries and organizations. For example, the World Bank’s legal counsel pointed out that it shouldn’t take funds from GAVI if it wanted to keep its seat on the GAVI board; the Bank chose to follow that advice and no longer receives GAVI funding. Countries that receive funds from the Global Fund to Fight AIDS, Tuberculosis and Malaria (the Global Fund) are also prominent in that organization’s governance, leading to potential conflicts of interest when the Global Fund assesses their performance and allocates funds.
From my experience both serving on and creating boards, there is a more insidious problem with partnership arrangements that rely on constituent representation. Too often, board members in these multi-stakeholder organizations see their role as representing competing interests rather than as directors responsible for setting strategies that best serve the organization’s mandate. A board member’s primary allegiance should be to the organization. Members can bring their outside experience, perspective and understanding to the table, but they shouldn’t be tempted, or instructed, to serve as a spokesperson for this particular foundation or that particular government.
The answer is not to restrict membership of governing bodies. Voice is important. Diverse perspectives are also incredibly important. But after reading this paper, I think most readers will agree that designing a new governing body requires thinking not only about inclusion but also about undue complexity and conflicts of interest.
The Tea Party movement in the United States had a big impact on this year’s mid-term election. The energy it channeled can be seen as a pendulum shift from the progressive winds that were blowing in 2008. So what comes next?
Greg Mankiw (Harvard economics professor) posted this dramatic rallying cry on his blog this week. The sign, hoisted at a pre-election “Rally to Restore Sanity and/or Fear,” presages what I believe we are going to see emerge as the major force for change in the United States in 2012 – a campaign for evidence-based policy! A large slice of the U.S. electorate is growing weary of fact-free policy and pseudo-science. I predict we will see house parties organized in the next few months to read and discuss back issues of CGD’s Evaluation Gap Newsletter. It won’t be long after that before they’ll be demanding that President Obama follow through on his stated commitment to make foreign aid “accountable” and citing Michael Clemens’s blog post calling for programs like the Millennium Villages Project to do the research needed to see whether they really represent a sustainable route out of poverty. The politicians who ride this wave in 2012 will be the ones able to cite the meta-analyses that informed their platforms.
I think it is fitting that Mankiw posted this photograph. After all, the preface to his textbook, Principles of Economics, makes a case for why students should study economics. “As a voter, you help choose the policies that guide the allocation of society’s resources,” he writes. I think we even have evidence to support that claim.
[Thanks to Sarah Jane Staats, who brought the photograph to my attention.]