Existing Initiatives
Concern about the Evaluation Gap is widespread as demonstrated by the many ways that public agencies, intergovernmental commissions, non-governmental networks, research centers, and foundations are addressing it. In particular, initiatives are underway to:
- Increase access to existing information through reviews, searchable databases, and policy pamphlets and newsletters;
- Improve regular data collection by developing country governments and develop aggregate indicators;
- Promote specific evaluations with grants and other kinds of funding; and
- Conduct research and demonstrate good evaluation practices.
The following discussion of initiatives is by no means comprehensive. Rather, it is presented as a demonstration of the range of existing efforts.
Access to data and information
Numerous organizations are trying to make existing information and data more readily accessible. The OECD’s Development Assistance Committee has a searchable Evaluation Inventory of studies done by its member bilateral assistance agencies. IDS, with support from DFID, has a database of studies called “ID-21” (www.id21.org) with an associated strategy for outreach and dissemination via an e-newsletter. Other initiatives aimed at increasing the exchange of existing information include the Development Gateway and the Global Development Network, as well as official channels such as the United Nations Evaluation Forum and the ECG Network (comprising multilateral development banks).
Some initiatives aim to provide access to knowledge by synthesizing the results of many studies on the same question. The Campbell Collaboration has established a process to generate systematic reviews of programs in education, crime and justice, and poverty reduction. The Cochrane Collaboration has taken the lead in the medical field but has paid only limited attention to health system policy. The Robert Wood Johnson Foundation also has a “Synthesis Project” oriented toward health policy. The Canadian Health Services Research Foundation is currently analyzing methods for synthesis of social policy studies.
Better data collection
A range of initiatives aim to improve data collection in developing countries by conducting surveys or building local capacity for ongoing data collection. Examples include the Demographic and Health Surveys sponsored by USAID, the Living Standards Measurement Surveys sponsored by the World Bank, the MECOVI program sponsored by the Inter-American Development Bank to support the improvement of government statistical offices, and a recent initiative by PARIS21 to improve data collection (Scott 2005).
Other initiatives aim to increase the capacity of local stakeholders or researchers to conduct good quality evaluations, including programs sponsored by the World Bank's Operations Evaluation Department, the Canadian Health Services Research Foundation, and many bilateral agencies.
In addition, international efforts are aiming to standardize and systematize the collection and interpretation of indicators, such as the Health Metrics Network, the Child Survival Partnership, and the Millennium Project.
Financing and conducting impact evaluations
Every bilateral and multilateral agency and almost every government has contracted an impact evaluation at some time or other. Some agencies and private foundations have also established grant programs that are open to unsolicited proposals (e.g. the Canadian Health Services Research Foundation, the Bill & Melinda Gates Foundation, and the Development Gateway).
Many developing countries are taking their own initiatives to learn from social development programs through better impact evaluations. Agencies in Chile, Kenya, and India have started or actively collaborated in designing good impact evaluations because they recognize the value of the information such studies will yield. Mexico has even passed legislation requiring impact evaluations of a wide range of social development programs.
A wide range of research centers, such as the Institute for Fiscal Studies (London), the Instituto Nacional de Salud Pública (Mexico), GRADE (Peru), and the International Food Policy Research Institute (Washington), have established reputations in supervising, conducting, and advising on impact evaluations of social programs in developing countries. Several international programs aim specifically to increase the number of skilled evaluators in low- and middle-income countries, thereby building a supply of researchers and promoting an appreciation for evaluation findings within public policy debates.
Most international agencies also have internal initiatives aimed at improving impact evaluation. Interviews with staff at multilateral development banks and bilateral agencies indicate that they are aware of the need for better impact evaluation and that several initiatives are underway to improve the number and quality of such studies. The World Bank’s DIME program illustrates the kinds of steps that institutions can take to better link their operational and research capacities, in partnership with developing countries, to generate knowledge from impact evaluations on selected thematic areas (See Box 2).
Of the initiatives above, the only ones that address the fundamental incentive problems are those that involve internal reform of public agencies – and for most organizations these efforts are infrequent and fragile without some form of sustained external support. Something far bolder, with greater collective support and engagement, is required.
We believe that a desirable solution would have the following characteristics:
- Focus specifically on the public good aspect of impact evaluation and directly alter the incentives for producing good impact evaluations;
- Be a collective response to the problem;
- Mobilize additional funds only to the extent that they are necessary to leverage existing funding;
- Establish a process for assuring quality and for distinguishing high quality impact evaluations from those of poor quality;
- Direct impact evaluation work toward questions of enduring importance and high value to decision-making; and
- Support intelligent and strategic selection of a relatively small number of subject programs from which the most can be learned.
The kinds of impact evaluation studies that should be promoted would:
- Address questions of enduring importance;
- Measure the net impact of a program or policy by establishing appropriate unbiased controls ex ante and utilizing rigorous methods;
- Earmark substantial resources for random assignment studies;
- Involve collaboration among program designers, researchers, and implementers from start to finish; and
- Engage policymakers in defining questions and in discussing findings.
A wide range of solutions was considered. The least demanding suggestions involved working through existing institutions and advocating changes in existing practices. Other suggestions required inter-agency and inter-governmental accords, commitments to earmark funding, or the establishment of new institutions.[i] One specific idea, for example, was the creation of an entity that would coordinate evaluations across development agencies, principally by establishing common priority questions and quality standards; the funding of the evaluations themselves would remain the responsibility of development agencies. Another idea, elaborated in more detail below, would be to create a facility able to mobilize and distribute resources for independent impact evaluation.
The approach of consultation, coordination, quality assurance and communication has some clear advantages. To some extent, these functions are currently being undertaken by existing bodies, and so such an approach would be relatively straightforward to fund and implement. Under an alternative scenario, the Club would additionally mobilize and strategically allocate funds for the design and conduct of impact evaluations. Most, but not all, Working Group members favored this latter approach, for the following reasons:
- The value of independence: An independently funded evaluation is far more likely to be seen as credible by the range of policymakers and stakeholders who use evaluation results to decide which social programs to support and in what form.
- The benefits of single-mindedness: An institution or group that has the core mission of promoting and generating impact evaluations is more likely to be able to achieve this aim than an institution with many other roles. In donor agencies, for example, whose main business is delivering funding for projects, the operational demands often dominate.
- The potential for shared resources: If every organization funds impact evaluations out of its own resources, inevitably some agencies and governments will have more evaluation funding than others. This greatly limits the ability of smaller and/or less well-resourced governments and agencies to engage in policy debates about what works and what doesn’t, to demonstrate the effectiveness of their own programs, and to learn. Pooled resources can level this playing field.
Based on our understanding of the need to fundamentally alter incentives for the conduct of impact evaluations, we propose the creation of a new, external, and independent institution that would have a variety of coordination and standard-setting functions but would also directly fund evaluations.
Such an institution would not be imposed on governments and agencies, nor should it control their evaluation priorities or studies. Rather, it should be voluntary to assure that (1) it continues only so long as it demonstrates value to its members; (2) it does not create unnecessary additional burdens for its members; and (3) it does not interfere in internal budget allocations or priority setting. The institution's influence on member organizations should be exercised through information, demonstration, and persuasion.
This is not a proposal for an undertaking by the Center for Global Development, whose mission does not include conducting or managing major impact evaluations. Rather, it is a proposal for consideration by the broad international community of developing country governments, donor agencies, international technical agencies and coordinating bodies, and their constituencies.
An International Impact Evaluation Club
This consultation draft recommends that governments, international agencies, and private foundations establish an Impact Evaluation Club with its own governance, dues, standards and staff.
The particular design of such an Impact Evaluation Club will be determined by its initial members. The following ideas illustrate a potential framework for addressing the incentive problems, the public good issues, and the need for independence and technical excellence. Additional details for how such an institution could be constructed appear in Box 6.
Membership in this "Club" should be voluntary, and the "Club" should have a clear mandate to promote and finance impact evaluations that:
- address questions of enduring importance;
- would not otherwise be conducted;
- are hard to finance in other ways;
- provide models of good practice for emulation; and
- promote stronger evaluation standards.
In promoting good impact evaluations, the Impact Evaluation Club could help developing countries and international agencies to cluster studies around common themes and questions so as to increase their usefulness in learning what interventions are effective and under what circumstances.
Studies with randomized assignment face the largest obstacles relative to their promise in knowledge building, so more than half of the Club's funds should be earmarked to support studies with randomized designs. The rest of the Club's funds should be used to support good quality impact evaluations that use other methods.[1]
The incentive problems are specifically tackled by separating two different decisions: (1) a high level decision by governments, agencies and foundations to dedicate funds for good quality and independent impact evaluations, and (2) a program level decision to conduct an evaluation.
The incentives for high-level decision-makers to seek Club membership would be:
- to leverage funds – because the institution could potentially access more funds than it contributed if its impact evaluation proposals are accepted;
- to participate in a committee that would select "enduring questions" to guide requests for proposals;
- to participate in a committee that would identify potential subject programs for studies based on expected learning; and
- to comply with mandates from their stakeholders requiring implementation of results-oriented management.
Public program managers, agency staff, and policymakers who have an interest in learning from impact evaluations would find that the Impact Evaluation Club lowers the costs and barriers they face, because the Club would:
- provide short-term grants for exploring the feasibility of evaluating a program or collecting baseline data;
- be a well known source for longer term funds dedicated to impact evaluation;
- provide models for good impact evaluation design and implementation;
- give external credibility, legitimacy and continuity to impact evaluation studies; and
- act as a link to experts and technical review processes.
The Impact Evaluation Club could improve international learning from impact evaluations in several ways: by setting quality standards, exchanging information among members, and providing long term and substantial funding for impact evaluations.
Quality Standards and Clearing House: The Impact Evaluation Club could develop standards for good quality impact evaluations or identify existing ones. It would then endorse an existing process, or establish its own, for registering impact evaluation studies and providing independent reviews that certify the degree to which a study meets established standards. The registration of studies would serve to inform others of the existing range of studies and to begin to address publication bias.[ii]
Thematic Selection. Contributing entities would be encouraged to have their representatives participate in periodic deliberations regarding the questions they would like to see addressed by impact evaluations. This non-binding guidance would provide valuable information to the "Club" in soliciting proposals and developing its program of activities. The information exchanged among members and disseminated by the "Club" would also help institutions cluster impact evaluation work around common themes to further the collective interest in learning about common questions and about the generalizability of specific interventions.
Funding for Impact Evaluations. The Impact Evaluation Club would require members to contribute funds and/or to dedicate a portion of their internal budgets to impact evaluation. A member’s contribution would bear some relation to the scale of its development assistance, national budget, or endowment. Although international agencies and donor governments would benefit substantially from the creation of this international fund, the real beneficiaries would be citizens in developing countries to the extent that knowledge from good impact evaluations would be used by policymakers in their countries to accelerate social development.
Developing country governments would be empowered by the Club in several ways. First, they could participate fully as members in advising the Club's administrative team regarding priority areas for research. Second, as members, they would dedicate some domestic resources to impact evaluations that would accelerate their own learning about which of their domestic social programs are effective. Third, they could leverage additional funds to supplement domestic resources for these studies. Fourth, they could set the priorities for which internal programs should be evaluated by choosing which proposals to submit for funding. Fifth, they would gain access to expert advice on impact evaluation design. Finally, the studies commissioned under the Club's guidelines would have international credibility, gaining legitimacy from the review process and transparency created by this independent institution.
Completing good impact evaluations requires a substantial commitment of funding and time. A more detailed study of financing requirements should be undertaken as part of negotiating the final design of the Impact Evaluation Club; however, prospective members should recognize that some studies might cost as much as US$10 million to US$20 million over a 7- to 10-year period. An initial estimate suggests that within five years, the Club could be operating with an annual budget of $30 million, of which 7% would be dedicated to administration and professional networking. Most of this funding would be additional to current spending on impact evaluations in member organizations. In addition, funds that are currently dedicated to evaluation in donor-funded social development projects could be used more effectively with the addition of resources at the margin from the Impact Evaluation Club.
The Club would finance impact evaluations on a competitive basis. Any studies chosen for support by the Club would have to fulfill the following conditions:
- the research design is independently reviewed under rigorous quality standards;
- basic fiduciary responsibilities are fulfilled;
- the final study is submitted for independent review under rigorous quality standards; and
- once approved on methodological grounds, the study must be placed in the public domain regardless of its findings, and the data must be made available to other researchers.
Strengthening existing initiatives
As discussed earlier, a variety of initiatives currently address particular aspects of the Evaluation Gap, many of them proceeding within existing institutions. The creation of an Impact Evaluation Club would complement and strengthen these initiatives in several ways. By becoming members of the Club, organizations would be able to strengthen their internal efforts at improving impact evaluation by linking them to an external and independent source of standards and credibility. Organizations that are not members would still have access to the registries, guidelines, and studies that the Club would finance. Furthermore, the Club would interact with groups that are trying to improve the quality of impact evaluations and synthesize their results, providing a channel for dissemination and communication.
Specifically, the Club could encourage its members to:
- Fund, promote, and disseminate more synthesis studies via existing channels (e.g. the Campbell Collaboration, CHSRF, the Robert Wood Johnson Foundation). These initiatives address the Evaluation Gap in two ways: they show the value of impact evaluation findings where they exist, and they demonstrate the magnitude of the Evaluation Gap where the evidence base is weak.
- Distinguish impact evaluations on the basis of quality. Existing initiatives could be encouraged to propose and disseminate criteria for distinguishing evaluation studies on the basis of the quality of their design, data, execution, methodology and analysis. This would make it easier for policymakers and their advisors to assess the reliability of different studies and would create an incentive for evaluation designers to pay attention to quality standards.
- Create "rapid-response" trust funds to support incorporating impact evaluations during program preparation. The program preparation phase is the ideal moment for program designers, researchers, implementers and policymakers to identify appropriate questions and methods for a proper and useful impact evaluation. Small amounts of funds at this stage can have a large impact on subsequent expenditure and the effectiveness of any future spending on evaluation. Specific funds can be established within existing development agencies or made available externally to provide such rapid-response funding to contract evaluation advisors and collect baseline data.
- Lobby for legislation in middle- and low-income countries to promote more and better impact evaluations of social development programs; and in high-income countries to promote more and better impact evaluation of programs funded by bilateral and multilateral development agencies.
- Support initiatives that aim to increase the capacity for conducting good impact evaluations in low- and middle-income countries.
Hope for the future
Governments, public agencies and private foundations are making progress toward a world with better health, more education and less poverty. But we can reach those goals faster and more effectively by systematically building knowledge about what kinds of social development interventions do and do not work. The recommendations proposed here will not single-handedly achieve this goal, but they can contribute an important, and missing, element to that effort – a way to find out what works.
[i] Working Group members discussed alternative solutions at length. Many favored the solution proposed here while others were not persuaded of the need for a new institution with its own ability to finance impact evaluations. This issue is still under discussion and will be addressed by the Working Group in the future once feedback to this consultation draft has been received from stakeholders.
[ii] Publication bias results from the greater likelihood that a study will be published if its results are positive (i.e. showing the impact of a program) than if its results are negative. By registering all studies at inception, conclusions from published works can be appropriately qualified.
Box 6. Illustration of how an Impact Evaluation Club could be configured
The following discussion of how a "Club" could be configured is intended purely as an illustration and as a focus for comments and debate.
Mandate The Impact Evaluation Club (ImEC) is a non-profit institution whose mandate is to increase learning about effective social programs in low- and middle-income countries through encouraging high quality impact evaluations of social development interventions, particularly through promoting random assignment studies. To achieve this, most of its funds will be earmarked for random assignment studies; it will create or adopt high quality standards; it will engage with the policy community in all phases of the project cycle and program activities; and it will promote transparency by disseminating the results of all studies along with associated data. Initial objective: Within 5 years of its inception, the Club will have directly financed 10 good random assignment studies on important questions and supported another 10 good random assignment studies through technical assistance or project preparation funds.
Structure The ImEC would have the following structure:
- A Board of 7 members, at least two of whom must be selected on the basis of their technical expertise and knowledge of impact evaluations and methods (including random assignment). The remainder would be appointed or elected by the Club's Member Institutions. Board Members would not represent specific members or their interests; rather, they would be charged solely with carrying out the Club's mandate and holding the Club's administration accountable for fulfilling it.
- A larger Advisory Committee, composed of dues-paying Members' representatives, convened to share information about impact evaluation opportunities, to debate the leading questions of the day, and to provide non-binding advice to the Club's Administration regarding themes and priorities. The Advisory Committee would also assist members in exchanging information about planned and ongoing impact evaluations and encourage clustering of research around questions of common interest.
- An Administration staffed for the following responsibilities: manage peer reviews of evaluation proposals; make decisions regarding which proposals will be funded; monitor funded studies to assure compliance with the Club's standards and mandate; issue requests for proposals on previously identified themes; raise funds from and provide services to members; collect and disseminate impact evaluations; manage and disburse funds; and submit semi-annual reports to the Board on all activities.
Funding Windows Activities would be financed through at least three "windows":
Conditions for Funding Recipients All studies financed by the ImEC:
Funding Members will pay dues to join the ImEC. The minimum required dues will vary in proportion to an organization’s size and income (e.g. different rates for multilateral development banks, large bilateral agencies, developing country governments, private foundations, and NGOs). Members will be able to contribute as much as they like, above and beyond their dues, as unrestricted funds in support of the ImECs program or as "matching grants" to encourage greater financial participation by others. Innovative mechanisms may be designed – for example, developing country governments might be asked "to contribute" in the form of earmarking internal resources for impact evaluations.
Benefits of Membership
- Priority for funding: Once they have satisfied all the conditions and passed the review process, research proposals from member organizations would receive priority over proposals from non-members.
- Input on "enduring questions": Members would be able to provide non-binding guidance, through participation in the Advisory Committee, regarding which questions should be the subject of requests for proposals under window 3 funding.
- Coordination: Members would be able to exchange information on planned and ongoing impact evaluations and reach agreements on clustering research around particular themes or areas of common interest.
- Technical review: Members would have access to technical review services provided in the course of the Club administration's work.
In addition, Members would benefit from being associated with an organization that has the explicit aim of generating knowledge for better results – a symbol of "good governance."
Budget Estimates Administration of the Club will require three operational departments: (1) a financial and operational management office, (2) a grant review office, and (3) a technical services and information office. The initial budget would require between US$1 million and US$3 million per year to cover administrative expenses and to begin funding preparatory work for impact evaluations. After five years, however, the Club should be running at its full capacity. At that time, the administrative budget will cover approximately 15 staff members and funding to contract peer reviewers and external technical advisors, along with a full program of impact evaluations.