April 2007

Evaluation Gap Update
March 2007

In this issue:

Leading Edge Group Pushes the Initiative Forward at a Meeting in Bellagio, Italy
From February 16-20, at the Rockefeller Foundation's Bellagio Conference Center in Italy, Leading Edge Group members met to resolve key questions in the design of an international collective institution for promoting more and better impact evaluations of programs in developing countries. Participants developed proposals to take back to their foundations, agencies and governments. A full report of the meeting will be made available on our website in several weeks. For a brief overview and highlights from the meeting, read Ruth Levine's announcement on our blog, Views from the Center.

Global Development Network Funds 20 Impact Evaluations of Health Programs
The Global Development Network (GDN) held its Eighth Annual Global Development Conference last month in Beijing at which it reviewed proposals for impact evaluations of health programs in Asia and Africa. GDN received about 600 proposals and ultimately selected 20 to receive funding. Many of the proposals demonstrated how smaller scale studies can help to answer policy-relevant questions of global interest.

African Evaluation Association Addresses Evaluation Gap at Conference in Niger
The African Evaluation Association (AfrEA) held its annual conference last month in Niamey, Niger, drawing a wide range of evaluation experts from around the world. The Evaluation Gap initiative was discussed in several sessions and generated considerable debate. Exchanges among participants clarified a number of misunderstandings, addressed substantive disagreements, and led to calls for continued dialogue and collaboration. CGD participated in a panel discussion, alongside representatives from CARE, IDRC, UC Irvine, the Center for Policy Alternatives (Ghana), the Global Environment Facility and The World Conservation Union, that affirmed the value of using a range of methods to learn how to improve social and economic development projects.

Evaluation of World Bank Research Highlights Importance of Applying Research Methods to Operational Questions about Impact
A group of academic economists chaired by Angus Deaton recently assessed the vast collection of World Bank publications, reports the Economist in "What the World Bank Knows". In the report, the evaluators recognized the Bank's significant research contributions while expressing concern that Bank management may overemphasize fragile research findings that support institutional policies and undervalue research that provides contradictory evidence. Among other recommendations, the panel urged the Bank to build evidence for program design with micro-level studies on particular development questions.

Development Agencies Focus on Impact Evaluation
A task force has been set up within the OECD Development Assistance Committee's (DAC) Evaluation Network to help coordinate impact evaluation efforts among DAC members. As described by the DFID Aid Effectiveness newsletter, "The task force initially plans to complete two activities: (a) production of good practice guidelines for impact evaluation that all donors can use and (b) a web-based database of impact evaluation studies produced by donors and multi-laterals."

Implementation Science: A New Focus For NIH's Fogarty International Center
The US National Institutes of Health's Fogarty International Center recently announced that it will add "implementation science" to its strategic plan. Similar to impact evaluation, NIH defines implementation science as "research to develop and test effective and efficient methods, strategies and models for completion of research-tested health interventions." One example of implementation research provided in the FIC publication, Global Health Matters (pdf), is a study that adapts a successful developed-country intervention to a developing-country setting.

US Legislation Incorporates Provision for Rigorous Evaluation
US Senator Feingold recently introduced legislation (pdf) in Congress that would require rigorous evaluations of publicly sponsored activities to prevent and reduce crime in public housing. Importantly, the legislation sets aside a share of program funds for such evaluations, and it recommends that random assignment methods be used where appropriate to produce "scientifically-valid knowledge." The Coalition for Evidence-Based Policy helped make this possible through its continued advocacy for better evidence in domestic policymaking.

Additional Resources on Impact Evaluation

  • "Evaluation Bias and Incentive Structures in Bi- and Multilateral Aid Agencies" in Review of Development Economics discusses the disincentives agencies face that result in a critical shortage of rigorous impact evaluations.
  • "Mitigating Myths about Policy Effectiveness: Evaluation of Mexico's Antipoverty and Human Resource Investment Program" by Jere Behrman and Emmanuel Skoufias provides a comprehensive history of the acclaimed impact evaluation of the Mexican conditional cash transfer program, PROGRESA.
  • The Abdul Latif Jameel Poverty Action Lab (J-PAL) will hold its third annual course on Evaluating Social Programs using randomized trials, designed to give participants the knowledge and training needed to measure the impact of poverty programs in developing and advanced countries. Taught by experts with extensive experience evaluating programs in the field, the course will be held May 29 to June 2, 2007 in Cambridge, Massachusetts. The program fee is $3,950, and a limited number of scholarships are available. The application deadline is March 15, 2007 (March 1, 2007 for scholarship applications).

Recent Blog Posts on Evaluation

If you have news or highlights to share in future mailings, please send them to Jessica Gottlieb ([email protected]).

Thanks and regards,

Ruth Levine
Director of Programs and Senior Fellow
Center for Global Development