Independent research & practical ideas for global prosperity 

Evaluation Gap Update 
May 2012

Dear Colleague,

Good methodologies and careful research are critical to generating credible evidence from impact evaluations. This month’s update addresses other factors that are essential complements for translating that evidence into action: mixed progress in Latin America on promoting the use of impact evaluations in policymaking, a guide to low-cost evaluations, and a video that makes a complicated evaluation technique look easy.

In preparing this update, I realized that this series is in its seventh year. If you’d like to find out how it all got started, check out the Evaluation Gap Initiative’s site. Or browse through our archive of Evaluation Gap Updates to see the items we’ve highlighted over the years for being insightful, useful, or simply amusing.

Regards,


William D. Savedoff
Senior Fellow
Center for Global Development

Institutionalizing evidence for policy: bad news, good news

First, the bad news. In 2010, we reported that Argentine legislators led by Eduardo Amadeo had introduced legislation to institutionalize impact evaluations of public programs. This month, we learned that Argentina’s congressional leadership has officially dropped the proposed legislation, though Amadeo says he plans to continue his efforts. There is good news, though, from the Pacific coast. Two Peruvian ministries have created the Comisión Quipu – an organization charged with improving the empirical basis for public policies. The commission includes representatives from the government and domestic universities as well as international experts. It will receive technical assistance from J-PAL, Innovations for Poverty Action, and Soluciones Empresariales Contra la Pobreza (SEP). According to J-PAL’s announcement, the commission is named for the quipu, the knotted cords the Incas used to measure and track their empire’s finances and demographics.

Image: Ministro de Economia, Peru

How much does a good evaluation cost?

The cost of rigorous evaluations depends on many things, including the question being studied, the context, and the required level of precision. So why do people complain about the cost of studies as a general matter when costs vary so much? And why discuss costs without considering the benefits of the information the studies generate? The Coalition for Evidence-Based Policy contributes some evidence about costs to this debate in “Rigorous Program Evaluations on a Budget: How Low-Cost Randomized Controlled Trials Are Possible in Many Areas of Social Policy.” This brief guide describes five well-conducted, low-cost studies whose total costs ranged from $50,000 to $300,000. Introducing random assignment accounted for only a small share of those costs (between $0 and $20,000), and all five studies produced practical, useful evidence for public policy.

Explaining rigorous evaluations well is part of the challenge …

… but this video from the International Growth Centre shows it can be done. In the video, Karthik Muralidharan (University of California, San Diego) and Nishith Prakash (University of Connecticut) explain how they measured the impact of a program in Bihar, India, that gave bicycles to girls to promote increases in high school enrolment. Though the results are preliminary, the method seems robust. Muralidharan and Prakash control for other factors by contrasting the change in girls’ enrolment over time with the corresponding change for boys within Bihar. They then go one step further, contrasting that girl-boy difference with the same difference in a neighboring state that did not have the bicycle program. Smart research design; excellent explanation of results.
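
For readers who like to see the arithmetic, the design described in the video is what evaluators call a “triple difference.” In rough notation (ours, not the authors’), where each Δ is the change in enrolment from before to after the program’s launch:

\[
\text{DDD} = \left(\Delta\text{Girls}_{\text{Bihar}} - \Delta\text{Boys}_{\text{Bihar}}\right) - \left(\Delta\text{Girls}_{\text{neighbor}} - \Delta\text{Boys}_{\text{neighbor}}\right)
\]

The first difference nets out statewide shocks that affect boys and girls alike; subtracting the neighboring state’s girl-boy difference then nets out broader trends in girls’ enrolment that have nothing to do with the bicycles.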

Image: Flickr user Avram Iancu/ CC

Resources