

Independent research & practical ideas for global prosperity 

Evaluation Gap Update 
November 2012

Dear Colleague,

Preparing this month’s update left me a little breathless. Today’s whirlwind of impact evaluation news includes a special journal issue by and about Africans, the biggest experiment in evaluation to date, and a new guide for systematic reviews. Controversies also continue to swirl over methods, standards, and relevance. Don’t expect any of these storms to calm down soon.

Regards,


William D. Savedoff
Senior Fellow
Center for Global Development

Evaluation out of Africa

Developing an evidence base requires policy makers who care about evaluation and researchers with the capacity to conduct it. Africa has more of both than ever before. The evidence? “Impact Evaluation in Africa,” a special issue of the Journal of African Economies, co-edited by Marcel Fafchamps of Oxford and Andrew Zeitlin of Georgetown University (also associated with CGD). In a recent blog, Justin Sandefur praised the included articles for their rigor – particularly their attention to separating causation from correlation. He also highlighted articles that push into the difficult terrain of large policy questions, such as the impact of Kenya’s “free primary education” reform on gender differentials in enrollment and of Ethiopia’s productive safety net program of transfers to small farmers.

Image: Flickr user Roland Urbanek

The biggest evaluation experiment yet?

The Millennium Challenge Corporation (MCC) recently published five impact evaluations of farmer training programs – the first of many, because MCC, unlike most other development agencies, is conducting such studies for about 40 percent of its portfolio. Bill Savedoff blogs that this makes MCC the biggest experiment in evaluation yet: an entire agency seriously committed to producing impact evaluations on a large share of its operations and disseminating them publicly. In follow-up blogs, Savedoff reflects on what these studies mean for improving impact evaluations and for the future of farmer training programs.

Image: MCC

Systematic reviews … again!

Our last update cited several systematic reviews that showed the limitations of existing evidence. Systematic reviews are an important tool for drawing conclusions from a large number of studies, yet when they are conducted poorly, they can confuse more than edify. With this concern in mind, Adam Wagstaff blogged that “systematic reviews in the health systems field seem to be systematically unsystematic!” Fortunately, the September special issue of the Journal of Development Effectiveness published a guide to conducting systematic reviews. The paper addresses all the ingredients of a good systematic review: defining questions and scope, conducting a full literature search, assessing quality, choosing methods of synthesis, and judging generalizability.

Image: Flickr user ajstarks

Resources

  • The Children and Violence Evaluation Challenge Fund issued a call for proposals to evaluate interventions aimed at preventing violence against children in low- and middle-income countries, due December 17.
  • 3ie and IFPRI have an ongoing joint seminar series on impact evaluation. In the next seminar, Paul Winters will present a live webcast on Kenya’s cash transfer program (date to be announced).
  • An impact evaluation design course for researchers, project managers, and practitioners will take place March 18–22 at the Institute of Development Studies in the UK. The application deadline is February 1, 2013.
  • David McKenzie provides a pre-analysis plan checklist for impact evaluations, arguing that the extra work improves survey design, expedites data analysis, and reduces bias.
  • A new research paper showed “increases in agricultural investment among farmers who received rainfall insurance, but relatively small effects among those who received cash,” highlighting exposure to risk, rather than lack of capital, as a major constraint.
  • In an interview, Velayuthan Sivagnanasothy, Secretary of Sri Lanka’s Ministry of Traditional Industries and Small Enterprise Development, talks about the value and uses of impact evaluation.
  • A Financial Times article heaps praise on rigorous evaluation: “ … even an unmitigated failure would have produced valuable information about what works and what does not.”