Independent research & practical ideas for global prosperity 

Evaluation Gap Update 
May 2013

Organizations, bureaucracies, and policies seem to be the way we human beings struggle to make learning, adaptation, and improvement more systematic. This newsletter highlights an independent commission charged with monitoring the quality of foreign aid, two offices that integrate experimentation with impact evaluation, and a new list of searchable impact evaluation databases. All are efforts to establish regular patterns of behavior that will generate better evidence and promote its use.

The resources below also contain numerous opportunities for individual learning: two residential courses, a distance-learning course, and a set of non-technical notes. Then you can test your knowledge by explaining the statistical lesson about green jelly beans.

Regards,
William D. Savedoff
Senior Fellow
Center for Global Development

ICAI learns from evaluating an evaluation

The United Kingdom’s Independent Commission for Aid Impact (ICAI) has published a unique study of an anti-poverty program in India. ICAI commissioned researchers to return two years after the completion of a DFID evaluation to assess the earlier study’s reliability and the sustainability of the project’s results. The study was a brilliant way to check on the project itself (did poverty really decline?) and to gain insight into how DFID conducts impact evaluations for learning, adaptation, and dissemination. ICAI is a commendable UK innovation for holding aid programs accountable, as we noted when it was founded in 2011. Now, after two years of experience, the UK is reviewing ICAI’s mandate and performance, and we’ve thrown in our two cents.


Learning Laboratories

Many institutions conduct impact evaluations, and others implement projects that are evaluated, but how many institutionalize experimentation with evaluation? A comment in the Lancet drew attention to two initiatives that epitomize such an approach: the CMS Innovation Center at the US Centers for Medicare & Medicaid Services (CMS) and the Development Innovation Ventures office at USAID. In both cases, the units act like experimental laboratories, using modest funding to test innovative approaches, explicitly embracing risk, and systematically studying outcomes. At the end of the day, they know something about why an innovation succeeded or failed. Victoria Fan goes into greater detail about the CMS Innovation Center in her recent blog post and argues for this model to be more widely embraced.

Where in the world is that … impact evaluation?

Where do people go when they are looking for evidence from impact evaluations? They may use general internet searches (like Google Scholar), systematic reviews (like those at 3ie and the Cochrane Collaboration), or literature reviews in specialized journals (like the Journal of Development Effectiveness or the Journal of Economic Literature). Still, the idea of a dedicated searchable repository for impact evaluations is compelling enough that many development organizations have started databases for these studies. In a recent blog post, Savedoff and Collins provided a list of 14 evaluation databases and invited readers to submit corrections and additions; the updated list now includes 17 entries. Drew Cameron at 3ie has also weighed in, explaining why “all databases are not equal.”


Resources

Thanks to Ted Collins for his support in putting together this newsletter.