Tag: Evaluation Gap

It Takes Two to Quango: Does the UK’s Independent Commission for Aid Impact Duplicate or Add Value?

The United Kingdom has been a stalwart funder and innovator in foreign assistance for almost 20 years. In 2011, it created the Independent Commission for Aid Impact (ICAI) to report to Parliament on the country’s growing aid portfolio. ICAI is a quango in Brit-speak (a quasi-autonomous non-governmental organization) with a four-year mandate that is undergoing review this year. Recently, I took a look at the reports it has produced to see whether the organization is fulfilling its role in holding the country’s overseas development aid programs accountable. I found one fascinating report that shows what ICAI could be doing, and many more that made me wonder whether ICAI is duplicating work already within the purview of the Department for International Development (DFID), the agency that accounts for most of the UK’s foreign assistance programs.

The Biggest Experiment in Evaluation: MCC and Systematic Learning

The Millennium Challenge Corporation (MCC) recently published five impact evaluations of farmer training programs, the first of many, because MCC, unlike most other development agencies, is conducting such studies for about 40 percent of its portfolio. I would argue that this makes MCC the biggest experiment in evaluation: an entire agency seriously committed to producing impact evaluations on a large share of its operations and disseminating them publicly.

Impact Evaluations Everywhere: What’s a Small NGO to Do?

I frequently get inquiries from organizations that recognize the importance of rigorous evaluation and yet aren’t quite sure how they can do it. They see the growing number of random-assignment and quasi-experimental studies and are attracted to the apparent objectivity and relative certainty of quantitative work, but they are often reluctant to dive into those approaches. Sometimes organizations have reasonable concerns about costs, lack of expertise, or the applicability of such approaches to the questions they care about.

Don’t Do Impact Evaluations Because…

Recently, someone who will be running a workshop for people who implement and evaluate programs called me for advice. She asked me to help her anticipate the main objections raised against doing impact evaluations (evaluations that measure how much of an outcome can be attributed to a specific intervention) and to suggest possible responses.

Will Politicians Punish the MCC for Doing Evaluation Right? Mexico Shows a Better Way.

This is a joint post with Christina Droggitis.

The Millennium Challenge Corporation (MCC), a trailblazing U.S. development agency, is doing the right thing by publicly releasing impact evaluations of its programs as they are completed. Will politicians punish the MCC, using what will surely be mixed evaluations as a stick to beat it and an excuse to cut funding? If so, this will have a chilling effect on the movement to improve evaluation of U.S. development programs more broadly. Luckily, a new study of recent experience in Mexico offers some hope that politicians can resist this temptation.

A recent CGD working paper by Miguel Szekely, Toward Results-Based Social Policy Design and Implementation, describes how Mexico has institutionalized evaluation (including impact evaluations) in its policymaking processes. While there is still much to do, the paper shows how far Mexico has progressed in the last 15 years, not just in conducting and publishing evaluations but, more importantly, in insisting on disseminating data and evidence regardless of the potential for short-term political fallout if the results are negative.

Making Development Economics More Scientific: A Young Journal Leads

Researchers who call their work scientific must make it reproducible. That is, other scientists must be able to reproduce the same result in an essentially similar setting. If they can’t, the result gets dumped. When I was a boy, two scientists at the University of Utah claimed to have discovered a way to generate energy cheaply with “cold fusion.” But because other scientists could not reproduce that result, no one today builds energy policy around cold fusion.

“When Will We Ever Learn?” Mexico and Britain Take the Question Seriously

This is a joint post with Christina Droggitis.

This May will mark the fifth anniversary of the final report of CGD’s Evaluation Gap Working Group, “When Will We Ever Learn? Improving Lives through Impact Evaluation.” The report noted a large gap in evidence about whether development programs actually work and recommended creating an independent international collaboration to promote more and better impact evaluations to close this gap. The International Initiative for Impact Evaluation (3ie) was formed as a result of this recommendation. The report also stressed the need for countries, both donors and recipients, to make larger commitments toward high-quality evaluation work. These commitments, it argued, should include supporting 3ie financially, as well as generating and applying knowledge from impact evaluations of their own development programs.
