It’s Not About the Grade: MCC’s First Five Impact Evaluations

October 22, 2012

It’s not about the grade, it’s about the learning, say Millennium Challenge Corporation (MCC) officials as they prepare to release the US government’s first five* independent development impact evaluations tomorrow. Results will be mixed. They should be. But if the MCC and other development policymakers pay attention to what the impact evaluations tell them—and the MCC keeps its commitment to independent, rigorous evaluation across the rest of its programs—it will be really good news.

MCC sets the transparency and evaluation standard higher than any other US development agency (USAID has a new policy to catch up). And MCC impact evaluations go beyond typical performance evaluations to test—with control groups and counterfactuals—whether their activities directly increase incomes. The MCC risks being unfairly compared to organizations that aren’t as rigorous and transparent, but is forging ahead to gather and share real evidence. That means good results (incomes up!) and bad (incomes not up, can’t attribute to MCC, or can’t measure).

The first five MCC impact evaluations cover farmer training activities in MCC compacts with Armenia, El Salvador, Ghana, Honduras, and Nicaragua. Farmer training is just 13 percent of MCC investments in these five countries and 2 percent of MCC’s global compact portfolio. But the lessons will matter for the MCC and other donor programs with similar investments, such as USAID’s Feed the Future initiative. And these first five MCC impact evaluations already double the stock of evidence globally on farmer training activities.

According to preliminary conversations with MCCers, the impact evaluations:
  • Show the MCC met or exceeded projected farmer training outputs and outcomes (e.g. number of farmers trained and increased crop yields);
  • Detect increases in farm income in some cases;
  • Do not yet detect statistically significant increases in household income as a direct result of MCC-funded farmer training activities; and
  • Include two evaluations that themselves failed (i.e., evaluators could not measure the elements required to make judgments).
I share the MCC’s view that impact evaluations aren’t about pass or fail for specific projects. Development programs—and the MCC compacts—comprise multiple complex activities. Some may turn out well, others may flop, and the measure of a strong organization is that it wants to know the difference and learns and improves when it finds out.

As such, the MCC and its friends shouldn’t disproportionately emphasize specific successes, nor should critics focus solely on investments that don’t meet projected targets. But I’m not willing to say the grades don’t matter at all. It’s a big deal that the MCC is trying to gather rigorous data—and get some grades—for its investments. It’s just that how the MCC uses what it learns from the evidence matters more than any individual program result. (A recent Engineers Without Borders Failure Report and the World Bank’s FailFaires push this point.) The MCC is already talking to the Hill, NGOs, and US development policymakers about the findings, which is a very good first step.

I’ll also be looking for how the MCC uses this new evidence to change future decision-making. How will the MCC balance the pressure to keep program implementation moving against the sequencing required to measure impact and learn something at the end of the program? Were initial estimates of MCC impacts too optimistic, and if so, why? Should the size or scope of future programs change? Will future evaluations be designed to measure impact 5 or even 10 years after the compact ends? And will the MCC keep its commitment to conduct and share independent, rigorous evaluation across the rest of its programs, including in countries like Madagascar and Mali, where political coups forced the MCC to halt compacts but something can still be learned from what was invested?

The United States should always aim to get the biggest bang for its development buck, but to date there has been little, if any, rigorous data on what works and what doesn’t. The MCC gets a gold star for its courage to conduct and share the first real evidence—and the grades—for its development investments so everyone can do better. We should hope for, and keep pressing the MCC and other US development agencies to deliver, more of this kind of good bad news to inform smart development policymaking, especially in the tough budget cycles ahead.

*CORRECTION: My colleagues have pointed me to two recent USAID evaluations—one on social insurance in Nicaragua and one on Zambia’s production, finance, and improved technology project—that would qualify as independent impact evaluations, plus three others that use rigorous qualitative methods (if not full control groups and counterfactuals). All are up on USAID’s evaluation website. I was also reminded that USAID did some independent impact evaluations in the 1970s and 1980s. My apologies for the oversight, and kudos to USAID for having a few recent impact evaluations already under its belt and committing to produce many more. It seems to me there is probably a good story here about USAID’s history with impact evaluations, including why, if the agency was doing them 30 years ago, there were so few in recent years. I would welcome thoughts and comments from readers!
