The Biggest Experiment in Evaluation: MCC and Systematic Learning

November 05, 2012

MCC recently published five impact evaluations of farmer training programs – the first of many, because MCC, unlike most other development agencies, is conducting such studies for about 40 percent of its portfolio. I would argue that this makes MCC the biggest experiment in evaluation: an entire agency committed to producing serious impact evaluations on a large share of its operations and publicly disseminating them.

Sarah Jane Staats argued that “It’s Not About the Grade” when she gave MCC a gold star for pursuing rigorous evaluation, being transparent, and (still to be seen) applying the lessons from these studies. Now that I’ve had a chance to read MCC’s brief and some of the studies, I agree.

Congratulations, MCC, for living up to your commitment to rigorous evaluation and transparency! This is a big achievement. The world is full of agencies whose evaluation systems are biased: the positive evaluations get reported, the lukewarm ones get revised, and the negative ones get buried. Even when programs fail to yield the expected benefits, the knowledge you are sharing from these studies is likely to yield huge benefits by influencing the design of future programs and the allocation of future aid money. In other words, the benefits may go far beyond the impact of any one program.

So while we’re eating the party cake, we can debate at least three different questions about these studies. In this blog, I’ll talk about why I think this is a game changer for institutions that finance development projects. In later blogs I’ll talk about what we can learn from these studies about doing impact evaluations and then specifically address what the studies tell us about farmer training programs.

For development institutions, MCC has established a new standard for what it means to be a responsible public agency. They have explicitly stated how they think their programs will affect the chain of events from inputs to impacts; contracted qualified people to test their assumptions; and taken advantage of outside perspectives to generate real debate over their programs. Using MCC’s actions as a model, here is my list of what a responsible agency does:
  • Recognize how little we know and commit to study programs rigorously. The importance of studying an operation depends on a few things, but mostly on the value of the information that can be learned from it. This means focusing research efforts on programs that are unproven and that either represent a large share of your portfolio or are newly hyped and rapidly expanding. Unlike other organizations, MCC recognizes that evidence is weak for most development programs and plans to study about 40 percent of its portfolio. That is certainly a record for a development agency, and possibly for most other public and private organizations as well. By having the temerity to question whether farmer training programs really increase farm incomes and whether they reduce poverty, MCC has uncovered important flaws in project logic and useful lessons for designing better programs.
  • Be transparent about what you’re studying. The simplest way to resist the temptation to hide bad results (which is where we often learn the most) is to be open about what you’re studying from the moment you begin. It is a basic standard of medical research to pre-register clinical trials so that all results – not just the desired ones – are in the public domain. MCC has published its study designs on its website: anyone can see the list of planned studies along with a description of each study’s design.
  • Use independent researchers. Most organizations cannot afford to keep a large staff with specialized skills in impact evaluation. You do need enough in-house expertise to properly design terms of reference and select qualified researchers, and MCC has done both. Those responsible for overseeing its impact evaluation efforts are respected in their fields, and the research groups conducting these recent studies are also top of the line.
  • Take advantage of peer review. Impact evaluation is a science. It progresses through debate, listening to challenges, and responding to critics. MCC commissioned peer reviews of these five studies, which are publicly posted alongside the evaluations. I found William Masters’ comments on the Nicaraguan and Ghanaian projects particularly insightful, but learned something useful from all of them.
  • Publish the underlying data and computer code. All of us make mistakes, and the best way to minimize such errors is to let someone else check your work. This is one reason that publishing data and associated computer code is essential to learning from impact evaluations. CGD is one of many organizations that have adopted a transparency policy for data and computer code. While MCC has not yet posted the datasets for these evaluations, its commitment to do so is included in its Open Government Plan. MCC staff tell me the delay stems from establishing standards to protect privacy, and that the datasets will be posted once that process is complete. I hope that happens soon.
  • Conduct groups of rigorous studies to improve your chances of getting useful information. By doing five different studies on related questions, MCC has a better chance of extracting useful information from the evaluations. Some of the studies provided inconclusive evidence; others were more robust. Some of the findings suggest ways to improve the implementation of programs, and others question the logic behind the interventions. I think this helped MCC produce an initial summary that strikes an excellent balance: instead of exaggerating positive results or underplaying negative findings, MCC was able to consider the robustness, validity, and generalizability of the studies as a whole.
When I first heard about MCC’s plans to undertake systematic evaluation (back in 2005), I was quite skeptical. I had many reasons to expect that political, budgetary, and bureaucratic pressures would dilute the effort. For that reason, I agree with Markus Goldstein when he describes the MCC initiative as “gutsy.” Establishing a systematic process of evaluation like this isn’t easy for a public agency, but now we know it’s possible.

Let’s take a pause to celebrate over the cake and punch. But then, let’s talk about what these studies tell us about the process of doing good impact evaluations, which I’ll write about in part two …

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.