MCC recently published five impact evaluations on farmer training programs – the first of many, because MCC, unlike most other development agencies, is conducting such studies for about 40 percent of its portfolio. I would argue that this makes MCC the biggest experiment in evaluation: an entire agency committed to seriously producing impact evaluations on a large share of its operations and publicly disseminating them. Sarah Jane Staats argued that “It’s Not About the Grade” when she gave MCC a gold star for pursuing rigorous evaluation, being transparent, and (still to be seen) applying the lessons from these studies. Now that I’ve had a chance to read MCC’s brief and some of the studies, I agree.
- Recognize how little we know and commit to study programs rigorously. The importance of studying an operation depends on a few things, but mostly on the value of the information that can be learned from it. This means focusing research efforts on programs that are unproven and that either represent a large share of your portfolio or are newly hyped and rapidly expanding. Unlike other organizations, MCC recognizes that evidence is weak for most development programs and plans to study about 40 percent of its portfolio. This is certainly a record for a development agency, possibly even for most other public and private organizations. By having the temerity to question whether farmer training programs really increase farm incomes and whether they reduce poverty, MCC has uncovered important flaws in project logic and useful lessons for designing better programs.
- Be transparent about what you’re studying. The simplest way to resist the temptation to hide bad results (which is where we often learn the most) is to be open about what you’re studying from the moment you begin. It is a basic standard of medical research to pre-register any clinical trials so that all results – not just the desired ones – are in the public domain. MCC has published the study designs on their website. Anyone can see the list of planned studies along with descriptions of the study design.
- Use independent researchers. Most organizations cannot afford a sufficiently large staff with specialized skills in impact evaluation. You do need enough staff with expertise to properly design terms of reference and select qualified researchers. MCC has done both. Those responsible for overseeing the impact evaluation efforts are respected in their fields, and the research groups conducting these recent studies are also top of the line.
- Take advantage of peer review. Impact evaluation is a science. It progresses through debate, listening to challenges, and responding to critics. MCC commissioned peer reviews of these five studies which are publicly posted alongside the evaluations. I found William Masters’ comments on the Nicaraguan and Ghanaian projects particularly insightful, but learned something useful from all of them.
- Publish the underlying data and computer code. All of us make mistakes and the best way to minimize such errors is to let someone else check your work. This is one reason that publishing data and associated computer code is essential to learning from impact evaluations. CGD is one of many organizations that have adopted a transparency policy for data and computer code. While MCC has not yet posted the datasets for these evaluations, its commitment to do so is included in their Open Government Plan. MCC staff inform me that they are delayed in establishing standards regarding privacy and will post datasets once that process is complete. I hope that happens soon.
- Groups of rigorous studies give you a better chance of getting useful information. By doing five different studies on related questions, MCC improved its chances of extracting useful information from the evaluations. Some of the studies provided inconclusive evidence; others were more robust. Some of the findings suggest ways to improve program implementation, and others question the logic behind the interventions. I think this helped MCC produce an initial summary that strikes an excellent balance. Instead of exaggerating positive results or underplaying negative findings, MCC was able to consider the robustness, validity, and generalizability of the studies as a whole.
Disclaimer
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.