
Will Politicians Punish the MCC for Doing Evaluation Right? Mexico Shows a Better Way.

May 12, 2011
This is a joint post with Christina Droggitis.

The Millennium Challenge Corporation (MCC), a trailblazing U.S. development agency, is doing the right thing by publicly releasing impact evaluations of its programs as they are completed. Will politicians punish the MCC, using what will surely be mixed evaluations as a stick to beat it and an excuse to cut funding? If so, this will have a chilling effect on the movement to improve evaluation of U.S. development programs more broadly. Luckily, a new study of recent experience in Mexico offers some hope that politicians can resist this temptation.

A recent CGD working paper by Miguel Szekely, Toward Results-Based Social Policy Design and Implementation, describes how Mexico has institutionalized evaluation (including impact evaluations) in its policymaking processes. While there is still much to do, the paper shows how far Mexico has progressed in the last 15 years – not just in conducting and publishing evaluations but, more importantly, in insisting on disseminating data and evidence regardless of the potential for short-term political fallout when the results are negative.

Many people know the story of how a positive evaluation of PROGRESA/Oportunidades, Mexico's premier anti-poverty program, helped it to survive and expand. Fewer have heard of how PROGRESA's nutritional supplement was found wanting and had to be reformulated based on evidence comparing it with another program (LICONSA). Szekely also describes the difficulties faced by a newly created independent evaluation office (CONEVAL):

All evaluations are made public, and their presentation since 2006, which was the first year of formal operation . . . has caused intensive debate and, most of the time, criticism and discrediting of government action in the media. The process of publication has commonly generated tension and confrontation with other government offices responsible for different programs, especially since the media still tend to highlight whatever negative element arises from the analysis, while ignoring any positive impact or achievement. Tensions reached the highest levels when CONEVAL—also in charge of publishing the official poverty statistics since 2005—released poverty figures for 2008 revealing soaring poverty levels.

To its credit, the government has stood behind CONEVAL and continued to publish results.

In the U.S., impact evaluations had become increasingly rare in the field of development assistance, but this should change over the next few years as the seeds planted by the MCC begin to bear fruit. The MCC's focus on results led it to adopt a model that requires reporting on a variety of input and output measures throughout the course of any particular program. In addition, about half of the activities financed by the MCC in its compacts (programs with fully eligible countries) are supposed to be the subject of rigorous impact evaluations, measuring outcomes such as school completion and household income. (USAID is embarking on a similar journey with its recently approved evaluation policy and will face similar issues in the future.)

So far, impact evaluations have been completed for three Threshold Programs – 2-3 year agreements focused on the institutional reforms a country needs to become eligible for an MCC compact. Results from these studies showed impact, but not necessarily on specific MCC indicators. A recent paper by the MCC's Sarah Lucas explains the principles the agency is applying to evaluation. It reports, for example, that while the evaluation of the Burkina Faso threshold program showed increased school enrollment and higher test scores, it was unclear whether the newly constructed schools (the MCC intervention) were the most critical factor in the project's success. In some cases, the threshold programs were simply too short to measure the impact they set out to achieve. MCC published the lessons it took from these studies and how the information will affect its future threshold programs.

So what can we expect from the forthcoming compact evaluations? What happens if the results are inconclusive, or worse? The MCC is entering a vulnerable period in the next fiscal year, when its first impact evaluations of compact programs are to be released (Honduras is scheduled for September 2012). These first evaluations will probably show mixed results while also providing valuable information about how to improve the MCC's programs. The risk, however, is that the small number of studies will be subjected to excessive scrutiny, especially in the current highly charged political atmosphere in Washington.

As it stands, the MCC is already facing budget constraints (see Sarah Jane Staats's blog on the subject). While the MCC was intended to be a $5 billion/year program, the highest amount allocated so far has been $1.75 billion in FY2007. The recent budget wars in Congress cut $380 million from the MCC's $1.28 billion FY2011 request. In short, the MCC is currently facing tough challenges in funding its projects. How might mixed results from the impact evaluations affect future appropriations discussions?

Speaking openly about failures is how we learn. As the MCC starts publishing its results, good or bad, Congress and the Executive have a choice. They can praise the MCC for its openness and hold it up as a model for other U.S. development agencies, showing how we can learn from successes and failures alike. Alternatively, they can punish the MCC for its candor by cutting funds, sending a clear signal that old-style bureaucratic self-protection is the way to go.

It is easy to publish policy evidence when it is positive, but can countries stand behind evidence when it is unfavorable? This is the quandary facing the U.S. as it anticipates a new wave of evidence on development policies. Interestingly, Mexico has already faced this question with regard to its domestic social programs and come out favorably. Will U.S. politicians prove that they have the same courage?

