A new report from AidData and William and Mary is out, and some of its findings raise questions about the “MCC Effect,” the claim that countries enact policy reforms in order to become more competitive for MCC funding. The report explores how external assessments from donors, international organizations, and NGOs – things like the MDGs, the World Bank’s Doing Business report, and MCC’s own country eligibility criteria and scorecards – influence reform efforts. In short, the study finds that MCC’s eligibility process is not very influential. While this finding may seem disappointing, there are still real, concrete examples of MCC’s country selection system contributing to reform conversations, and these remain important even in the absence of strong evidence of a systematic effect.
The new report is based on a survey of over 40,000 government officials, donor staff, and members of civil society organizations (CSOs) and the private sector in developing countries. Among other things, it asks respondents how influential specific external assessments were on a government’s decision to pursue reforms to address key problems. Out of 101 assessments, MCC’s scorecards ranked a mere 89th. That doesn’t look great, but it shouldn’t be terribly concerning. Here’s why.
- First and foremost, the MCC Effect is a “bonus.” In my opinion, if we only ever get anecdotal evidence of the MCC Effect, that’s OK. Normally, I am dismayed by US aid agencies’ tendency to rely too heavily on anecdotes to claim “success” (as are some of my colleagues). These nice, feel-good stories tend to gloss over the much more important questions of causality (was the intervention responsible for generating the result we see?) and value for money (do the benefits generated exceed the cost of implementation?). This matters less for MCC’s scorecards. First, incentivizing reform isn’t the only – nor, arguably, the principal – objective of the scorecards; they’re a tool to help MCC pick partner countries. Second, their cost is essentially $0 (in program funds), so any policy reform benefit to which they contribute exceeds the cost, even if that contribution is marginal.
- The scorecards may not influence “most” people … but they may influence the “right” people. Interestingly, a 2013 report by the same authors found that MCC’s scorecards were among the most influential external assessments. Why are the new findings so different? Hard to say, but it’s relevant that the 2013 survey was smaller, had a much more targeted sample (just individuals knowledgeable about MCC), and asked about far fewer assessments. In general, larger, more representative samples yield more accurate results. On the other hand, in terms of what matters for getting reforms done, is it important that MCC’s scorecards influence a wide range of people (the second survey’s sample)? Or do they just need to get the attention of a few key people (the first survey’s sample), provided those people have the influence to push reforms through?
- MCC ranks relatively well in certain areas. MCC ranks among the top three most influential assessments in the policy areas of family and gender, land, and infrastructure. It also ranks among the top three for influence on “informality,” defined as a “disconnect between formal policies or institutions and informal administrative, cultural, or economic practices or norms.”
- MCC uses some highly influential measures on its scorecards. MCC’s scorecards compile indicators drawn from third-party assessments, many of which rank as quite influential (e.g., the IMF’s Article IV Report, the source of the scorecards’ Fiscal Policy and Inflation indicators, ranks 4th). In some instances, MCC’s use of these indicators on its scorecard may help reinforce the attention governments pay to the underlying assessment.
- Results must be interpreted with the methodology in mind. First, only around 9 to 10 percent of survey recipients completed the substantive questions. This low response rate, while comparable to other elite surveys, introduces a risk of bias, since respondents and non-respondents may differ, on average, in ways that affect their answers. Second, perception surveys are good for getting at awareness and attitudes, but they’re less well suited to establishing facts. A respondent saying that a particular assessment influenced a reform (or even that the reform was successful) tells us little about whether the reform was actually undertaken and whether it yielded the desired outcomes.
All that said, MCC can and should react to the findings of the study in a few key ways:
- Talk about the MCC Effect judiciously. By all means, keep talking about it. But take care not to describe it as a sweeping phenomenon, since the evidence that it is one remains inconclusive.
- Talk more about the other ways MCC influences reform. MCC talks a lot about the MCC Effect, but it’s not the agency’s most important channel of policy influence — especially for the policies that matter most for growth. MCC negotiates conditions into its country programs whereby partner governments agree to undertake policy, regulatory, and/or administrative reforms in the growth-constraining areas that MCC’s investments target. Unfortunately, there’s little public information about these conditions and countries’ track record of compliance, except for some anecdotal successes. Because these reforms are critical to the success and sustainability of the MCC investment, reporting should be more systematic and transparent.
- Don’t let unproven assumptions about the MCC Effect influence the selection system. When thinking about potential future tweaks to the scorecards, don’t get hung up on “but how will this influence the MCC Effect?” since the effect likely works somewhat idiosyncratically. But do refer to the new report for interesting data on why external assessments influence reforms and which attributes make them more likely to be influential.