“Conditionality” in foreign aid often gets a bad rap, but are there circumstances in which it works?  The Millennium Challenge Corporation (MCC) provides large-scale development assistance to selected poor but well-governed countries, chosen primarily based on their performance on a set of publicly available policy indicators (a type of ex ante conditionality).  MCC’s selection system is touted as an incentive for countries to pursue policy reform in order to gain MCC eligibility, a phenomenon nicknamed the “MCC effect.”  However, there is limited evidence on whether and in what contexts the MCC effect works.  Former CGD visiting research associate Bradley Parks and his colleague Zachary Rice share with CGD’s MCA Monitor the results of a new global survey, which concludes that the MCC effect does exist and that developing country policymakers and practitioners view MCC’s approach favorably.

In their new MCA Monitor Analysis, Does the “MCC Effect” Exist?  Results from the 2012 MCA Stakeholder Survey, Parks and Rice, both with the College of William and Mary’s Institute for the Theory and Practice of International Relations, share the findings of a 100-country survey of more than 600 development practitioners and policymakers who are familiar with MCC and/or developing country reform efforts.  Their approach and findings are an important contribution to a relatively small body of evidence on the MCC effect (for example, here, here and here).  In short, the authors find:

  • MCC’s eligibility criteria are influential in encouraging policy and institutional reform in developing countries; indeed, they are among the more influential of donors’ various incentive tools.  The incentive effect, however, is not consistent across countries or policy areas.  Countries where the rules are not well understood, or where policymakers are skeptical about the criteria’s role (versus the role of politics) in determining eligibility, are less responsive to MCC’s incentive.  And while the eligibility criteria are reportedly particularly influential in policy areas like fiscal policy, business registration, and control of corruption, they are substantially less influential in areas like democratic rights.
  • MCC threshold and compact programs tend to be perceived as successful more often than not.  Threshold programs are thought to successfully influence policy reforms (their intended programmatic result), and the incentive of compact eligibility is often seen as a motivating factor for threshold program success.
  • Policymakers and practitioners generally welcome MCC’s use of ex ante conditionality and selectivity, in contrast with the skepticism some researchers have expressed about this type of approach.

I’m excited to see the new information that Parks and Rice offer about where, why, how—and how well—the MCC effect really works.  However, it’s important to keep in mind (as the authors note) that the findings do not represent conclusive, definitive evidence; limitations of the sample and the perception-based nature of the data are important considerations when interpreting the results.  On the sample, the target population was essentially people with known (or likely) interactions with MCC/USG on eligibility issues, involvement in MCC programming, or other knowledge of MCC.  In many ways, these are the right people to target since they can provide an informed perspective about their experiences; however, they don’t necessarily represent the level of MCC awareness or perception among a country’s policymakers in general.  Of those targeted, about 30 percent responded.  This isn’t bad for surveys of this type, but since we can’t assume that respondents and non-respondents are fundamentally the same on average, non-response may be another source of bias (for instance, if those with more knowledge of, or stronger feelings about, MCC were more likely to respond).

There are also limits to perception-based data, which are inherently subjective.  For instance, a survey respondent saying that MCC’s eligibility criteria provided an incentive to reform does not tell us that a reform was actually undertaken, how effective it was, or the relative weight of the MCC incentive among the surely multiple considerations that led to the decision to reform.  Similarly, perceptions that an MCC-funded program was successful cannot substitute for the findings of an independent evaluation; in fact, while the majority of respondents familiar with threshold programs considered them successful (albeit with no specific definition of what “successful” means), the handful of threshold program evaluations done so far suggest more mixed results (see here, here, here and here).

That all said, the survey findings do provide insight into the viewpoints of many of the key people MCC is specifically trying to influence.  MCC can be proud that many developing country policymakers claim to like MCC’s approach and feel it helps spur policy reform.  The survey also gives MCC useful information about where the MCC effect seems strongest and where it could be stronger.  Building on those findings, the authors suggest how MCC might increase the pull of its incentive effect.  One way is to raise awareness of MCC eligibility.  I can see some relatively simple administrative ways MCC could make its rules more accessible—for instance, by publishing its selection criteria report and associated materials in multiple languages (this information is posted online annually, but only in English).

The authors also suggest that MCC should be more transparent about the justification for eligibility (and non-eligibility) decisions.  MCC has made substantial strides in this area—for example, by publishing detail on the types of supplemental information the Board of Directors takes into account, and by making explicit, through the use of a new democracy “hard hurdle,” the Board’s historically revealed preference for passing up non-democratic countries that otherwise meet the criteria.  However, MCC still provides little specific detail on why countries that perform well on the indicators are not selected.  There are, of course, a number of valid reasons (e.g., sensitivity to bilateral relationships) why a USG agency might choose not to disclose this kind of information, which the authors acknowledge, but—according to the survey results—the tradeoff may be a somewhat less effective incentive.

To dive into the study further, the full survey report, Measuring the Policy Influence of the Millennium Challenge Corporation: A Survey-Based Approach, can be found here.

Disclaimer

CGD blog posts reflect the views of the authors drawing on prior research and experience in their areas of expertise. CGD does not take institutional positions.