Last week, CGD hosted a discussion with Alicia Phillips Mandaville and Andria Hayes-Birchler of the Millennium Challenge Corporation about the MCC’s ‘corruption hard hurdle’ – the Corporation’s use of a corruption indicator as a key pass/fail component in selecting which countries are eligible for MCC support. With Jonathan Karver and Casey Dunning, I had made the case against the hard hurdle in a recent CGD paper, Alicia had generously blogged about it, and the event was a wonderful opportunity to discuss some of the issues face to face. Two of the reasons many of us at CGD have a soft spot for the MCC are how open it is to discussing ideas for making the Corporation (even) better, and how incredibly knowledgeable MCC staff are on the issues involved – I said in the meeting that we wrote nothing in the paper that MCC staff didn’t already understand themselves. Lawrence MacDonald recorded a wonkcast with Alicia, Casey and me that summarizes some of the discussion.
The event was well timed, because the US Supreme Court had just helped illustrate some of the considerable difficulties with measuring corruption. The Justices issued a ruling on the constitutionality of campaign finance limits, considerably loosening the constraints such limits set. And between ruling and dissent, one of the things they argued over was the nature of corruption – or at least the corruption they should be worrying about. The majority opinion suggested the only corruption that mattered was ‘quid pro quo,’ while the dissenters favored a broader concern: the general corrupting influence of money in the system. The debate was a perfect illustration of the problem researchers face when they try to put a number on corruption.
‘Quid pro quo’ corruption is what surveyors are trying to measure when they ask questions about informal payments. Pay a government official a bribe and in return they’ll provide you with a service, or a contract, or a place in the school or clinic (or not arrest you, or not deny you something you’re entitled to). But when researchers ask people ‘how corrupt do you think the government is?’ respondents don’t just think about bribe payments. They think about all sorts of ways government officials use public office for private gain, many of which don’t involve a direct payoff at all, and some of which may not even be illegal. Think of members of Congress using inside information to try to make money in the stock market (apparently some try, but most aren’t very good at it). Or politicians steering projects and government contracts to friends in their home districts in hope of future favor but with no explicit deal for a payoff.
That is one reason why there is often a considerable gap between corruption as surveyed in a ‘quid pro quo’ sense and corruption surveyed in a broader perceptions sense. Take the institution over which the Supreme Court presides – the US judiciary. Transparency International surveys Americans about corruption in the US court system: 42% of respondents view the judiciary as corrupt or extremely corrupt, while only 15% of those who’d had contact with the courts reported paying a bribe to a member of the judiciary over the past twelve months.
Of course, that the questions are about different ideas of corruption is only one of many reasons why the survey numbers are what they are, and why different questions suggest different levels (absolute or relative) of corruption: biases, errors, reticence and ignorance all play a huge role. Those biases are even stronger for perceptions questions than for surveys asking directly about bribe payments, and help to explain why perceptions measures appear to end up reflecting a general sense of ‘how well governed is a country.’
That surveys about bribes and perceptions indicators measure different things, and may be differently influenced by bias, error, reticence and ignorance, is a reason why attempts to mash such data together will end up with a fuzzy aggregate. And that’s one reason why the Worldwide Governance Indicator on corruption, based on a mash-up of citizen, firm and ‘expert’ surveys covering experience and perceptions, is carefully acknowledged by its creators to be fuzzy: they publish margins of error around their estimates partly for that reason. Because those margins only capture the bias, error, reticence and ignorance that isn’t correlated across the mashed-up surveys, they aren’t a full indicator of error in the underlying measure, but they are a very good start.
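To see why margins built from cross-survey disagreement can understate the true error, here is a minimal simulation sketch (hypothetical numbers, not the WGI methodology): if every source shares a common bias – reticence, say – then averaging many surveys shrinks only the independent noise, and the shared bias survives the mash-up intact.

```python
# Toy illustration (made-up numbers, not actual WGI data or methods):
# each survey = truth + shared bias + its own independent noise.
# Averaging surveys cancels the independent noise but not the shared bias,
# which cross-survey disagreement can never reveal.
import random

random.seed(0)

TRUE_CORRUPTION = 0.40   # hypothetical "true" level we want to measure
SHARED_BIAS = 0.15       # bias common to every source (e.g. reticence)
N_SURVEYS = 6
N_TRIALS = 10_000

errors_of_average = []
for _ in range(N_TRIALS):
    surveys = [TRUE_CORRUPTION + SHARED_BIAS + random.gauss(0, 0.10)
               for _ in range(N_SURVEYS)]
    avg = sum(surveys) / N_SURVEYS
    errors_of_average.append(avg - TRUE_CORRUPTION)

mean_error = sum(errors_of_average) / N_TRIALS
# The mash-up's average error converges on the shared bias (~0.15),
# even though the surveys agree with each other quite closely.
print(f"average error of the mash-up: {mean_error:.3f}")
```

The spread across the six simulated surveys reflects only the 0.10 independent noise, so a margin of error computed from that spread would look reassuringly tight while the estimate stays 0.15 off the truth.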
And that all makes the control of corruption indicator used by the MCC distinctly different from many of the other indicators used in its scorecard to determine eligibility for Corporation funding. The MCC does have too big a soft spot for perceptions indicators, particularly in the ‘ruling justly’ category (the tail of annual widespread data coverage wagging the dog of MCC incentive effects and aid effectiveness). But the majority of measures in the scorecard concern things like the presence or absence of laws, or rates of child immunization and school enrollment – they aren’t mash-ups.
These other indicators are measured with considerable error – data quality is a serious issue. But the kind of error is different: it isn’t inherent to the indicator, arising because the indicator tries to mash together distinct phenomena. For immunization, even if we measure both with error, we know the numerator (vaccinated kids) and the denominator (all kids). For ‘control of corruption’ – not so much.
That’s what makes the corruption hard hurdle such a mistake. The idea of the MCC scorecard approach is twofold: to incentivize and reward reform, and to increase the effectiveness of MCC support. The hard hurdle takes one of the MCC’s fuzziest indicators and draws a bright shining line on top of it. If your country is in the bottom half of its income group on the control of corruption, no MCC for you. That’s a weak incentive to reform, because it is difficult to know which policy levers move the control of corruption needle, and it is a weak tool for aid effectiveness, because a very fuzzy corruption indicator is (unsurprisingly) weakly related to development outcomes. It is time for the hurdle to go.
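The bright-line problem can be sketched in the same toy spirit (invented numbers, not MCC data or the actual scorecard): put a hard pass/fail threshold at the median of a noisy indicator, and countries whose true scores sit anywhere near the line will flip between passing and failing purely through measurement noise.

```python
# Toy illustration (hypothetical countries and noise levels, not MCC data):
# a hard threshold at the median of a fuzzy indicator makes many countries'
# pass/fail verdicts unstable across repeated measurements.
import random

random.seed(1)

N_COUNTRIES = 40
NOISE_SD = 0.15          # assumed fuzziness of the corruption indicator
THRESHOLD = 0.5          # stand-in for the income-group median
true_scores = [i / N_COUNTRIES for i in range(N_COUNTRIES)]

flips = 0
for score in true_scores:
    verdicts = set()
    for _ in range(100):  # re-measure each country 100 times
        measured = score + random.gauss(0, NOISE_SD)
        verdicts.add(measured >= THRESHOLD)
    if len(verdicts) == 2:  # both "pass" and "fail" observed
        flips += 1

print(f"{flips} of {N_COUNTRIES} countries sometimes pass, sometimes fail")
```

With evenly spread true scores and this much noise, well over half the simulated countries get inconsistent verdicts – which is the sense in which a bright line on a fuzzy indicator rewards luck as much as reform.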