Yesterday I was excited to see that the UK Independent Commission for Aid Impact (ICAI) had a report out on the UK Department for International Development's (DFID's) anticorruption activities. It was a great topic for independent analysis by a group that didn't need to worry about the politically correct thing to say, and could get beyond sloganeering ('zero tolerance for corruption') to a careful, evidence-based analysis of how corruption affects development, what role donors can play, and how DFID's existing portfolio stacks up. My excitement didn't last long: this report is not that analysis. I feel like a kid who got empty wrappers in his trick-or-treat bag.
DFID is a major funder of CGD's research, including our work on anticorruption (alongside work on results-based aid, food security, global health, and technology). I'm grateful for that funding, especially because I personally disagree with some of what the Department does on the topic. I'm skeptical about the more extreme estimates of how harmful corruption has been for development and aid effectiveness; corruption is largely a symptom of poor governance, which is the bigger underlying problem. And because they're fixated on bribe payments, donors approach corruption thinking far too much about receipts (auditing and procurement tracking) and not nearly enough about results (more people connected to electricity, fewer kids dying).
But I didn't learn much from the report, good or bad, about DFID's impact on corruption, because ICAI's standard for what counts as evidence is so inconsistent between what it demands of DFID and what it accepts for itself. For example, ICAI suggests that DFID can't show robust links between an initiative like the Extractive Industries Transparency Initiative (EITI) and reduced poverty. I agree we need to try harder to build the evidence base for or against transparency, and that the evidence base around anticorruption efforts is poor overall.
But ICAI can't attack DFID for inadequate evidence on the one hand and then turn around and use similarly inadequate evidence to declare failure and success on the other. For example, an ICAI survey of participants is treated as enough evidence to declare a DFID-funded training program for prosecutors in Nigeria a success. But if you survey civil-society participants about EITI, 90 percent see it as successful or very successful. ICAI is right to imply that this doesn't prove EITI's impact. It is wrong to suggest its own survey is any better evidence of the justice program's impact.
Again, the report suggests that DFID support for UK prosecutions related to corruption in developing countries has 'done better' than global initiatives because the Met and the City of London Police have secured some prosecutions. But where is the evidence, which ICAI is (rightly) so keen to see, that those prosecutions have had any impact on poor people in developing countries?
(And while DFID's support for the Metropolitan Police may well be valuable, it sits a little oddly alongside cut-and-dried statements such as "it is highly problematic for DFID to support government systems and structures that are known to be corrupt." The police officers involved in the efforts to retrieve money stolen from some of the world's poorest countries are on the side of the angels. But given the Met's recent struggles with corruption, perhaps that would have been a good moment for some nuance to the zero-tolerance mantra.)
Or what about a 'failure' suggested by the ICAI report? The authors surveyed 300 residents in each of five police districts in Nigeria to see whether a DFID-funded 'model police station' program had reduced experiences of corruption. They asked respondents whether the police at the local station asked for bribes more or less often than two years ago. The average answer suggested more bribes. Presto: a failed program.
On the one hand, this is a case where ICAI tried to gather new systematic evidence on program performance, and credit for that is due. On the other, it was a poorly thought-through effort. Think for a moment whether you could give an informed answer to the question 'are bribes more or less common than two years ago at your local police station' where you live. Given that people's answers to corruption questions like these are biased by factors including age, ethnicity, and political opinions, the nonrandom error in this survey will likely swamp the 'signal' about actual levels of corruption. Added to those problems, the districts included in the survey weren't randomly selected, including the one 'control' district where the program wasn't active. According to the report, there was no statistically significant difference between answers in the control district and the other four. Given the quality of the evaluation approach, I would have chalked it up to freakish luck if there had been.
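To make the signal-versus-noise worry concrete, here is a minimal simulation sketch with entirely invented numbers (none of them come from the ICAI report). It assumes a five-point 'more/less bribes' perception scale and gives each hand-picked district its own response bias; a district-level bias about the size of the true effect is enough to make a program that genuinely cut bribery look like a failure, and can leave no detectable treated-versus-control difference.

```python
# A minimal sketch with invented numbers (nothing here comes from the
# ICAI report): why district-level response bias can swamp a real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300  # respondents per district, matching the survey's sample size

# Suppose the program truly reduced bribe-seeking: the honest average
# answer in treated districts would be -0.3 on a -2..+2 scale
# ("much less" to "much more" bribes than two years ago).
TRUE_EFFECT = {"treated": -0.3, "control": 0.0}

def survey(kind, district_bias):
    """Simulate one district's answers. district_bias is nonrandom
    error (age mix, ethnicity, politics) that does not average out."""
    raw = TRUE_EFFECT[kind] + district_bias + rng.normal(0, 1.2, n)
    return np.clip(np.round(raw), -2, 2)

# Districts weren't randomly chosen, so each carries its own bias --
# here roughly the same size as the true effect itself.
treated = [survey("treated", b) for b in (0.7, 0.5, 0.6, 0.6)]
control = survey("control", 0.3)

all_treated = np.concatenate(treated)
print(f"average treated answer: {all_treated.mean():+.2f}  "
      "(positive = 'more bribes', so the program 'failed')")
print(f"average control answer: {control.mean():+.2f}")
t, p = stats.ttest_ind(all_treated, control)
print(f"treated vs control: t = {t:.2f}, p = {p:.3f}")
```

The point of the sketch is only this: when districts are hand-picked and answers carry group-level bias, a positive average answer and a null treated-versus-control comparison tell you almost nothing, in either direction, about whether the program worked.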
Aid programs are really hard to do, and hard to evaluate, too. We don't know nearly enough about what works where and when. More quality independent analysis of aid is a real priority, especially in a country like the United Kingdom, which is so generous in its aid contributions. Sometimes ICAI's work really sheds light on DFID's performance. Other times, it doesn't do so well. This report is right that we don't know nearly enough about what works and what doesn't in anticorruption, but that includes the cases where ICAI still felt able to declare success or failure. The Commission has generated a lot of heat but little light, and it has failed to add significantly to our evidence base. That's a wasted opportunity.