The United Kingdom has been a stalwart funder and innovator in foreign assistance for almost 20 years. In 2011, it created the Independent Commission for Aid Impact (ICAI) to report to Parliament on the country’s growing aid portfolio. ICAI is a quango in Brit-speak – a quasi-autonomous non-governmental organization – with a four-year mandate that is undergoing review this year. Recently, I took a look at the reports it has produced to see whether the organization is fulfilling its role in holding the country’s overseas development aid programs accountable. I found one fascinating report that shows what ICAI could be doing and many more that made me wonder whether ICAI is duplicating work already within the purview of the Department for International Development (DFID), the agency that accounts for most of the UK’s foreign assistance programs.
The world of impact evaluation has changed dramatically over the last ten years, and I’ve been worried that political and bureaucratic pressures to water down evaluation systems would erode this wave of commitment to study, learn from, and respond to findings on aid programs. This was a key concern in CGD’s report When Will We Ever Learn? So in 2011, when I first heard about ICAI, I wrote: “…by establishing ICAI, the UK has gone further than most countries in establishing independent external oversight for aid programs, thereby raising the visibility of evaluation work and the standards of evidence.”
One of the first ICAI reports that I read seemed to fulfill this goal. Two years after DFID completed an impact evaluation of the Western Orissa Rural Livelihoods Project (an anti-poverty program in India), ICAI commissioned researchers to return and assess the quality of the evaluation, the reliability of the information, and the sustainability of the results. This study (in the newly renamed Western Odisha) was a brilliant way to check on the project itself (did poverty really decline?) as well as to provide insights into the way DFID conducts the impact evaluations that should serve as the basis for learning and adaptation. In this case, the researchers found delays in the initial baseline survey, problems with the quality of the questionnaires, and errors in the associated cost-benefit analysis that underestimated the program’s likely return.
The report was quite candid about these findings, but to my surprise, none of these points were highlighted in the report’s three recommendations. Instead, the ICAI report concludes with three broad recommendations that are neither related to evaluation nor specific to anti-poverty programs. It calls for better long-term planning, attention to sustainability in project design, and more transparency. These are perfectly reasonable admonitions. But in practical terms, what do they mean? In retrospect, projects always look like they should have planned for problems in implementation and sustainability. Furthermore, how would you know if DFID were implementing those recommendations?
By contrast, the evidence this team collected would have allowed ICAI to make much more specific recommendations about the rigor and use of DFID’s evaluations. Some of the findings could be tracked as a way of improving the learning cycle. One simple point noted in the report is the need to conduct baseline surveys in a timely fashion – this would be a powerful and relatively easy-to-monitor recommendation.
The ICAI report on a health program in Zimbabwe – which describes its methodology as “desk-based research and a two-week visit to Zimbabwe in September 2011” – is more typical of the studies that assess individual projects. In fact, of the 14 studies on the ICAI website that look at individual projects, 12 relied on short visits, interviews, and secondary information, while only two involved significant primary data collection (besides the Western Odisha study, the other relied on a cross-section of interviews in Nigeria). The remaining seven studies on ICAI’s site look at DFID’s relationships with multilateral agencies (the World Bank, the Asian Development Bank, EU Aid, UNDP, and UNICEF), based on literature reviews, short visits, and interviews; explain ICAI’s approach to Value for Money; and assess DFID’s strategy for reducing corruption.
Some of these studies are nicely done, but most of them look like the kinds of operational reviews and quick project completion reports that are common within aid agencies, and it isn’t clear why an independent commission is required to do them. Meanwhile, ICAI has generated a stream of recommendations, which I’ve been told has led DFID to produce a voluminous stream of new guidance – with little sense of whether it is read, is used, or has much effect.
What an independent commission really can do is hold an aid agency accountable for having a strong evaluation and learning system in place. Is good evidence generated, and is it being used? The Western Odisha study was explicit about improvements needed in evaluation – including providing adequate resources and time relative to the study goals – and in learning – noting that lessons from this program had been applied across India but had not informed similar programs in other countries.
Other countries should learn from the UK’s experience and think about setting up a quango of their own – but make sure it’s focused on the right things. DFID is mandated to evaluate and learn. Instead of duplicating that function, ICAI could hold DFID to account for doing it better.
Thanks to Ted Collins for research assistance on this blog.