Is UK Aid Being Spent Properly? We Just Don’t Know

August 15, 2017

In late July, the UK’s National Audit Office (NAO) published a progress report on Her Majesty’s Government spending that found that in 2015, a fifth of the £12.1 billion the country spent on aid was committed through government departments and cross-government funds other than the Department for International Development (DFID), the UK’s aid agency. The report also found that “no part of government has responsibility for checking on progress in implementing the UK Aid Strategy or for assessing the overall effectiveness and coherence of ODA expenditure.” [emphasis my own]

About a week later, on August 1, I joined the Center for Global Development as a Senior Fellow and Director for Global Health Policy. I am very excited about my new role, and, since my base will be the London offices of CGD Europe, I am also rather hopeful that I will be able to contribute to the current thinking on improving the effectiveness of aid budgets, including the one spent by the UK government.

With that goal, some colleagues and I recently proposed the creation of an independent public body that would assess the value for money of overseas development assistance (a “NICE” for development). We have proposed a “what works” centre for aid that would place the UK at the forefront of evidence-informed policymaking and reassure skeptics that tax monies are “properly spent.” But as the NAO report points out, ascertaining what counts as “properly” is becoming increasingly tough in an ever-more-fragmented environment, where the Ministry of Defence, the Foreign Office, and the National Institute for Health Research under the Department of Health (to name but a few) all receive aid money to spend on development projects.

Here are some recommendations for how we might assess and improve impact in a fragmented government landscape:

Getting results out of a fragmented, incoherent, and unaccountable aid structure

In its July report, the NAO describes the limited capacity of many government departments to absorb and spend (effectively or not) the aid allocated to them. Five of 11 departments (another two were added in 2016, bringing the total to 13) spent over 50 percent of their aid budget in the last quarter of the calendar year, whilst DFID remains the “spender of last resort,” stepping in to avoid missing the 0.7 percent target. Perhaps in an attempt to accelerate spending, roughly 20 percent of UK aid is paid out in the form of promissory notes, a more formal version of IOUs, with the amount of uncashed IOUs doubling between 2014 and 2016 to some £8.7 billion as of December 2016 (the equivalent of 72 percent of the annual UK aid allocation).

Beyond the 0.7 percent spending target itself, impact is hard (impossible?) to measure, given that three of the government’s four strategic aid objectives (resilience, global peace, and global prosperity) lack measurable and attributable outcome indicators, perhaps not surprisingly. And though each department is, in theory, responsible for ensuring “value for money,” there is no guidance on how to do so coherently across a fairly wide range of departments, which even includes Work and Pensions; Culture, Media and Sport; and Her Majesty’s Revenue and Customs (HMRC). Interestingly, HMRC saw an almost five-fold increase in its aid allocation between 2015 and 2016, the largest of any government department. And now that the aid transparency index no longer applies to organisations spending less than $1 billion, thereby excluding nine of the 11 UK government departments spending ODA, assessing impact or value for money becomes even harder.

The government’s intention to spend more ODA through the CDC, DFID’s independent investment arm, also poses challenges for assessing effectiveness. The plan is to raise CDC’s allocation to £6 billion, possibly rising to £12 billion, for investment in the private sector in developing countries in order to accelerate growth and create jobs. Its impact, defined as “a lasting difference to people’s lives in some of the world’s poorest places” according to another NAO report, is not easy to describe, let alone evaluate.

Finally, lines of accountability are blurred. By law, DFID must spend 0.7 percent of GNI on aid. This can be measured and is met, but that is where performance assessment stops as far as non-DFID spending conduits are concerned. The Treasury passes approximately 0.14 percent of the country’s GNI directly to various government departments and three cross-departmental funds with ODA responsibility. The latter have no lines of accountability to DFID or to any other single government department or institution with overall responsibility for assessing the coherence, effectiveness, and value for money of this spend. Even if such a responsible party did exist, given that individual departments and funds are currently not required to “report separately and specifically on how they have spent their ODA budget,” any judgement on how ODA pounds are spent, and to what effect, would simply not be possible.

Assessing the “likelihood of future [aid] effectiveness”

And though assessment of impact for the whole of the ODA envelope is not possible (for now), there have been attempts to look at individual components of it. The Independent Commission for Aid Impact (ICAI) published a report earlier this year on one of the three major cross-government funds for ODA, the Prosperity Fund (the other two are the Conflict, Stability and Security Fund and the Empowerment Fund). This was described as an assessment of “likelihood of future effectiveness,” as the £1.3 billion fund is not yet operational, and so it focused on the procedural aspects of the fund’s design and operation so far. Amongst other things, the Commission called for more transparency, including results-based indicators and performance metrics to be developed so that the fund’s work can be subject to assessment of impact and value for money. Defining “prosperity” in order to measure it may prove tricky, as the fund is charged with the dual task of reducing poverty (primary objective) and creating business opportunities for (mostly) UK and overseas firms (secondary objective). So, in addition to metrics, some guidance on how to trade off these two objectives, when necessary, is likely to be required.

One would hope that another of the government’s ODA pots of money, the Global Challenges Research Fund, will also undergo some form of evaluation. It is now spending £1.5 billion through various Research Councils, including the Arts and Humanities, Engineering and Physical Sciences, and Economic and Social Research Councils.

What makes for effective aid: start with process

Despite this gloomy appraisal, the new environment in the UK, with multiple agents entering the development space, offers, perhaps, an opportunity to test the effectiveness of engaging non-experts in development and even to prove the critics wrong. Here are a few thoughts on what is not helping as things stand and what could be achieved, whilst acknowledging that attribution and causality are very hard to show in complex real-world interventions. What I propose below is therefore more about process: laying out the infrastructure, including evidence generation mechanisms, a strong institutional framework for assessing value for money, and sustained investment in building the UK’s own capacity for delivering aid, all within transparent accountability structures and with a coherent vision of what success looks like.

Fund pragmatic research and empower learning healthcare systems

We know from high-income settings that pragmatic research can improve the quality and efficiency of care (see here for an example from paediatric care in the United States and the Academic Health Science Networks model in the English NHS). This is especially the case when research addresses questions that matter to the people who pay for and use healthcare services (see here for an example from the UK of an inclusive approach to setting research priorities). Investing in research can improve health outcomes by creating learning health systems in aid recipient countries, too. For this to work, research funds must go to building capacity where it is most lacking, and less to major UK universities’ estates and facilities overheads. The first NIHR £60 million ODA call, made just before Christmas 2016, offered no guidance (importantly, no ceiling) on the amount of overhead UK universities could claim, but made it clear that low- and middle-income country (LMIC) partners were allowed none.

Launch a NICE for aid

Both the NAO and the ICAI reports deplored the lack of measurable indicators of impact, whether in the specification of the four objectives of the government’s strategy or at the level of individual spending departments or funding agencies. Meaningful impact assessment requires both measurable performance indicators (including ones of value for money) and the encouragement of pragmatic evidence generation, especially in low-income country settings. A NICE for aid could help with both, though it would have to improve on the UK version, which has had limited leverage when it comes to evidence generation and has focused more on individual technologies than on system capacity, distribution, and affordability, all of which are central to the realities of LMICs. Perhaps the government should require that some of the £1.5 billion allocated to the UK Research Councils be properly spent in low-income countries to build capacity and data collection systems. The intent would be to enable a rigorous assessment of the value for money of specific interventions as well as whole programmes of work funded by aid money. The overall UK aid strategy might itself also be usefully subject to such empirical assessment.

Such a systematic effort could also inform the good work of ICAI, which currently comprises four commissioners supported by three management consulting companies, but which lacks the remit, resources, or methodological standards to commission its own research to address Parliament’s questions (e.g., through a clearly set out reference case for economic evaluation).

Make the private sector accountable

The private sector (product manufacturers, providers, insurers, IT and m-technology companies) is a major partner for healthcare systems striving to expand coverage and improve quality. So, private investment through the likes of CDC or OPIC, the United States’ development finance equivalent, could be a force for good. But understanding the impact of the UK’s private investment in overseas healthcare industries is as important as understanding the impact of public funding on job creation and economic growth: its impact on the impoverishment of patients and their families who use and pay for the services thereby created, on health outcomes and their distribution, and even on processes and governance arrangements, such as audit and clinical governance.

Build our own capacity to help

Capacity-building within UK institutions not traditionally involved in ODA spending (both those that fund and those that deliver) matters. On the commissioning end, some of the Research Councils and government departments have never before had ODA money of their own. With shrinking operational budgets and headcount, civil servants sometimes turn to management consultants: the Fleming Fund, for example, is managed by Mott MacDonald, and high-street consultants were brought in to scope out the Prosperity Fund for the Foreign Office. Such outsourcing of the strategic design and direction of ODA money further accentuates fragmentation and makes it harder to convey a consistent cross-government message on what this ODA is meant to achieve. It can also be quite expensive.

Institute performance-based contracts between HMG departments

Assuming indicators can be developed and evidence to populate them produced, outcome-based contracts between DFID (or the Treasury) and ODA-spending departments would enhance accountability and spur the generation of evidence of impact (where there is impact) or prompt course correction (where there is none). Demonstrated successes would also boost the case for ODA and, being backed by numbers, would be much more likely to be accepted by skeptics, especially those not fundamentally opposed to giving overseas aid but who suspect, probably rightly, that unmonitored and unmeasured spending masks extremely low productivity.

Stay tuned and engage!

As people come back from summer holidays, I look forward to hearing your thoughts on the above and on what you think we should be focusing on in relation to global health at CGD and CGD Europe. I’ll be working together with our global health team in Washington, DC, and with Amanda Glassman, who will, luckily, maintain very hands-on leadership involvement in CGD’s Global Health Policy programme. I am very much open to suggestions and ideas! Work is already underway on family planning, the recently launched Global Health Procurement Working Group, and practical approaches to implementing health benefits packages, drawing on iDSI and the What’s In, What’s Out book to be launched in the fall. I am also keen to explore the meaning of implementation advocacy, as a means of moving away from the current dichotomy of advocacy versus analysis towards a more constructive advocacy for analysis paradigm. After all, to be credible and effective in our push for more money for health, we must be able to show more health for the money.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
