CGD and the Brookings Institution recently released the third edition of the Quality of Official Development Assistance (QuODA), a joint venture that measures donor performance across a series of aid quality indicators to encourage governments, institutions, and agencies to disburse more effective, transparent, and efficient assistance. QuODA uses four dimensions of aid quality assessment: maximizing efficiency, fostering institutions, reducing burden, and transparency and learning.

QuODA assesses the quality of aid against agreed-upon priorities and best practices, but it faces the considerable challenge of comparing metrics across agencies with a wide range of structural and operational differences. So while QuODA provides an important starting point for a discussion of aid quality, it can’t offer each agency a customized recipe for reform.

QuODA covers 14 US government agencies that deliver development assistance. Here’s a closer look at how two of them—the US Agency for International Development (USAID) and the Millennium Challenge Corporation (MCC)—stack up.

USAID in QuODA: Better than Expected

In this third edition of QuODA, USAID demonstrates the positive effects of a strong internal push for reform. Since 2010, USAID has implemented USAID Forward, a reform agenda designed to return policy and budgeting expertise to the agency, expand scientific and innovative capacity, and build and utilize local systems, among other reforms. Compared to the first edition of QuODA (which relied on 2008 data), USAID has improved relative to the aid system as a whole on two dimensions of aid quality measured by QuODA: fostering institutions and transparency and learning.

While USAID logs average scores in this year’s assessment on transparency and learning and on reducing burden, the agency performs better than other aid agencies on fostering institutions. This dimension of aid quality judges, for example, an agency’s use of recipient country systems and the share of its aid recorded in recipient budgets. This high score no doubt reflects recent efforts to channel more USAID program funds to partner governments and local civil society. Indeed, even as overall funding for the agency decreased by 5 percent, USAID managed to increase its spending through local entities by 18 percent from FY2012 to FY2013. The agency is also working to ensure that every country in which it operates has a jointly developed Country Development Cooperation Strategy that outlines how USAID engagement will further recipient country priorities in the long term.

However, it’s not all good (or at least average) news for USAID. The agency scores below average in QuODA’s maximizing efficiency dimension, which includes an examination of USAID’s share of allocation to well-governed countries and its focus/specialization by recipient country. Here, you could argue, USAID gets penalized for actions beyond its control. The aid landscape of the United States is such that USAID is the agency with the remit to handle humanitarian efforts, deal with fragile and conflict-affected states, and play a key role in frontline states. USAID has major operations in poorly governed places like Afghanistan and the DRC, and USAID isn’t going to stop working in these places – nor should it when there’s a strong strategic and development interest.

USAID also gets knocked for lacking focus. Again, the agency is tasked with being the US development presence in the most difficult of places across a range of sectors, based on identified needs within a country. This operational rationale will always run counter to calls for a singular focus on well-governed places. Still, USAID should applaud itself for making great strides in targeting its aid to foster local institutions and for performing reasonably well on reducing the burden to local entities and promoting transparency and learning. Given USAID’s current remit and role within the US development apparatus, it will have a hard time improving its score in the maximizing efficiency dimension. One place to start would be to reduce the number of countries in which it operates and push for a focus on things the agency does really well, like humanitarian aid and social sector programs in health, education, and water.

MCC in QuODA: An Aid Effectiveness Model in Practice

It’s not surprising that MCC performs above the USG and global average on most (11 of 15) of the agency-level QuODA indicators. After all, MCC’s model and practices are based on many of the aid effectiveness principles the index seeks to measure. For instance, MCC funds only a limited number of relatively well-governed countries, uses open international procurements, pursues country-led strategies, and commits to a high level of transparency.

Interestingly, a look at the handful of QuODA indicators on which MCC performs less well (below average) shows that, in many cases, low scores in one area may actually reflect good aid effectiveness practice in another. This suggests that it may not be possible for an individual agency to maximize all aspects of aid quality at once.

For instance, MCC gets relatively low marks on its share of allocation to poor countries. This might seem counterintuitive since, by law, the agency can only fund low- or lower-middle-income countries, but the income levels of MCC partners range widely within those boundaries. So while MCC does fund some of the lowest-income countries in the world, its partners are not altogether concentrated at the lower end of the income distribution. This is largely because MCC was established to fund only countries that are relatively well-governed (it ranks 5th globally on the QuODA indicator of share of allocation to well-governed countries), and these countries are also not concentrated at low income levels. This suggests that it can be hard for agencies to score well on both allocation to the poorest countries and allocation to well-governed countries (in fact, scores on these two QuODA indicators are negatively correlated at -0.6).

Another indicator on which MCC scores below average is focus/specialization by sector. For this measure, agencies are docked for funding lots of sectors rather than specializing in a few. The logic behind this is compelling (donor proliferation in a particular area can create inefficiencies and coordination problems). But part of the reason MCC has a relatively low score on this indicator is that it doesn’t pre-determine the sectors in which it will operate in a country. MCC’s model places high importance on country ownership and achieving results, so the agency’s investments in a given country are determined based on economic analysis and country-identified priorities. If MCC decided in advance to invest only in limited sectors, the agency could undermine these two pillars of its model.

Interestingly, MCC registers only average performance on the indicator measuring the share of aid to recipients’ top development priorities, ranking about the same as USAID. This result differs from my colleague Ben Leo’s finding that MCC does better, at least in Africa, at investing in the things citizens say they want. It raises some questions: how well does MCC’s process of working with a country government to identify priorities capture citizens’ preferences for investment, and does the way the QuODA indicator is constructed fully capture the link between MCC investments and a country’s development priorities?

All in all, the main takeaway for MCC, one that even many of its lower scores confirm, is that the agency does relatively well at applying practices associated with aid quality and effectiveness to its operations, and that it should continue to strive to make the right tradeoffs when best practices in delivering quality aid conflict with one another.