
Views from the Center


Back in December 2006 I blogged about a meeting in Paris at which the Consultative Group to Assist the Poor (CGAP), a sort of microfinance think tank operating out of the World Bank, won a mandate to hold a mirror up to the aid agencies that fund it. A year and a half later, CGAP's new SmartAid Index examines the quality of donors' work in supporting microfinance. For example, it asks: do donors learn and apply lessons from past experience? Do they hold staff accountable for results? My observation at the meeting was that mid-level officials directly responsible for microfinance activities, while seeing the good sense in external scrutiny, were somewhat uncomfortable with the prospect. Meanwhile, people closer to the top were eager for ways to understand the effectiveness of the various branches of the organizations they run, including units doing microfinance.

Several CGD projects have measured the quality of foreign aid. The aid component of the Commitment to Development Index discounts the quantity of aid that a donor gives in order to incorporate three aid quality factors: "tying" aid to purchases of donor-country goods and services; "selectivity," meaning the extent to which aid goes to countries that are poor but relatively well governed; and "project proliferation," the propensity to overwhelm recipient governments with lots of small projects. (For details, see my aid component background paper.) A new CGD project, Measuring Aid Quality, seeks to go much further, measuring such things as aid volatility. Both efforts depend heavily on data that donors report to the Development Assistance Committee (DAC) in Paris.

One lesson I have learned from this work is that crunching DAC data gets one only so far in measuring aid quality. (Of course, whatever distance it does take you is useful.) Ultimately, good aid delivery cannot be formulaic; it must adapt to complex local circumstances. At some point, one must conclude that high-quality aid is aid that comes from high-quality institutions. The question is, what are those? How do you know a high-quality aid agency when you see one? Is it one that learns from past projects, or that rewards employees for a subtle blend of results and risk-taking? Measuring such characteristics is hard, especially when the agencies being measured fund your budget. But that is exactly what CGAP has done, in the realm of aid for microfinance, using extensive surveys and interviews. It is, in my view, an important and unique effort. (I informally advised the project in 2006.)

The SmartAid Index rates donors on five aspects of performance: strategic clarity, staff capacity, (internal) accountability, knowledge management (learning), and use of appropriate instruments. (My old post says more about the genesis of these categories.) Only seven agencies volunteered for the initial round of the Index, and of those the Canadian International Development Agency (CIDA) has not approved the release of its results. The remaining six, brave and wise, are: the Asian Development Bank (ADB), the Netherlands Development Finance Company (FMO), the German Gesellschaft für Technische Zusammenarbeit (GTZ) and Kreditanstalt für Wiederaufbau (KfW), the Swedish International Development Cooperation Agency (Sida), and the U.N. Capital Development Fund (UNCDF). Among the six, only Sida is a traditional single-government, grant-giving agency. (The ADB and UNCDF are multilateral. FMO and KfW are development banks, which primarily lend. GTZ provides only training and technical advice, often on contract.) The scarcity of traditional agencies may reflect their particular skittishness about the political ramifications of exposure to the Index. It may also point to the agencies' declining importance in microfinance as big lenders such as ADB and KfW move in. Perhaps half a dozen more agencies will join in the next round.

To my eye, the initial SmartAid Index web release is opaque. CIDA's absence from the results is not explained. The pages do not spell out precisely why the ADB, say, gets a 57 (out of 100) while GTZ gets an 86. Leaving certain things unsaid imbues the Index with an air of political intrigue, or at least caution. Certainly, rating one's funders is a delicate dance. Perhaps this shows a drawback of CGAP's dependence on donors.

Yet if CGAP really wanted to pull its punches, it would not have published the Index at all. And CGAP's close relationships with donors give its staff unique access to donor officials and thus unique insight into the agencies' operations. The open lines of communication and the tactful delivery of results may also minimize defensive reactions, helping donors listen and then actually reform how they deliver aid for microfinance. Improving the quality of aid is, after all, the point of the exercise.


Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.