At a moment when the case for maintaining aid budgets is being fundamentally questioned, we might do well to ponder once again how we measure and communicate results. As illustrated during the recent CGD panel discussion with Mark Suzman of the Gates Foundation, many in the development community lament that we have failed on two counts: broad audiences don’t know about unprecedented progress in poverty reduction and human development indicators in recent decades, and, if they do know, they don’t see the connection between aid programs and such progress.
Having spent the last six years in two development institutions that care deeply about results measurement (MCC and IADB), I can attest to the amount of time, effort, and resources that go into strong results frameworks. And the motivation is not just accountability but also learning and adaptation for greater development impact.
And yet it remains hard to articulate results in a way that is compelling to nontechnical audiences—taxpayers who absolutely deserve to understand why and how development dollars are making a difference. This communications failure has much to do with the current atmosphere of skepticism.
Results descriptions fall into four broad categories: (1) input summations—e.g., how many development dollars have been committed or disbursed; (2) output aggregations that are beneficiary specific—e.g., how many people have been trained or have access to paved roads, electricity, or financial services; (3) the findings of impact evaluations which are also beneficiary specific—whether an improvement in outcomes for a set of beneficiaries can be attributed to a particular intervention; and (4) the stories of individual beneficiaries whose lives have been changed by development interventions.
These are all reasonable aspects of results. In fact, they work particularly well in the field of health. That is because health interventions like immunizations and antiretroviral drug treatments are very scalable (with costs falling sharply with greater volume) and their effects on health outcomes are predictable. In turn, the broader economic impacts of greater longevity, reduced morbidity and associated health care expenditures, and higher productivity can be estimated at a macroeconomic level. In short, it is possible to link individual treatment to systemic impact.
In other areas of development, the links between interventions for individuals and systemic impact are not so clear. And yet, systemic impact is at the heart of many of the questions posed by those who seek evidence of aid's value for money. They understand that aid funding will never be enough to reach most of the poor. So they ask: how has aid transformed government services, markets, firm behavior, private investment patterns, and institutions (public and private) to leave behind systems, products, business models, and actors capable of sustaining and scaling development impact?
In this sense, the relevant questions for measuring results go beyond those focused only on impact on the beneficiaries targeted by the project or program. Such questions might include some of the following:
Are public institutions essential for markets (e.g., utilities, regulatory bodies, courts, property registries, maintenance and procurement units) adequately funded on an ongoing basis and operating effectively and efficiently?
Has government service delivery improved after the project?
Is more private investment flowing into sectors with development impact after the project?
Are capital markets and financial institutions using their own resources to serve sectors with development impact and previously excluded (but creditworthy) individuals/firms?
Are innovations in technology, products and services, and business models being introduced and piloted on an ongoing basis in sectors with development impact?
Is the private sector replicating, sustaining, and scaling successful innovations after the project?
Did private firms enter and improve services in sectors with development impact where there were previously no private actors?
These questions clearly present difficult measurement problems for at least four reasons. First, measurement must be ex post, often with a long time lag, because the idea is to track what happened after the development project or program ended. Measuring results long after a project ends is notoriously difficult. Second, in some cases, objective metrics for these kinds of impact would be hard to devise. Third, where the original interventions allow room for adaptation and course corrections in response to data feedback, the systemic results at the end of the program may look very different from those targeted at the beginning. Fourth, and perhaps most difficult, is the question of attribution. Systemic challenges by their nature require many actors and actions to address them. Indeed, this is one of the reasons for the growing emphasis on partnerships in development work. But the bigger and more complex the partnership, the greater the difficulty of attributing any one outcome to any one actor or action.
Yet it would be wrong to give up on measuring the kind of impact that is so fundamental to the case for aid simply because it is hard. Going forward, development practitioners and researchers should strive to be innovative, careful, and deliberate about gathering data (including through surveys) that measure sustainability, scalability, and the functioning of governments, markets, and institutions. And they should build these measurement tools into monitoring and evaluation frameworks from the project design phase. By doing so, they will shape projects and programs that reach for systemic impact as well as real gains for beneficiaries.