[Update: I posted slides from my turn as a discussant for this paper at the Brookings Institution on January 25, 2010.]
In the last few days, Bill Easterly and Owen Barder, two respected bloggers who spent time at CGD, looked over a new paper and (at least provisionally) reached opposite conclusions about whether aid has at long last been shown to boost economic growth on average. Also in the last few days, I whacked Bill; now I'll back him.
The new paper by Channing Arndt, Sam Jones, and Finn Tarp (I'll call them "AJT") contains two statements I largely endorse:
Using observational data, there is no way of identifying a plausible counterfactual without making assumptions that are bound to be debatable, in theory and in practice.
Overall, we believe our approach represents the most carefully developed empirical strategy employed in the aid-growth literature to date.
The first quote says that unless you experiment on poor countries, randomly giving aid to some and not others, you have to make arguable assumptions about the world in order to infer anything from country-level data about whether foreign aid causes growth. The second quote might overreach slightly but is true in spirit: this is an impressively careful analysis.
The authors work with a data set from a widely cited paper by Raghuram Rajan and my colleague Arvind Subramanian that concludes that there is no clear evidence of a systematic impact of aid on growth. Reanalyzing, AJT conclude that---on the contrary---it is reasonable to believe that aid worth 1% of a country's gross domestic product (GDP) raised economic growth by 0.1%/year on average during 1970--2000. That is a small but helpful impact.
While I cannot prove AJT wrong, I remain skeptical. In a new CGD paper, Blunt Instruments, Michael Clemens and Samuel Bazzi powerfully express my main concern. I've explained it less technically on my microfinance open book blog with reference to studies of the impact of microcredit, and will adapt that writing here.
A big challenge in the social sciences is to go beyond merely observing correlations to showing causation---e.g., that receiving aid is not merely correlated with economic growth but causes it. AJT employ a common technique for ferreting out causation: the use of instruments, which are factors that are assumed to affect an outcome of interest only through a determinant of interest (caveat for experts: "...after linearly controlling for observed covariates"). To temporarily simplify, AJT set up this picture:
population => foreign aid/capita => economic growth
The first arrow says that how many people live in a country affects how much aid it gets per person. This is true: bigger countries like India get less aid per person than small ones like Madagascar. The second arrow embodies the hope that aid increases economic growth. But by assumption no arrow runs from population directly to economic growth. Population is held to affect growth only through foreign aid. So if we observe in the data that the things on the two ends of the diagram are correlated---moving up and down together---then both the arrows in between must be at work. In particular, foreign aid is making a difference. Here, we say that population "instruments" for aid; and having the first arrow, running from the instrument, lets us study the second arrow.
Notice the reasoning here. We assume:
A. Population affects growth only through aid.
That plus the data leads to:
B. Aid affects growth.
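The instrument logic above can be sketched numerically. Below is a minimal two-stage least squares (2SLS) simulation in Python; the data are entirely made up, with an unobserved confounder standing in for all the things that jointly drive aid and growth. This illustrates the technique in general, not AJT's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical synthetic data. A confounder drives both aid and growth,
# so naive OLS of growth on aid is biased. Population is the instrument:
# it affects aid but (by assumption A) not growth directly.
population = rng.normal(size=n)
confounder = rng.normal(size=n)
aid = -0.8 * population + 0.5 * confounder + rng.normal(size=n)
true_effect = 0.1
growth = true_effect * aid + 0.5 * confounder + rng.normal(size=n)

def ols(y, X):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

const = np.ones(n)
X = np.column_stack([const, aid])          # regressors
Z = np.column_stack([const, population])   # instruments

beta_ols = ols(growth, X)[1]  # biased upward by the confounder

# Stage 1: project aid onto the instrument.
aid_hat = Z @ ols(aid, Z)
# Stage 2: regress growth on the projected aid.
beta_2sls = ols(growth, np.column_stack([const, aid_hat]))[1]

print(f"OLS estimate:  {beta_ols:.3f}")   # pushed above the true 0.1
print(f"2SLS estimate: {beta_2sls:.3f}")  # close to the true 0.1
```

The catch, of course, is the comment marked "by assumption A": if population in fact touches growth through any channel other than aid, the second stage inherits that contamination, and nothing in the data alone can tell you.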
A few comments about this structure:
- Just about all reasoning works this way. You have to assume something to conclude something. Think of Euclid's classic text on geometry, The Elements, which begins with a handful of axioms, such as that for any two points, a straight line can be drawn to connect them.
- AJT understand this. That's what the first quote above is really about.
- It is not clear that we should believe A more readily than B. If I am ready to make one assumption about causal relationships across a diverse set of complex nations over 30 years---population only affected economic growth through aid---why stop there? Why don't I just assume B---that aid raised growth on average? It would save a lot of time. The answer has to be that A is easier to believe than B, just as Euclid's axioms are easier to believe than what Euclid proves with them, such as the Pythagorean Theorem (a² + b² = c²). But is A more credible than B in the case at hand? In Blunt Instruments, Michael and Sami point out that other economic studies have proceeded in a fashion analogous to, yet incompatible with, AJT's, by assuming that population affects growth only through how much foreign trade a country engages in or only through how much foreign investment it receives.
In my old blog post, I distilled these ideas down to: "For a study to teach us about the world, the assumptions on which it rests must be more credible than the assumptions that it tests."
In fact, AJT do not instrument aid with population. But they do instrument with some clever variables devised in Rajan & Subramanian that are based on population, and with other variables to which the same argument applies. And like Rajan & Subramanian they do so in a way that makes it impossible to check whether the instruments are valid, that is, whether assumption A holds. All these criticisms, it should be said, apply to Rajan & Subramanian's more famous and negative study, so we should equally question whether they have shown that aid has no impact. Personally, I doubt aid regressions more than aid.
To their credit, AJT do perform one sensitivity test that allows them to check the validity of their instrument (Table 6, column III, page 27), though they do not discuss the result of this test. They compute something called the Hansen J statistic. They estimate that if their instrumentation approach is valid, there is only a 12% chance that the J statistic would be as big as it actually is.
That's a small probability. For several technical reasons, this does not flat-out disprove assumption A, but it is not the sort of reassuring number one would hope for to defend that key assumption.
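For readers who want the mechanics: with more instruments than endogenous regressors, an overidentification test asks whether the 2SLS residuals are uncorrelated with the instruments, as validity requires. The sketch below implements the Sargan statistic, the homoskedastic cousin of the Hansen J that AJT report; the variable names and data are hypothetical.

```python
import numpy as np
from scipy import stats

def sargan_test(y, X, Z):
    """Sargan overidentification statistic: n * R^2 from regressing the
    2SLS residuals on the instruments. Under instrument validity it is
    approximately chi-squared with cols(Z) - cols(X) degrees of freedom.
    (The Hansen J is its heteroskedasticity-robust GMM analogue.)"""
    lstsq = lambda a, b: np.linalg.lstsq(a, b, rcond=None)[0]
    X_hat = Z @ lstsq(Z, X)            # first stage: project X onto Z
    resid = y - X @ lstsq(X_hat, y)    # 2SLS residuals
    fitted = Z @ lstsq(Z, resid)       # regress residuals on instruments
    r2 = (fitted ** 2).sum() / (resid ** 2).sum()
    J = len(y) * r2
    df = Z.shape[1] - X.shape[1]
    return J, stats.chi2.sf(J, df)     # statistic and p-value

# Made-up example: two instruments, one endogenous regressor; the
# instruments are valid by construction.
rng = np.random.default_rng(1)
n = 2000
z1, z2, u = rng.normal(size=(3, n))
x = z1 + 0.5 * z2 + 0.5 * u + rng.normal(size=n)
y = 0.1 * x + 0.5 * u + rng.normal(size=n)
const = np.ones(n)
J, p = sargan_test(y, np.column_stack([const, x]),
                   np.column_stack([const, z1, z2]))
print(f"J = {J:.2f}, p = {p:.2f}")
```

A p-value comfortably above conventional thresholds is what one hopes to see when defending an instrument, which is why AJT's reported 12% is borderline at best.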
In sum, the new study, though careful, does little to assuage my skepticism about the search for instruments in order to discern the effect of aid on growth.
[The study raises other par-for-the-course technical questions in this reader's mind, which I will telegraph to experts: Were aid and GDP adjusted for inflation before being aggregated over time? How much do the results depend on how economic growth is measured? Should the hint of data mining, favoring the 1970--2000 time frame, be avoided more fully? Was sensitivity to outliers checked?]
I admire Owen Barder for many reasons. Did you notice his modesty in blogging about World Pneumonia Day while glossing over his own contribution to what appears to be a remarkable aid success, the imminent delivery of affordable pneumo vaccines to poor countries? And he probably hasn't run nearly as many aid-growth regressions as I have, which I take as a true sign of wisdom. Withal, here I disagree with him and side with Bill Easterly.
(For more in this vein, see my Guide for the Perplexed.)