Three years ago, the U.S. President’s Emergency Plan for AIDS Relief (PEPFAR) launched the DREAMS partnership, an ambitious and innovative effort to reduce HIV infection rates among adolescent girls and young women in ten sub-Saharan African countries, followed by support for DREAMS activities in five additional countries last year. This large-scale collaboration between PEPFAR, the Bill & Melinda Gates Foundation, Girl Effect, Johnson & Johnson, Gilead Sciences, and ViiV Healthcare aims to empower girls and women through rigorous, evidence-based methods that both improve healthcare services and address structural factors that contribute to HIV risk.
In this paper we combine fourteen years of high-resolution satellite data on forest loss with individual-level survey data on malaria in more than 60,000 rural children in 17 countries in Africa, and on fever in more than 470,000 rural children in 41 countries in Latin America, Africa, and Asia. We find no evidence that deforestation increases malaria prevalence, nor that areas with intermediate levels of forest cover have higher malaria prevalence.
Deforestation isn’t associated with higher malaria prevalence in children in 17 African countries. Nor is it associated with higher fever in children in 41 countries across Africa, Asia, and Latin America. That’s the surprising conclusion of our new CGD working paper.
This means that, at least in Africa, where 88 percent of malaria cases occur, public health efforts to reduce malaria should continue to focus on proven anti-malarial interventions. These include insecticide-treated bed nets, indoor spraying, housing improvements, and prompt clinical treatment, which along with other interventions reduced the incidence of this killer disease by 41 percent between 2000 and 2015.
For advocates of forest conservation in Africa, there are many good reasons to keep forests standing. These include carbon storage, biodiversity habitat, and clean water provision, alongside other goods and services, as elaborated in my (Jonah’s) book, Why Forests? Why Now? However, forest conservation might not have anti-malarial benefits, at least not in Africa.
But increased malaria risk might not necessarily translate to higher rates of malaria in humans (i.e., “prevalence”). That’s because there’s considerable nuance in the effects listed above. For example, deforested areas may be favored by some mosquito species but not others; deforestation is generally considered to increase the density of malaria-transmitting mosquitoes in Africa and Latin America but to decrease their density in Asia. In addition, many other factors besides deforestation affect malaria prevalence in humans, including climate, community demographics, access to health facilities, and people’s behaviors to avoid malaria.
Nine previous studies have compared deforestation to malaria prevalence in humans (see table below). These studies generally analyzed small amounts of data from a handful of countries—four from Brazil, two from Indonesia, and one each from Malaysia and Paraguay, as well as one study that compared national-level statistics across 67 countries. Most, though not all, found that more deforestation is associated with more malaria. So, it was a surprise to find no association between deforestation and malaria in our study.
Why, then, might studies find that deforestation leads to higher malaria rates in South America and Southeast Asia but not in Africa? The explanation, we speculate in our paper, may have something to do with how deforestation happens in Africa versus elsewhere. Deforestation in Africa is largely driven by the slow expansion of rotational agriculture for domestic use by long-time smallholder farmers in stable socio-economic settings, rather than by rapid clearing for market-driven agricultural exports by new frontier migrants, as in Latin America and Asia. We hope that this hypothesis can be supported or refuted by future work.
How we got there
We came to our conclusions by assembling massive data sets on deforestation and malaria. Our data set on deforestation included annual tree-cover loss from 2001 to 2015 in 1.5 million ~5.5-kilometer grid cells across the tropics, compiled from Global Forest Watch as part of a previous CGD working paper. We also obtained data from malaria tests of around 60,000 children in rural Africa and fever recall surveys of around 470,000 children across the rural tropics, conducted under the auspices of USAID’s Demographic and Health Surveys. We combined these two data sets in a multivariate regression analysis that also considered temperature, precipitation, housing quality, water source, access to health services, child age, and bed-net usage.
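For readers who want the mechanics, here is a minimal sketch of that kind of regression. The file path and column names are hypothetical, and the paper’s actual specification (survey weights, fixed effects, clustered standard errors) is more involved:

```python
# Minimal sketch of the multivariate regression described above.
# File path and column names are hypothetical; the paper's actual
# variable construction is more involved.
import pandas as pd
import statsmodels.formula.api as smf

# One row per child, with DHS survey responses already joined to the
# ~5.5-km grid cell containing the child's survey cluster.
df = pd.read_csv("children_with_gridcells.csv")

# Outcome: 1 if the child tested positive for malaria, else 0.
# Key regressor: recent tree-cover loss in the child's grid cell.
model = smf.logit(
    "malaria_positive ~ tree_cover_loss + temperature + precipitation"
    " + housing_quality + improved_water + dist_health_facility"
    " + child_age + bednet_use",
    data=df,
)
print(model.fit().summary())
```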
In addition to our main comparison of deforestation and malaria, we also tested hypotheses generated in advance from previous studies. Did smaller cuts lead to more malaria on a per-hectare basis than larger cuts? Did deforestation have a bigger effect in places with more forest? Did deforestation have a bigger effect on fever in Africa and Latin America than in Asia? The answer to all three questions is a resounding “no.”
We’d originally also planned to compare the cost-effectiveness of preventing malaria through forest conservation to that of common interventions such as bed nets and spraying, as measured in disability-adjusted life years (DALYs) averted per dollar. But since deforestation wasn’t found to affect malaria rates, the DALY-per-dollar benefit was essentially zero.
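That comparison reduces to simple arithmetic. Here is an illustrative sketch; every number below is made up, not a figure from the paper:

```python
# Illustrative DALY-per-dollar arithmetic; all numbers are made up.
def dalys_per_dollar(dalys_averted: float, cost_usd: float) -> float:
    """Cost-effectiveness as DALYs averted per dollar spent."""
    return dalys_averted / cost_usd

# A proven intervention (hypothetical figures).
bednets = dalys_per_dollar(dalys_averted=1_000.0, cost_usd=50_000.0)

# Forest conservation as malaria prevention: with an estimated effect
# of deforestation on malaria of roughly zero, DALYs averted are
# roughly zero too, so the ratio is ~zero at any program cost.
conservation = dalys_per_dollar(dalys_averted=0.0, cost_usd=50_000.0)

print(f"bed nets:            {bednets:.3f} DALYs averted per dollar")
print(f"forest conservation: {conservation:.3f} DALYs averted per dollar")
```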
Bolstering credibility with a pre-analysis plan
We knew our findings were bound to be controversial, no matter what we found. A previous study of deforestation and malaria in the Brazilian Amazon generated some heated back-and-forth. So to bolster the integrity and credibility of our research, we used a pre-analysis plan. That is, we wrote down and time-stamped all our hypotheses, methods, models, and variables in advance. Then we stuck with them.
Pre-analysis plans are common, and even required, for some types of clinical research. But they are still new to the social sciences, including economics, where common research practice often involves testing many possible combinations of variables and model specifications. If the authors of such a study report only the tests showing favorable results while relegating the rest to the digital trash bin (“data mining” or “p-hacking”), they can inadvertently or deliberately place a thumb on the scale to achieve desired results. This is what we wanted to avoid by writing and following a pre-analysis plan. Since the prominent repositories for pre-analysis plans hosted by the American Economic Association and the International Initiative for Impact Evaluation compile pre-analysis plans for randomized controlled trials (RCTs) but not other types of studies, we published our pre-analysis plan on the CGD website (available in two parts, here and here).
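To see why that thumb on the scale matters, consider a small simulation of our own (not from the paper): even when the true effect is zero, searching across many specifications and keeping the best p-value produces a “significant” result far more often than the nominal 5 percent.

```python
# Simulation: a specification search on pure noise. With a true effect
# of zero, one pre-specified test is "significant" ~5% of the time;
# reporting the best of 20 candidate specifications is far worse.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_obs, n_specs, n_trials = 200, 20, 2_000

hits = 0
for _ in range(n_trials):
    y = rng.normal(size=n_obs)               # outcome: pure noise
    X = rng.normal(size=(n_obs, n_specs))    # 20 candidate regressors
    # Keep only the smallest p-value across candidate specifications.
    best_p = min(stats.pearsonr(X[:, j], y)[1] for j in range(n_specs))
    if best_p < 0.05:
        hits += 1

# Roughly 1 - 0.95**20 ≈ 0.64 of trials, versus the nominal 0.05.
print(f"share of trials with a 'significant' result: {hits / n_trials:.2f}")
```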
It has also been claimed that the use of pre-analysis plans can make null findings less likely to be rejected for publication. We certainly hope this is the case—research on important topics ought to be equally likely to be submitted, published, and reported on no matter what the finding. At stake in a full and accurate understanding of deforestation and malaria are the lives and health of millions of people and the conservation of millions of hectares of forest.
Comparing deforestation to malaria prevalence in humans: did each study find a positive association between deforestation or forest cover reduction and malaria?

Study | Association measured | Other variables included
Wayant et al., Geospatial Health, 2010 | Univariate correlation between an NDVI-forest cover change interaction and malaria case rates over 260 months in two departments in Paraguay | Forest cover change
Pattanayak et al., ERID working paper, 2010 | Conditional correlation in cross-sectional regressions of primary and secondary forest area and 500 household surveys in Flores, Indonesia | Forest cover, family size, number of children, gender, native born, child age, caregiver age, caregiver health, caregiver education, household wealth, housing quality, village public health facility, village population, village area, village elevation
Olson et al., Emerging Infectious Diseases, 2010 | Conditional correlation in cross-sectional regressions of deforestation and malaria incidence across 54 health districts in Mancio Lima County in Acre, Brazil | Deforested land area, deforestation, access to care, area
Hahn et al., PLoS ONE, 2014a | Cross-sectional regression of deforestation and malaria incidence in 602 municipalities of the Brazilian Amazon | Deforested land area, deforestation, paved road density, unpaved road density, area affected by fire
Valle and Clark, PLoS ONE, 2013 (see also Hahn et al., 2014b; Valle, 2014) | Association between forest cover and malaria incidence across 401 20-km radii around towns in the Brazilian Amazon | Forest cover, deforestation, population, lagged precipitation, lagged drought index
Garg, job market paper, 2014 | Panel regression between occurrence of village-level outbreaks and MODIS monthly hectares of district-level deforestation across four islands of Indonesia | Deforestation, village poverty, village health, access to hospital, population density, rice field area, proximity to river, elevation, rainfall
Terrazas et al., Malaria Journal, 2015 | Correlation between incidence of malaria and average annual deforestation rate across 62 municipalities of the state of Amazonas, Brazil | Forest cover, deforestation, human development, education, income, poverty, unemployment, health surveillance, watercourses
Fornace et al., Emerging Infectious Diseases, 2016 | Association between incidence of P. knowlesi and historical forest loss within a 1-5 km radius of 405 villages in Sabah, Malaysia | Forest cover, deforestation, elevation
Austin et al., AIMS Environmental Science, 2017 | Structural equation model of malaria prevalence rate in 2013 vs. self-reported changes in forest cover (FAO FRA) 2012-2013 across 67 countries | Forest cover change, latitude, GDP per capita, Sub-Saharan Africa, agriculture as % of GDP, rural population growth, public health conditions
Bauhoff and Busch, CGD Working Paper, 2017 | Conditional correlation in cross-sectional regression of deforestation and malaria prevalence in 60,305 children in 17 African countries, and fever in 469,539 children in 41 countries | Forest cover, deforestation, temperature, precipitation, child age, floor type, water source
The Sustainable Development Goals are an ambitious set of targets for global development progress by 2030, agreed by the United Nations in 2015. A review of the literature on meeting "zero targets" suggests very high costs compared to available resources, but also that in many cases there remains a considerable gap between financing known technical solutions and achieving the outcomes called for in the SDGs. In some cases, we even lack the technical solutions required to achieve the zero targets, suggesting the need for research and development of new approaches.
We here at CGD tend to be critical of international agencies like WHO or the UNDP for establishing targets or guidelines without sufficient consideration of the impacts, for good and ill, of those guidelines in the affected countries. Such guidelines often apply standards more appropriate to rich countries and then pressure poor countries to behave as if they were rich.
Take the example of WHO’s recommendations regarding eligibility for free antiretroviral therapy (ART) in affected countries. The current guideline, promulgated in 2015, recommends that all countries, no matter how poor or how severe their AIDS epidemic, immediately start all HIV-infected people on AIDS treatment. And UNAIDS reinforces this guideline by advocating that all countries attain the “three 90s,” which imply that 81 percent of everyone living with HIV should be on treatment by the year 2020, rising to 90 percent by 2030. Models show that such ambitious objectives are necessary to achieve zero AIDS deaths and zero new infections. These guidelines are useful for setting targets for international donors—and perhaps for middle-income countries like South Africa and Botswana—but they provide no help to a poor country attempting to prioritize treatment scale-up without the resources to treat all its HIV-infected citizens. Decision makers in such circumstances need a decision rule—such as the CD4 thresholds that WHO formerly provided.
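The 81 percent figure follows directly from the arithmetic of the “three 90s” (90 percent of people living with HIV diagnosed, 90 percent of those on treatment, 90 percent of those virally suppressed):

```latex
\underbrace{0.90}_{\text{diagnosed}} \times \underbrace{0.90}_{\text{on ART, if diagnosed}} = 0.81
\quad \text{on treatment;} \qquad
0.90^{3} = 0.729 \quad \text{virally suppressed}
```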
The subtitle of WHO’s 2002 guideline was “Guidelines for a public health approach,” signaling an attempt to accommodate the needs of a poor country with limited health sector capacity and vast numbers of HIV-infected people. In that earliest guideline, countries were advised to prioritize patients with CD4 counts less than 200 cells per microliter, a criterion that focused care on the sickest people with HIV. WHO’s justification for taking a public health approach rather than a clinical approach was explained in a 2006 article here. Subsequently, WHO revised this threshold to 350 in 2009 and then to 500 in 2013, before the 2015 guidelines removed the CD4 criterion altogether.
The trend toward increasingly comprehensive treatment recommendations has been propelled by findings that earlier treatment is associated with better long-term survival and lower infection rates among patients’ sexual partners, without reference to the fiscal and health-system burden the more comprehensive guidelines impose. The studies that established these benefits (HPTN-052, START, and TEMPRANO on the direct benefits to patients, and HPTN-052 on the prevention benefits) compared randomly assigned patients who started treatment before their CD4 count descended to 350 with patients who began treatment promptly as soon as their CD4 count dropped that low. The study organizers went to great pains to ensure that any differences observed in medical outcomes between those who immediately started ART (at a CD4 > 350) and those whose treatment was deferred could not be attributed to failure of clinics or patients to follow the guidelines. So, for example, if a patient learned that his or her treatment was deferred and did not voluntarily continue with pre-ART testing and evaluation, the patient would be tracked down at home and given the support necessary to ensure that they continued testing until they became eligible and subsequently remained on treatment for the follow-up period of two or more years.
But in real-life programs in severely affected, resource-limited settings, it is difficult to ensure that a deferred patient will in fact return regularly for testing and evaluation and then begin and sustain treatment when their CD4 count drops below the threshold. So the three clinical trials achieved an accurate measure of the biological outcomes of immediate treatment for individuals with CD4 counts greater than 350. But they compare these patients to an extraordinarily compliant group who maintained their connection to the clinic and repeated their lab tests even though their treatment was at first deferred.
Instead of looking for medical outcomes, the new paper, by Jacob Bor, Till Baernighausen, and coauthors, focuses on the important mediating variable of patient retention. Patients who do not return after an initial CD4 test cannot possibly benefit from treatment. Using the “regression discontinuity method,” the authors test the hypothesis that offering treatment to individuals improves their subsequent retention in the pre-ART and ART treatment program. Figure 1 compares retention results between the three previous clinical trials, which virtually guaranteed patient retention, and the authors’ study of the “Hlabisa compliers” using these innovative methods.
Figure 1. Comparison of three previous studies of the health effects of immediate vs. deferred ART to “compliers” in the current study shows the importance of patient behavior for treatment outcomes (Source: Bor et al., figure 4)
Instead of explicitly randomizing patients, asking for their consent to participate, and then deferring one group’s treatment, the regression discontinuity approach exploits the fact that routine patients are effectively randomized by the naturally occurring, day-to-day random deviations of their CD4 counts on either side of the treatment threshold. In contrast to the previous studies, which were designed to prevent eligibility from affecting retention, the new study finds that patients whose CD4 count happened to come in at 351 rather than 349 on the day they were tested, and who complied with the resulting recommendation, were fully 70 percentage points less likely to be enrolled and retained in treatment 12 months later.
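A stylized sketch of the idea, assuming hypothetical variable names and a simple local-linear specification (Bor et al.’s actual implementation differs in its details):

```python
# Stylized sharp regression discontinuity at the CD4 = 350 threshold.
# File path, variable names, bandwidth, and the local-linear form are
# all assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("patients.csv")  # one row per newly tested patient

# Eligibility: CD4 below 350 on the day of the baseline test.
df["eligible"] = (df["cd4_count"] < 350).astype(int)
df["cd4_centered"] = df["cd4_count"] - 350  # center the running variable

# Local linear fit within a bandwidth of 100 cells/uL, with separate
# slopes on each side; the coefficient on `eligible` estimates the
# jump in 12-month retention at the threshold.
window = df[df["cd4_centered"].abs() <= 100]
rd = smf.ols(
    "retained_12m ~ eligible + cd4_centered + eligible:cd4_centered",
    data=window,
)
print(rd.fit().summary())
```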
The results in the last two bars of Figure 1 above are for the subset of all patients called “compliers,” whose subsequent treatment accorded with the recommendation of the 350 threshold guideline. For the entire sample of 2,300 patients, binned into 32 subgroups by initial CD4 count, Figure 2 shows the dramatic discontinuity in 12-month retention on either side of the 350 threshold.
Figure 2. Immediate ART eligibility "causes" an increase of 18 percentage points in the percentage of patients retained in treatment at 12 months. (Source: Bor et al., figure 2)
In Figure 2, the 18 percentage point difference between the retention rates to the left and right of the threshold is less than the 70 percentage point difference in the right two bars of Figure 1. This is because Figure 1 excludes the non-compliers: both the “never-takers,” who were eligible but did not start ART, and the “always-takers,” who were not eligible but took ART anyway. The former depress the data on the left of Figure 2, while the latter inflate the data on the right. So the 18 percentage point difference in Figure 2 is closer to a real-world estimate of the retention disadvantage for people whose treatment is deferred because their CD4 count is too high—and much higher than the negligible difference in retention that would have been misleadingly estimated from the retention results in the first three sets of bars in Figure 1.
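The relationship between the two numbers is the standard “fuzzy” regression discontinuity arithmetic: the complier effect equals the jump in retention at the threshold divided by the jump in actual ART take-up at the threshold. Purely for illustration (the 26-percentage-point take-up jump below is our assumption, chosen so the arithmetic matches, not a figure from the paper):

```latex
\text{complier effect}
= \frac{\Delta\,\text{retention at the threshold}}{\Delta\,\text{ART take-up at the threshold}}
\approx \frac{0.18}{0.26} \approx 0.70
```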
So should we conclude, as the authors suggest, that the 350 threshold was an incorrect guideline from WHO and an incorrect policy choice for South Africa, producing more harm than good? To draw such a conclusion would require a much more complete assessment of the costs and benefits of the guideline’s implementation in a public health setting.
When resources are limited, individuals with CD4 counts above 350 can only be initiated on ART by reallocating resources from somewhere else. The most obvious possibility for reallocation would be to withhold ART from someone with a CD4 count below 350. Since the authors do not assess the capacity utilization of the ART clinics in their study, we don’t know whether such a displacement of a sicker person (with a lower CD4 count) by a healthier one (with a higher CD4 count) would have been required in KwaZulu-Natal. But such a displacement might well be required in other settings with much more limited resources.
As I suggest above, the missing discussion in the paper is that of the “opportunity cost” of a program-wide policy of early initiation. Deferring the treatment of a sicker person in order to begin the treatment of a healthier person is only one example of the “opportunity costs” that a country might incur from abandoning the 350 guideline. In general, the term “opportunity cost” refers to all the other valuable social ends that could have been accomplished with the financial and medical resources used to initiate ART for people with CD4 counts above 350. To neglect these other possible uses for these resources implies the belief that none of the initiated patients with a high CD4 count is displacing one with a lower count, that none of them is displacing another patient who urgently needs a vaccination or an appendectomy, and that the financing of the medical resources has no alternative possible useful purpose, such as expanding access to education for girls or building roads to rural areas.
Assuming that there are, in fact, opportunity costs to expanding ART access to those above a CD4 count of 350, the analysis is incomplete without that information. The new study tells us that the benefits of initiation might be much higher than previously thought. But the costs must also be considered. The authors plausibly suggest that the patient-level benefits they estimate for KwaZulu-Natal Province might be generalizable to the rest of sub-Saharan Africa. But not so the public health considerations, including the opportunity costs. The three types of opportunity costs I list above will vary greatly between poor countries with few donors and a high HIV prevalence rate and richer ones like South Africa. This paper should not be interpreted as evidence that thresholds are bad for public health.