What the Pre-Post Evaluation of AMFm Can Tell Us

September 04, 2012


This is a joint post with Heather Lanthorn, a doctoral candidate at the Harvard School of Public Health. In mid-July, amid a busy global-health month that also included the Family Planning Summit and the International AIDS Conference, the near-final draft of the independent evaluation of the Affordable Medicines Facility - Malaria (AMFm) was released.
What is AMFm? It is a financing initiative designed to (1) increase the supply of antimalarial drugs, specifically quality-assured artemisinin-based combination therapies (ACTs), through price negotiations and co-payments at the top of the supply chain, and (2) increase demand for ACTs through country-specific supporting interventions.
This evaluation report represents quite an expansion: an earlier AMFm evaluation report published in March 2012 ran 194 pages, while we, and the Global Fund, now have 675 pages to consume and process (that March report was a teaser, wasn’t it?). The authors deserve credit for organizing a seven-country evaluation over a short period of time: kudos to the folks at ICF International and LSHTM, as well as the ‘data contributors’ from five different organizations working in sub-Saharan Africa.

Never intended to be an experiment or quasi-experiment, the pre-post evaluation of the AMFm has so far been interpreted cautiously and optimistically; we’re encouraged by this. But because the evaluation considers trends in the outcomes of interest before and after AMFm only in the chosen AMFm countries, it lacks a counterfactual or comparison group. At a minimum, it would be helpful to know whether similar trends in the outcomes also occurred in non-AMFm countries, or to have more measurements from before AMFm began.

As decided during a 2008 Global Fund Board meeting (the 18th, in New Delhi, GF/B18/7), it is now time for the Global Fund Board – hosts of the AMFm – to determine whether to “expand, accelerate, terminate or suspend the AMFm.” Though other considerations, such as funding, will undoubtedly play a role, the Board will rely in large part on this independent technical evaluation to determine the future of the AMFm within the Global Fund. The focal points of the evaluation are the benchmarks of success laid out in January 2011 to assess the availability and affordability of ACTs, including to the vulnerable. More specifically, the benchmarks set specific thresholds for:
  • Availability: “The proportion of all facilities, private and public, stocking [quality-assured ACTs, including those with the AMFm ‘green leaf’ logo; hereafter: c/QA.ACTs], among outlets with any antimalarials in stock at the time of the survey”
  • Market share: “Total volume of c/QA.ACTs sold or distributed as a proportion of the total volume of all antimalarials sold or distributed in the last week [7 days].”
  • Use: “Proportion of children under age 5 with fever who received a c/QA.ACT on the day that the fever started or on the following day”
  • Price: the price of an “adult equivalent treatment dose” of c/QA.ACTs relative to that of the dominant non-c/QA.ACT antimalarial, generally SP or chloroquine
At first glance, things appear to have been moving in the right direction over time, especially in countries with longer implementation periods. Facilities are stocked with the AMFm-subsidized antimalarial drugs (ACTs), and prices of ACTs are down – particularly in the private sector, which is where most people get their antimalarial drugs and where AMFm was expected to have the largest impact (and in part why the AMFm approach was both novel and controversial). As for households, the evaluation has not yet covered them, but word on the street suggests that use of ACTs is also up.

But of course, the goal of rigorous evaluation is not to examine whether the overall malaria treatment situation in the Phase I countries has improved over time, but rather the extent to which that change was caused by AMFm and, therefore, which moving-forward option is best for the AMFm. In many cases, other major malaria initiatives have been underway in Phase I countries, including the regular Global Fund grants and efforts by the US President’s Malaria Initiative and others, which also use supply-side interventions, including efforts to reduce stock-outs, but which are likely (though not certain) to focus on government-run drug distribution systems. The AMFm pilot represents a large input of resources – $225M – though it is smaller than the $540M in PMI funding and $320M in Global Fund malaria grants (which include some additional funds for AMFm in-country implementation support) in 2011. These, and other contextual factors, are noted in the report for each evaluated country. The AMFm evaluators do explicate a Theory of Change, which incorporates contextual factors in order to assist in attribution, if only qualitatively.
The contextual factors included with each case study are: (1) the over-the-counter (OTC) status of ACTs; (2) concurrent malaria interventions, such as bednet distribution, other ACT subsidization schemes, indoor residual spraying (IRS), rapid diagnostic test (RDT) roll-out, and information campaigns; (3) political support for AMFm; (4) political stability; and (5) rainfall.

Still, can we really know whether the AMFm is ‘working’ based on the independent evaluation alone? Unfortunately, no. Indeed, the AMFm – and its evaluation – was never planned as a randomized experiment or even a quasi-experiment with a proper counterfactual. An experimental approach would have faced many logistical and technical challenges. Within a country, the use of private-sector supply chains and mass media made a phased-in roll-out difficult. Across countries, not only are there wide idiosyncrasies, but the ‘most likely comparator’ country in many cases was also included in the pilot, in part to take advantage of geographic clustering (e.g., Kenya and Tanzania). Moreover, a real-world implementation with several moving parts takes time, and no country had all three strands of the AMFm – (1) ACT manufacturer price negotiations, (2) application of a co-payment to the qualified ACTs procured by and distributed within the country, and (3) supporting interventions to facilitate the AMFm and the appropriate use of ACTs – running for the planned 18 to 24 months.

Given these challenges, the evaluation is a series of seven country case studies with a pre-post evaluation in each. Such studies run the risk of unduly attributing changes seen over time to AMFm. At a minimum, some tracking of whether the trends observed in AMFm-pilot countries are also seen in non-pilot countries would be helpful; this could serve as a basic, crude, and imperfect counterfactual.
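The attribution worry can be made concrete with a toy calculation (all numbers below are hypothetical, purely for illustration): a pre-post design credits the program with the entire change in an outcome, while even a crude comparison group nets out whatever background trend was underway anyway.

```python
# Toy illustration (hypothetical numbers) of why a pre-post comparison can
# overstate a program's effect when the outcome is trending upward regardless.
# "Availability" here stands for the share of outlets stocking QA ACTs.

pilot_before, pilot_after = 0.40, 0.70        # hypothetical AMFm-pilot country
nonpilot_before, nonpilot_after = 0.35, 0.50  # hypothetical non-pilot country

# Pre-post estimate: attributes the whole change to the program.
pre_post = pilot_after - pilot_before

# Difference-in-differences: subtracts the trend seen in the comparison country.
did = (pilot_after - pilot_before) - (nonpilot_after - nonpilot_before)

print(f"pre-post estimate:         {pre_post:.2f}")
print(f"difference-in-differences: {did:.2f}")
```

With these made-up figures, the pre-post estimate is 0.30, but half of that change also occurred in the comparison country, leaving an adjusted estimate of 0.15. This is exactly why tracking outcomes in non-pilot countries, however imperfect, would sharpen the Board's decision.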
This matters because AMFm was a controversial approach to improving ACT access, and a decision will now be made about whether it ‘worked’ and whether and how to continue. An additional approach would have been more frequent measurements, ideally starting before AMFm began. An example is a newly published paper by Prashant Yadav and others using periodic retail audits (albeit only after AMFm began); they find a large increase in ACT availability in Tanzania, with prices at government-recommended levels.

Some will be convinced by the pre-post design that a sufficient share of the changes seen is attributable to AMFm; that seems possible, but we’re not convinced about how large AMFm’s effect is. But does that mean AMFm should be terminated or suspended entirely? Certainly not. Regardless of the precise merits of the evaluation, the evaluation alone does not represent all that the Board will need to consider in its decision. For one, it should be obvious that the sudden termination of a $225M supply-side market intervention would (again) distort the market, and likely for the worse. If only for this reason, AMFm should continue, potentially with modification, in the pilot countries where an attributable effect is believed to have occurred. But in Niger and Madagascar, where no effect was seen in the evaluation and implementation was somewhat limited, careful decisions are needed about whether to “suspend” or “terminate.” The authors of the independent evaluation treat each country as an idiosyncratic case, and the decision about the AMFm should follow this lead, considering next steps on a country-by-country basis (see here for related arguments on the ‘how,’ rather than the ‘if,’ of AMFm). In addition, the Board needs to consider adding programmatic elements largely left out of Phase I for a variety of reasons, such as using diagnostics to encourage patients to use ACTs correctly (rather than treating presumptively).
The Board needs to be clear and transparent about how it makes its decision. We – and especially countries – should understand its reasoning, including what it takes from the evaluation and what other criteria, anecdotal and otherwise, it considers. Decisions and plans should be made in consultation with each country and its National Malaria Control Program and plan. Moving forward, a targeted and tailored strategy will be needed if AMFm continues in some or all of the present pilot countries and expands into additional countries (funds and support permitting). Regardless of what is decided for the next phase of AMFm, we strongly recommend that resources be allocated, at a minimum, to tracking outcomes more frequently and also in non-AMFm countries.


CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.