Earlier this month, Ambassador Goosby officially announced that he was stepping down from his role as Global AIDS Coordinator, where he has led the President’s Emergency Plan for AIDS Relief for the past four years. As my colleague Amanda blogged in anticipation of Dr. Goosby’s departure, his service will be remembered for strengthening the evidence base behind PEPFAR’s work. Indeed, Dr. Goosby established the “Office of Research and Science,” which was charged with the creation and management of the Scientific Advisory Board, the oversight of a $60 million NIH-funded research program to conduct rigorous combination HIV prevention trials, and most recently the promulgation of guidelines that encourage PEPFAR country staff to submit a proposal to conduct an “impact evaluation” as part of their annual Country Operational Plan (COP).
Swearing-in ceremony, September 17, 2009.
Would PEPFAR be as interested in evidence if, counterfactually, Ambassador Goosby had not accepted this appointment back in 2009?
All of this is a dramatic and welcome departure from a time in the not-so-distant past when “research” was a dirty word. Still, PEPFAR staff – who are program implementers, or the managers of program implementers, and often not familiar with research jargon – were left wondering what they were really being asked to do: what does PEPFAR mean by “implementation science” and “impact evaluation” (IE)?
To answer these questions, PEPFAR included a detailed description of what it meant by “impact evaluation” in the 2013 COP guidelines sent to all 72 country offices (and posted online here) and solicited proposals for additional funding so that country teams could conduct their own impact evaluations. The submissions were to be dramatically different from the traditional “Public Health Evaluations” PEPFAR had previously done. For the first time, PEPFAR staff and partners were asked to specify a “counterfactual,” which the guidelines explain is what would have happened without the intervention. In their IE proposals, teams were asked to describe clearly how they would construct that counterfactual and how they would estimate program achievements compared to it.
Here is a particularly challenging passage from the guidelines:
Impact Evaluation Methods
Impact evaluations (IE) use experimental approaches (e.g. randomization) to establish a counterfactual (i.e. what would have happened in the absence of the project) or quasi-experimental methods (e.g. comparison groups, advanced statistical and modeling techniques) when randomization is not feasible. As a result, they permit an accurate estimate of effectiveness through causal attribution of outcomes or impact to the program being evaluated as opposed to what would have happened in the absence of the program. IE hypotheses reflect these comparisons (the counterfactual). Note that randomization can often be achieved through “smart implementation” (i.e., rolling a program out in a randomized, controlled fashion) without the enormous costs and levels of monitoring necessary in a clinical randomized controlled trial to achieve regulatory approval of a new drug or to evaluate the efficacy of a new product. Because, by definition, IEs focus on real world effectiveness, they must be linked to the evaluation of a PEPFAR program. Proof-of-concept efficacy trials (with precisely defined and narrow objectives) as well as basic or investigational clinical research activities will not be considered for funding as IEs. (Source: PEPFAR.gov)
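For readers unfamiliar with the jargon, the counterfactual logic in the passage above can be made concrete with a toy simulation. This is only an illustrative sketch, not PEPFAR's methodology: all facilities, outcomes, and effect sizes below are invented, and the “program” is hypothetical. The point is simply that when a program is rolled out to a random subset of sites (“smart implementation”), the untreated sites approximate the counterfactual, so a simple difference in means estimates the program's impact.

```python
# Toy illustration of counterfactual-based impact evaluation.
# All data are simulated; nothing here reflects actual PEPFAR figures.
import random

random.seed(42)

n_facilities = 200
# Hypothetical baseline outcome at each facility, e.g. % of patients retained in care.
baseline = [random.gauss(50, 10) for _ in range(n_facilities)]

# "Smart implementation": roll the program out to a random half of facilities.
treated_ids = set(random.sample(range(n_facilities), n_facilities // 2))
is_treated = [i in treated_ids for i in range(n_facilities)]

TRUE_EFFECT = 5.0  # hypothetical true program impact, in percentage points
outcome = [b + (TRUE_EFFECT if t else 0.0) + random.gauss(0, 3)
           for b, t in zip(baseline, is_treated)]

mean_treated = (sum(y for y, t in zip(outcome, is_treated) if t)
                / len(treated_ids))
mean_control = (sum(y for y, t in zip(outcome, is_treated) if not t)
                / (n_facilities - len(treated_ids)))

# Because assignment was random, the control mean approximates the counterfactual:
# what treated facilities would have looked like WITHOUT the program.
estimated_effect = mean_treated - mean_control
print(f"Estimated program effect: {estimated_effect:.1f} percentage points")
```

With random assignment, the estimate converges on the true effect as the number of facilities grows; without randomization, the quasi-experimental methods the guidelines mention (comparison groups, statistical modeling) are needed to stand in for the missing counterfactual.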
Last week I served “pro bono” as one of the “faculty” of the first of these PEPFAR impact evaluation workshops, held in Harare, Zimbabwe. It was an interesting and exhilarating experience to listen to, and answer questions from, highly motivated representatives of the five PEPFAR country teams that came to the workshop (read more about my experience at the workshop here). They all understand that PEPFAR aims to hand over program ownership as soon as any country is able to sustain the quality and scale of PEPFAR support. And they all seem determined to make the most of this learning opportunity to help their country’s program do better.
So as Ambassador Goosby looks back on his years at OGAC, he must occasionally wonder about the counterfactual to his own service there. How is OGAC different because he accepted President Obama’s call in 2009? Would another OGAC leader have moved as forcefully towards an evidence base for PEPFAR? Would the term “implementation science” ever have been invented – or endorsed by PEPFAR?
Unlike the situation in PEPFAR countries, where the large number of PEPFAR facilities offer opportunities for constructing pretty good counterfactuals, the question of how history would have been different if Ambassador Goosby had not come to DC will forever be beyond the reach of science. But I for one am convinced that few leaders could have done as much to put PEPFAR on a sound research footing as Dr. Goosby has done.