With rigorous economic research and practical policy solutions, we focus on the issues and institutions that are critical to global development. Explore our core themes and topics to learn more about our work.
In timely and incisive analysis, our experts parse the latest development news and devise practical solutions to new and emerging challenges. Our events convene the top thinkers and doers in global development.
With shifting disease burdens, growing populations, and rising expectations comes a greater focus on value for money. International health funders and agencies want to know how to make the most of the money they spend by focusing on the highest-impact interventions among the most affected populations. Whether through better procurement systems for health commodities, results-based financing, or more detailed assessments of the effectiveness of health technology, CGD’s work aims to make health funding go further to save, prolong, and improve more lives.
Universal health coverage (UHC) is now firmly on the global health agenda, and carries with it the ambitious goal of providing “access to key promotive, preventive, curative and rehabilitative health interventions for all at an affordable cost.” So where do we start? A critical first step to delivering on the aspirations of UHC is deciding which services and policies to prioritize and make available. While resources for health care are growing, they are not infinite and hard choices must be made.
Priority-setting processes for health spending specify, at a minimum, the set of policies, services, and technologies that will be financed and made available under UHC. Some will also indicate which services or technologies will not be funded and provided. Ideally, the design of a priority-setting process is fair, transparent, inclusive, and deliberative. And in the best cases, the selection of these services is based on cost-effectiveness and accounts for equity, financial protection, and social values in a systematic way.
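The core selection logic – fund the most cost-effective services first, within a fixed budget – can be sketched in a few lines. The interventions, costs, and DALY figures below are entirely hypothetical, invented for illustration; real priority setting also weighs equity, financial protection, and social values, as noted above.

```python
# Illustrative only: greedy funding by cost-effectiveness ratio.
interventions = [
    # (name, cost in $ millions, thousands of DALYs averted)
    ("Childhood immunization", 20, 400),
    ("Insecticide-treated bednets", 10, 150),
    ("Hypertension screening", 15, 90),
    ("Advanced cardiac surgery", 50, 25),
]

def select_within_budget(options, budget):
    """Fund interventions in order of cost per DALY averted."""
    ranked = sorted(options, key=lambda o: o[1] / o[2])
    funded, remaining = [], budget
    for name, cost, dalys in ranked:
        if cost <= remaining:
            funded.append(name)
            remaining -= cost
    return funded

# With a budget of 50, the ranking funds immunization, bednets,
# and screening -- but not the far less cost-effective surgery.
print(select_within_budget(interventions, budget=50))
```

Even this toy version shows why process design matters: change the budget or the assumed effect sizes and the funded package changes, which is exactly why transparency and deliberation belong in the real thing.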
When priority-setting processes aren’t in place – that is, when resource allocation decisions are made based on past budgets or under pressure from interest groups – less health is provided for every dollar spent. Evidence of suboptimal allocation abounds: India subsidizes open-heart surgery while child vaccination rates remain low; Colombia purchases more analogue insulins per diabetic than any other country in Latin America while diabetes prevention and management programs remain underfunded; and Egypt spent a fifth of its public health budget sending a few fortunate people overseas for treatment while a fifth of its children were stunted. (For more examples, see our report on priority-setting institutions here.)
Under UHC, it will ultimately be up to countries to set their own priorities for health spending. And that’s great - reallocating a portion of public and donor monies toward the most cost-effective health interventions would save more lives and promote health equity. But too many low- and middle-income countries lack the fair and evidence-based processes and institutions needed to adequately inform funding decisions.
With that in mind, the Center for Global Development ran a working group on priority-setting institutions during 2011/12, recommending the creation and development of national and global systems to more rationally set priorities for public spending on health. The group called for an interim secretariat to incubate a global facility designed to help governments develop national systems and donors get greater value for money in their grants.
So we’re delighted to announce a new platform that does just that – the international Decision Support Initiative (iDSI). Recently launched by NICE International, the iDSI will support low- and middle-income governments, and perhaps donors, in making resource allocation decisions for healthcare. Specifically, the initiative will share experiences, showcase lessons learned, and identify practical ways to scale technical support for more systematic, fair, and evidence-informed priority-setting processes. In strengthening priority-setting institutions, the iDSI will be a tool to improve both access to effective health interventions and the quality and efficiency of health care delivery. And importantly, it will help elevate the value of priority setting as a necessary, if not sufficient, condition for attaining and sustaining UHC.
The full announcement from NICE International on the iDSI can be found here, and the strategic overview here. For more information on CGD’s work on priority setting, see the report brief here and a wonkcast on the topic here.
Yesterday the World Bank and the Global Fund announced a stronger partnership for health centered around an innovative aid mechanism, results-based financing (RBF). This partnership is precisely what our CGD report More Health for the Money recommended (see the chapter on designing contracts). As we’ve argued, RBF represents a paradigm shift in aid, moving away from checking receipts to measuring results. RBF creates incentives to save more lives for the same money by paying for health services only after pre-agreed results have been achieved and independently verified. We congratulate Jim Kim, Tim Evans, and Mark Dybul for their strong leadership, and the staff at both agencies, including Monique Vledder of the Bank, for their hard work to bring this partnership to fruition.
According to yesterday’s announcement, the Bank and the Fund will work in common countries to integrate and scale up services, enhance supply chains, and use common RBF platforms. This duo will arguably have far wider scale and uptake than the Health Systems Funding Platform, which was intended to bring the Bank, the Fund, and GAVI together but has yet to achieve widespread buy-in or results. In short, this represents a serious venture by the Fund to improve health systems.
Despite this progress, there is still room to improve the incentives between the Global Fund and its recipients. Yesterday’s announcement between the Fund and the Bank mainly focused on the subnational incentives, with little mention of the funder-country recipient incentives. The Bank has a channel to improve funder-country incentives through Program for Results (P4R) financing. But the Fund, at least prior to the New Funding Model, has weak funder-country incentives for performance.
At the Interagency Working Group on RBF on October 31, 2013, the first such meeting jointly hosted by the Global Fund and the World Bank, both levels of incentives were presented and discussed. I presented my paper published in Lancet Global Health, which identifies major challenges with the Fund’s current performance-based financing (PBF) system. These include performance indicators that are mainly inputs rather than outputs or outcomes, that are not rigorously measured and are often inaccurate, and that are only weakly linked to the money. At present, the Fund still needs to ensure that part of a grant’s funds is explicitly linked to performance.
That said, improving the quality of data and indicators used for subnational incentives through RBF, particularly with rigorous and robust measurement, will also improve the indicators used in funder-country recipient incentives. Moreover, the Fund has been considering a few “pilots” to improve funder-recipient incentives through CGD’s Cash on Delivery (COD) Aid model. Countries such as Rwanda and Benin are moving forward on some variant of COD, though the Fund still lacks a systematic strategy for piloting. Another promising development is that the Fund is drafting operational guidance for a “menu of options” for performance incentives: regional COD; COD; and of course RBF with the HRITF. But it remains to be seen how much voluntary uptake a menu will attract, and whether the Fund can articulate an overall policy that improves on its current system. This would be our Christmas wish. Stay tuned.
The Center for Global Development has a history of work on performance incentives. Rena Eichler and Ruth Levine (former CGD Vice President) convened an early CGD Working Group on performance incentives from February 2006 to 2009 and published a book on the topic, along with an accompanying video featuring several cases. Influenced by several program experiences with results-based financing and early discussions in the CGD working group, the World Bank launched the Health Results Innovation Trust Fund (HRITF) in December 2007 with Norwegian and, later, UK funds. The HRITF was founded in part conditional on the creation of an Interagency Working Group (IWG) on Results-Based Financing that met twice a year. The IWG was first co-chaired by Levine and Amie Batson (a member of the CGD working group, then at the World Bank, and later USAID Deputy Assistant Administrator) (see here and here). Amanda Glassman, CGD’s current director of global health policy and senior fellow, participated in the CGD working group, is now a member of the IWG, has blogged about the HRITF, and led Salud Mesoamerica 2015, a related RBF mechanism, while at the IDB.
Value for money was at the top of our agenda this year, so I was pleased to see the topic also top the list of CGD’s most popular Global Health Policy blogs in 2013. The rest of this year’s list is a mixed bag, reflecting a number of debates that will likely stick around in 2014 (data for development, universal health coverage, and the state of global health financing, to name a few).
Check out the full list below, and leave a comment to tell us what you’d like to see more (or less) of in 2014. As always, thanks for your continued readership and we look forward to bringing you more lively, evidence-based discussions in the year ahead!
Over the last few months, we have been busy tracking and analyzing a number of notable developments in the global AIDS space. So in commemoration of World AIDS Day, marked annually on December 1st, here is a roundup of what we’ve been talking about, complete with links to our most recent work:
The US Congress passed the PEPFAR Stewardship and Oversight Act of 2013, extending PEPFAR’s congressional authorization for another five years and adding some important new requirements for better reporting. We were particularly pleased to see that the bill requires annual targets for prevention, treatment, and care efforts, including a description of how those targets will reduce the number of new HIV infections below the number of deaths among persons infected with HIV (my colleague Mead Over calls this the AIDS transition).
PEPFAR recently conducted its first impact evaluation workshop to support country teams that want to design and oversee impact evaluations for their programs. If successfully carried out, these evaluations will help PEPFAR learn a lot about what makes their HIV/AIDS programs work (or not work). CGD’s Mead Over served on the “faculty” of the workshop, and shares his experience here.
Data presented at PEPFAR’s October 2nd Scientific Advisory Board (SAB) meeting show that African countries are struggling to retain patients on AIDS treatment, particularly at two important stages in the continuum of care: from diagnosis to care, where Africa loses 41% of patients, and from initiation of treatment to continued retention, where Africa loses 30% of patients. While other studies have found higher rates of retention, the issue of retention and its importance to treatment success and the avoidance of drug resistance is now on the agenda. New data from the CDC – also highlighted at the SAB – show why some facilities do better than others and what factors contribute to treatment success.
And finally, we’ll be watching the Global Fund’s fourth replenishment meeting this week, where it hopes to raise $15 billion to support its work for the next three years. Check back on our blog for new analysis of these developments and others in the weeks to come.
The Global Fund to Fight AIDS, TB and Malaria will host its fourth replenishment meeting this week in Washington, DC where it’s hoping to raise $15 billion to support its work for the next three years. On the eve of the replenishment, the BBC will air a 30-minute segment on its show Panorama titled “Where’s Our Aid Money Gone” that – judging by the synopsis – will likely take a more critical view of the Global Fund than much of its recent press (see here, here, and here). Here is the teaser from BBC’s site:
Supported by celebrities like Bono and Bill Gates, the Global Fund has spent almost £15 billion fighting AIDS, malaria and tuberculosis. But its inspector general was sacked for 'unsatisfactory' performance after exposing corruption, and reports revealing how aid money went missing have been delayed. Richard Bilton challenges those responsible, and questions the UK government's decision to hand over another £1 billion of taxpayers' money to the Fund this autumn.
I was interviewed for this segment to discuss CGD’s recent report ‘More Health for the Money’, which offers recommendations for how the Global Fund can get more value for money from its investments in health. The interviewers were particularly interested in a recent report from the Global Fund’s Inspector General (IG) that uncovered evidence of financial misconduct and corruption related to a Global Fund grant in Cambodia. I couldn’t speak to this particular case, but made some general observations about corruption and international aid more broadly.
Where’s Our Aid Money Gone?
Monday, December 2
7:30pm (UK); 2:30pm (US)
The interview lasted 90 minutes, so only a few of my comments will likely make the half-hour show. But here are some points that I think are worth highlighting, no matter what ends up in the final version of the show:
First, corruption and fraud happen. This isn’t a problem unique to the Global Fund; every public spending program, and every private sector firm, experiences fraud and corruption at some point in time. As one very experienced participant in a recent CGD workshop on corruption noted, “if you look, you will find.” The United States’ own Medicare program detected around $40 billion in improper payments in 2012.
What’s important is that corruption and fraud are detected and dealt with fairly and transparently. To some extent, that’s what the Global Fund has done. When the Fund found corruption two years ago, it took action to strengthen its financial oversight and change its procurement processes. In the recent Cambodia case, the total amount of misused funds was modest ($431,567 out of the $86.9 million of expenditures that the IG reviewed), and action has been taken to recover the monies. However, in Cambodia, the IG was able to audit only 39 percent of expenditures from 2003-2010; the rest could not be tracked. Further, it took two years for the findings of the investigation to be published.
As my colleague Bill Savedoff pointed out two years ago, we still don’t have a way to assess how representative this case of corruption may be. But the Global Fund’s continued commitment to open investigations and reporting should be praised, not slammed, and improvements should be encouraged.
Second, part of the explanation for fraud and corruption lies with the incentives implicit and explicit in the relationship between any kind of payer (an aid agency like the Global Fund in this case) and their recipients – economists call this the “principal-agent problem”.
In past years, the incentives and accountability in the relationship between the Fund, Country Coordinating Mechanisms (CCMs), and recipients have been diffuse: performance-based funding didn’t always reward performance; performance itself was defined weakly and measured in an ad hoc manner; and CCMs had built-in conflicts of interest in their structures and lacked the resources to conduct oversight fairly. Taken together, the incentive structure did little to promote accountability for efficiency or results.
The Fund has recently strengthened its fiduciary controls by increasing the frequency and rigor of audits. This new approach creates stronger incentives for recipients to be more careful about receipts and adhering to agreed budgets, but has also had a chilling effect on spending as recipients worry whether they are using the money “correctly.” And the structural issues around performance, performance measurement and CCM remain challenges to be solved.
Finally, and perhaps most importantly, the challenges the Global Fund faces – related to corruption, grant management, or otherwise – have solutions. The international response shouldn’t be to abandon the Global Fund, but rather to insist on reforms to strengthen the institution. For instance, at CGD we have suggested the Fund strengthen its measurement of results by shifting verification of self-reported data to organizations with the capacity to conduct rigorous and representative evaluations. We also think the Fund could greatly improve its performance-based financing mechanism by directly linking a portion of funding to results in all of their grants.
I’m glad the BBC is dedicating 30 minutes of its programming to the Global Fund. But I hope the message isn’t that UK taxpayers – or those of any other donor country – should be wary of their government’s investment in the Fund. After all, AIDS is no longer a death sentence for millions of people in the world largely because of programs supported by the Global Fund (as well as the U.S. President’s Emergency Plan for AIDS Relief).
We aren’t better off without the Global Fund. We are better off with a better Global Fund. I hope this is reflected on the BBC program, as well as in the pledges at the replenishment meeting on December 3. And most importantly, I hope it is reflected in greater progress against AIDS, TB and malaria in the months to come as we begin to see the results of recent Global Fund reforms. I’ll be watching all three to find out.
Last year, PEPFAR issued guidelines encouraging country staff to submit a proposal to conduct an “impact evaluation” (IE) as part of their annual Country Operation Plan (COP). It received only four submissions, of which three were funded. But it also learned that many PEPFAR staff – who are mostly program implementers, or the managers of program implementers – didn’t fully understand what they were being asked to do: what does PEPFAR mean by “impact evaluations”?
In response, PEPFAR has started to conduct impact evaluation workshops in order to support the in-country teams who want to include an impact evaluation proposal in their March 2014 COP submission. I recently had the pleasure of serving on the faculty of the first impact evaluation workshop in Harare, Zimbabwe.
What was it like for a think-tank guy like me to be involved in this exercise? It was interesting, exhilarating and pretty hard work.
Small teams of program staff came from South Africa, Tanzania, Uganda and Zambia (Zimbabwe sent a few observers). Each team included representatives from the government, from PEPFAR and from a PEPFAR implementing partner (PEPFAR-speak for a local or international contractor). Unlike the participants at most of the donor-funded workshops I have attended over the last three decades, these participants were extraordinarily engaged. They had all come with one or more candidate research questions, they all worked to fill out a research template, and they all attended almost every session from morning until night, often working late into the evening.
All 27 participants had heard of many of these ideas, but few had heard of all of them. And now they needed to choose those most relevant to their country context and put them together to come up with a study design that would pass muster, first with their constituencies back in Johannesburg, Dar es Salaam, Kampala and Lusaka, and then next March with OGAC’s Office of Research and Science.
On the way to the airport, I passed by Harare’s famous “balancing rocks.” They remind me of the task facing these PEPFAR teams: internally valid evidence, balanced on data from a sample, balanced on the choice of an experimental or quasi-experimental method, balanced on the foundation – the choice of a policy-relevant counterfactual and a pair of null and alternative hypotheses. The fact that the in-country teams are designing and overseeing the execution of this learning process means that, if they carry it through, they will learn a lot about what makes HIV/AIDS programs work or not work. We should see these studies contracted and launched within a year or so. For the sake of all the HIV infections they might avert and all the patients they might help, let’s hope so.
Earlier this month, Ambassador Goosby officially announced that he was stepping down from his role as Global AIDS Coordinator where he led the President’s Emergency Plan for AIDS Relief for the past four years. As my colleague Amanda blogged in anticipation of Dr. Goosby’s departure, his service will be remembered for strengthening the evidence base behind PEPFAR’s work. Indeed, Dr. Goosby established the “Office of Research and Science,” which was charged with the creation and management of the Scientific Advisory Board, the oversight of a $60 million NIH-funded research program to conduct rigorous combination HIV prevention trials, and most recently the promulgation of guidelines which encourage PEPFAR country staff to submit a proposal to conduct an “impact evaluation” as part of their annual Country Operation Plan (COP).
Swearing-in ceremony, September 17, 2009.
Would PEPFAR be as interested in evidence if, counterfactually, Ambassador Goosby had not accepted this appointment back in 2009?
All of this is a dramatic and welcome departure from a time in the not-so-distant past when “research” was a dirty word. Still, PEPFAR staff – who are program implementers, or the managers of program implementers, and often not familiar with research jargon – were left wondering what they were really being asked to do: what does PEPFAR mean by “implementation science” and “impact evaluation” (IE)?
To answer these questions, PEPFAR included a detailed description of what it meant by “impact evaluation” in the 2013 COP guidelines sent to all 72 country offices (and posted online here) and solicited proposals for additional funding so that country teams could conduct their own impact evaluations. The submissions were to be dramatically different from the traditional “Public Health Evaluations” PEPFAR had previously done. For the first time, PEPFAR staff and partners were asked to specify a “counterfactual,” which the guidelines explain is what would have happened without the intervention. In their IE proposals, teams were asked to clearly describe how they would construct that counterfactual and how they would estimate program achievements compared to it.
Here is a particularly challenging passage from the guidelines:
Impact Evaluation Methods
Impact evaluations (IE) use experimental approaches (e.g. randomization) to establish a counterfactual (i.e. what would have happened in the absence of the project) or quasi-experimental methods (e.g. comparison groups, advanced statistical and modeling techniques) when randomization is not feasible. As a result, they permit an accurate estimate of effectiveness through causal attribution of outcomes or impact to the program being evaluated as opposed to what would have happened in the absence of the program. IE hypotheses reflect these comparisons (the counterfactual). Note that randomization can often be achieved through “smart implementation” (i.e., rolling a program out in a randomized, controlled fashion) without the enormous costs and levels of monitoring necessary in a clinical randomized controlled trial to achieve regulatory approval of a new drug or to evaluate the efficacy of a new product. Because, by definition, IEs focus on real world effectiveness, they must be linked to the evaluation of a PEPFAR program. Proof-of-concept efficacy trials (with precisely defined and narrow objectives) as well as basic or investigational clinical research activities will not be considered for funding as IEs. (Source: PEPFAR.gov)
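To make the guidelines’ idea of a counterfactual concrete, here is a minimal difference-in-differences sketch: the before–after change in a comparison group stands in for what would have happened to the program group without the program. All numbers are simulated, and the estimate assumes the two groups would otherwise have followed parallel trends; this is not PEPFAR data or a prescribed method.

```python
# Difference-in-differences with hypothetical mean retention rates.
# The comparison group's change over time proxies the counterfactual.
program_before, program_after = 0.60, 0.75
comparison_before, comparison_after = 0.58, 0.63

observed_change = program_after - program_before              # what happened
counterfactual_change = comparison_after - comparison_before  # what would have
effect = observed_change - counterfactual_change

print(f"estimated program effect on retention: {effect:.2f}")  # 0.10
```

Comparing 0.75 to 0.60 alone would overstate the effect (0.15), because retention was improving everywhere; subtracting the comparison group’s 0.05 trend yields the causal estimate the guidelines are after.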
Last week I served “pro bono” as one of the “faculty” of the first of these PEPFAR impact evaluation workshops, held in Harare, Zimbabwe. It was an interesting and exhilarating experience to listen to, and answer questions from, highly motivated representatives of the five PEPFAR country teams that came to the workshop (read more about my experience at the workshop here). They all understand that PEPFAR aims to hand over program ownership as soon as any country is able to sustain the quality and scale of PEPFAR support. And they all seem determined to make the most of this learning opportunity to help their country’s program do better.
So as Ambassador Goosby looks back on his years at OGAC, he must occasionally wonder about the counterfactual to his own service there. How is OGAC different because he accepted President Obama’s call in 2009? Would another OGAC leader have moved as forcefully towards an evidence base for PEPFAR? Would the term “implementation science” ever have been invented – or endorsed by PEPFAR?
Unlike the situation in PEPFAR countries, where the large number of PEPFAR facilities offer opportunities for constructing pretty good counterfactuals, the question of how history would have been different if Ambassador Goosby had not come to DC will forever be beyond the reach of science. But I for one am convinced that few leaders could have done as much to put PEPFAR on a sound research footing as Dr. Goosby has done.
This blog is the second in a series of three on the quality of PEPFAR’s HIV treatment programs. See the first blog in the series here.
It’s one thing to measure the quality of AIDS care; it’s another to understand how to improve it. Our last blog showed how the metaphor of the “treatment cascade” can be a useful way to conceptualize and measure the quality of AIDS care, and that PEPFAR-supported care has room for improvement on this measure (see more on the treatment cascade here). In order to achieve the health benefits that would result from reducing patient attrition over the course of the treatment cascade, PEPFAR and its partners need to learn why some facilities do better than others and what factors contribute to treatment success.
Here we use the term “determinants” to refer collectively to factors that affect treatment success. Figure 1 depicts a simple model of the contribution of various determinants to the production of quality AIDS treatment services. The clinics or hospitals delivering antiretroviral therapy (ART) organize their available resources or “inputs” (1), using managerial and technical practices (2 and 3), with technical support and supervision (4), in order to help patients proceed successfully through the stages of the “treatment cascade” (5). The degree of success achieved by any given facility depends partly on components (1), (2), (3) and (4) of this production process.
By measuring these determinants, PEPFAR and host countries can find out how much of the variation in quality measures – like patient retention – can be explained by factors largely outside any given facility’s control (like patient traits), and how much can be explained by malleable factors like determinant categories (1), (2), (3) and (4).
Figure 1. A model of the production of ART Services
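The decomposition described above can be sketched as a regression exercise: fit a facility-level quality measure on the fixed factors alone, then add indices for the malleable determinant categories and see how much extra variance they explain. Everything here – the indices, effect sizes, and data – is simulated for illustration, not drawn from PEPFAR or CDC data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of facilities

# One fixed factor (outside the facility's control) and two malleable indices
patient_mix = rng.normal(size=n)   # patient traits
inputs_idx = rng.normal(size=n)    # category (1): inputs
mgmt_idx = rng.normal(size=n)      # categories (2)-(3): practices

# Simulated 12-month retention driven by all three, plus noise
retention = (0.3 * patient_mix + 0.4 * inputs_idx
             + 0.4 * mgmt_idx + rng.normal(scale=0.5, size=n))

def r_squared(y, X):
    """In-sample R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_fixed = r_squared(retention, patient_mix.reshape(-1, 1))
r2_full = r_squared(retention,
                    np.column_stack([patient_mix, inputs_idx, mgmt_idx]))
print(f"explained by patient mix alone: {r2_fixed:.2f}")
print(f"added by malleable determinants: {r2_full - r2_fixed:.2f}")
```

The gap between the two R² values is the share of quality variation attributable to factors a program could actually change – the quantity of policy interest in the paragraph above.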
At PEPFAR’s October Scientific Advisory Board (SAB) meeting, I was surprised and pleased to learn that the CDC has been collecting data at the facility level on three categories of the determinants of quality: the adequacy of input supplies (category 1) and conformity to norms of managerial practice and clinical practice (categories 2 and 3). Dr. Deborah Birx, Director of the CGH Division of Global HIV/AIDS, presented descriptive statistics on these data from PEPFAR-supported partners and sites in Rwanda and Tanzania. Here is what stood out to me:
A small number of facilities are accorded high scores on inputs and management. On supply chain management, one possible category (1) determinant, Dr. Birx showed that eight facilities in Bushenge, Rwanda have had high scores over a 24-month period (Figure 2) and that in these same facilities a financial management score improved over that same period (Figure 3). She showed similarly high or improving scores for management of other key aspects of the service delivery process, including human resources, strategic information, and laboratory services. For 32 Rwandan facilities, she showed high and usually improving indices of input and managerial quality. This pattern was particularly striking because, during the 24-month period covered by the data, PEPFAR’s role had shifted in many of these facilities from direct service delivery to indirect service support. The data suggest that in the wake of this “transition” of responsibility from PEPFAR partners to the government, scores on these indices either remained high or actually improved.
Figure 2. Supply Chain Scores at Bushenge District Hospitals and Health Centers (Rwanda)
Source: Deborah Birx, “Transition of Track 1.0 Partners,” presented at PEPFAR SAB Meeting held October 2-3, 2013, Crystal City, VA
Figure 3. Financial Management Scores at Bushenge District Hospitals and Health Centers (Rwanda)
Source: Deborah Birx, “Transition of Track 1.0 Partners,” presented at PEPFAR SAB Meeting held October 2-3, 2013, Crystal City, VA
Scores on clinical checklists vary greatly. Supplementary data from the CDC show scores for the quality of technical and clinical practice (determinant category 3) at 203 treatment sites in Tanzania, each of which is supported by one of six PEPFAR contractors. Figure 4 shows the variation within and across the five PEPFAR partners in the indices for adult and pediatric care and treatment (one of the partners did not have scores for adult and pediatric care and treatment). The partners are ranked from left to right according to the median score on adult care and treatment (blue boxes). This seems encouraging, but we know too little to be reassured. How are the scores constructed? Are the scores objective? Replicable? Based on the right component measures? Are the 32 Rwandan facilities representative of all PEPFAR-supported Rwandan facilities, or of the thousands of facilities supported in other countries? Hopefully the CDC will publish papers that reveal all these details.
Figure 4. Box and Whisker Plot Showing Proportion of “Most Satisfactory” (Score: 3) Adult Care and Treatment Scores and Pediatric Care and Treatment Scores by Partner
Source: Deborah Birx, supplemental data to “Transition of Track 1.0 Partners,” presented at PEPFAR SAB Meeting held October 2-3, 2013, Crystal City, VA
Note: Ranked from left to right according to the median score on adult care and treatment
It’s hard to say what to make of these data. First, we see that the site scores for any one of the five partners vary greatly and there is no apparent correlation between the typical scores of a partner on adult and on pediatric care. Wouldn’t we expect that high scores on adult care would correlate with high scores on pediatric care? One possible explanation for the wide variation and lack of correlation is that these scores contain no real information about the quality of care and are just white noise. This would be the case, for example, if the measurement process were subjective and inconsistent from one site to another.
Second, although we know the adult score is intended to capture seven aspects of treatment, we don’t know how each of those is measured. Perhaps scores are the simple sums of checks on some kind of checklist, such as those made famous by Atul Gawande. Or perhaps they are based on third-party judgments or on patient interviews. Or maybe some combination. Knowing more precisely how these scores are constructed would help us judge their validity as measures of the quality of these types of care.
On the whole, it’s great that the CDC has been collecting data on the determinants of quality AIDS treatment. But Dr. Birx’s sneak preview raises more questions than answers. It leaves us wondering whether the techniques used to produce these scores were based on validated, reliable measurement approaches, as developed and applied, for example, in the journals Health Services Research and Operations Research. And we wonder why all the measurement covers determinant categories (1), (2) and (3), and none covers the quality of the partner’s supervision and technical support, which in our Figure 1 is category (4). But the most intriguing and important question we are left with is whether any of the indices Dr. Birx presented are actually correlated with any aspect of treatment quality that appears in the treatment cascade. An index that can be shown to predict patient retention or viral suppression is thereby pretty well validated, regardless of whether anyone in the decades-long history of health services research has ever previously validated it. And an index that shows no correlation with health outcomes is suspect even if it has an old and respected academic pedigree.
Perhaps soon we will see further analysis from CDC of how well these interesting indices predict treatment quality. Or even better, CDC will post these data on the web, along with site-matched measures of patient retention and other aspects of treatment quality, so that the collective talents of the global public health community can explore them for useful insights on how to save more patients with PEPFAR dollars.
Stay tuned for our third and final blog in the series on the role of treatment quality in the transition of PEPFAR programs to greater country ownership.
The authors thank Dr. Deborah Birx, Director of the CGH Division of Global HIV/AIDS, for sharing the slides from her great presentation at the October 2013 SAB meeting, as well as additional data on the CDC’s site monitoring studies (SMS).
Note: Adult care and treatment consists of seven metrics: reference materials, adherence support, ART eligibility, cotrimoxazole, nutrition access, patient tracking, and PHDP.