This is a joint post with Yuna Sakuma.
This blog is the second in a series of three on the quality of PEPFAR’s HIV treatment programs. See the first blog in the series here.
It’s one thing to measure the quality of AIDS care; it’s another to understand how to improve it. Our last blog showed how the metaphor of the “treatment cascade” can be a useful way to conceptualize and measure the quality of AIDS care, and that PEPFAR-supported care has room for improvement on this measure (see more on the treatment cascade here). To achieve the health benefits of reducing patient attrition over the course of the treatment cascade, PEPFAR and its partners need to learn why some facilities do better than others and what factors contribute to treatment success.
Here we use the term “determinants” to refer collectively to factors that affect treatment success. Figure 1 depicts a simple model of the contribution of various determinants to the production of quality AIDS treatment services. The clinics or hospitals delivering antiretroviral therapy (ART) organize their available resources or “inputs” (1), using managerial and technical practices (2 and 3), with technical support and supervision (4), in order to help patients proceed successfully through the stages of the “treatment cascade” (5). The degree of success achieved by any given facility depends partly on components (1), (2), (3), and (4) of this production process.
By measuring these determinants, PEPFAR and host countries can find out how much of the variation in quality measures – like patient retention – can be explained by factors largely outside any given facility’s control (like patient traits), and how much can be explained by malleable factors like determinant categories (1), (2), (3) and (4).
Figure 1. A model of the production of ART Services
At PEPFAR’s October Scientific Advisory Board (SAB) meeting I was surprised and pleased to learn that the CDC has been collecting data at the facility level on three categories of the determinants of quality: the adequacy of input supplies (category 1) and conformity to norms of managerial and clinical practice (categories 2 and 3). Dr. Deborah Birx, Director of the CGH Division of Global HIV/AIDS, presented descriptive statistics on these data from PEPFAR-supported partners and sites in Rwanda and Tanzania. Here is what stood out to me:
A small number of facilities earn high scores on inputs and management. On supply chain management, one possible category (1) determinant, Dr. Birx showed that eight facilities in Bushenge, Rwanda maintained high scores over a 24-month period (Figure 2) and that in these same facilities a financial management score improved over that same period (Figure 3). She showed similarly high or improving scores for management of other key aspects of the service delivery process, including human resources, strategic information, and laboratory services. For 32 Rwandan facilities, she showed high and usually improving indices of input and managerial quality. This pattern was particularly striking because, during the 24-month period covered by the data, PEPFAR’s role in many of these facilities had shifted from direct service delivery to indirect service support. The data suggest that in the wake of this “transition” of responsibility from PEPFAR partners to the government, scores on these indices either remained high or actually improved.
Figure 2. Supply Chain Scores at Bushenge District Hospitals and Health Centers (Rwanda)
Figure 3. Financial Management Scores at Bushenge District Hospitals and Health Centers (Rwanda)
Scores on clinical checklists vary greatly. Supplementary data from the CDC show scores for the quality of technical and clinical practice (determinant category 3) at 203 treatment sites in Tanzania, each of which is supported by one of six PEPFAR contractors. Figure 4 shows the variation within and across the five PEPFAR partners in the indices for adult and pediatric care and treatment (one of the partners did not have scores for adult and pediatric care and treatment). The partners are ranked from left to right according to the median score on adult care and treatment (blue boxes). This seems encouraging, but we know too little to be reassured. How are the scores constructed? Are the scores objective? Replicable? Based on the right component measures? Are the 32 Rwandan facilities representative of all PEPFAR-supported Rwandan facilities, or of the thousands of facilities supported in other countries? We hope the CDC will publish papers that reveal these details.
Figure 4. Box and Whisker Plot Showing Proportion of “Most Satisfactory” (Score: 3) Adult Care and Treatment Scores and Pediatric Care and Treatment Scores by Partner
It’s hard to say what to make of these data. First, we see that the site scores for any one of the five partners vary greatly, and there is no apparent correlation between a partner’s typical scores on adult and on pediatric care. Wouldn’t we expect that high scores on adult care would correlate with high scores on pediatric care? One possible explanation for the wide variation and lack of correlation is that these scores contain no real information about the quality of care and are just white noise. This would be the case, for example, if the measurement process were subjective and inconsistent from one site to another.
Second, although we know the adult score is intended to capture seven aspects of treatment, we don’t know how each of those is measured. Perhaps scores are the simple sums of checks on some kind of checklist, such as those made famous by Atul Gawande. Or perhaps they are based on third-party judgments or on patient interviews. Or maybe some combination. Knowing more precisely how these scores are constructed would help us judge their validity as measures of the quality of these types of care.
On the whole, it’s great that the CDC has been collecting data on the determinants of quality AIDS treatment. But Dr. Birx’s sneak preview raises more questions than answers. It leaves us wondering whether these scores were based on validated, reliable measurement approaches, such as those developed and applied in the journals Health Services Research and Operations Research. And we wonder why all the measurement covers determinant categories (1), (2), and (3) and none of it covers the quality of the partners’ supervision and technical support, which in our Figure 1 is category (4). But the most intriguing and important question we are left with is whether any of the indices Dr. Birx presented are actually correlated with any aspect of treatment quality that appears in the treatment cascade. An index that can be shown to predict patient retention or viral suppression is thereby pretty well validated, regardless of whether anyone in the decades-long history of health services research has ever previously validated it. And an index that shows no correlation with health outcomes is suspect even if it has an old and respected academic pedigree.
Perhaps soon we will see further analysis from CDC of how well these interesting indices predict treatment quality. Or even better, CDC will post these data on the web, along with site-matched measures of patient retention and other aspects of treatment quality, so that the collective talents of the global public health community can explore them for useful insights on how to save more patients with PEPFAR dollars.
Stay tuned for our third and final blog in the series on the role of treatment quality in the transition of PEPFAR programs to greater country ownership.
The authors thank Dr. Deborah Birx, Director of the CGH Division of Global HIV/AIDS, for sharing the slides from her great presentation at the October 2013 SAB meeting, as well as additional data on the CDC’s site monitoring studies (SMS).
Adult care and treatment consists of seven metrics: reference materials, adherence support, ART eligibility, cotrimoxazole, nutrition access, patient tracking, and PHDP.
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.