Imagine you are the proud owner of a new desk calculator for totting up your household bills. You type in some test calculations: 7+5? Your calculator computes this as 23. You try 37-15 and get 84. What about 9 ÷ 3? Your calculator gives you -117. Disconcerted, you take your new purchase back to the shop. The shop assistant tells you not to worry. “Madam, although this calculator gives the wrong answers for simple calculations—the sort you can check in your head—it is perfectly accurate when it comes to more complex calculations.” If you are like us, this answer would not dissuade you from asking for your money back.
The last several years have seen the development of many decision-support tools (“value frameworks”) for supporting policy and investment decisions (see special issues of Value in Health for February 2017 and February 2018). These tools draw on many numbers representing factors of undoubted importance in decision making, and they synthesise this information into a decision-relevant score, ranking, or choice recommendation. Sometimes it is hard to trace how the numbers are combined; in other cases, although the relevant formulae are spelled out, it is hard to see why they are combined in a particular way.
Something which many of these tools have in common, however, is that when confronted with “no-brainer” decisions, they recommend the wrong choice. For example, the ASCO framework, by focusing on relative risk reduction, potentially favours interventions which reduce mortality risk by a negligible amount over those which save people from almost certain death; and by allowing other criteria to compensate for an absence of effectiveness, it can prioritise treatments which do not work at all over those which do. The apparent complexity of these frameworks does not compensate for the lack of clear thinking behind their design; worse, it makes it harder to see what drives the decision recommendations they generate. The moral has to be that those who develop and test such frameworks should be familiar with the basic concepts of economics and the decision sciences.
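The relative-risk problem is easy to make concrete with hypothetical numbers (these figures are ours, for illustration; they are not taken from the ASCO framework itself). An intervention that halves a tiny risk posts a higher relative risk reduction than one that rescues patients from near-certain death, even though the latter averts vastly more deaths per person treated:

```python
# Illustrative sketch with hypothetical risks: why ranking interventions by
# relative risk reduction (RRR) alone can favour a near-worthless option.

def relative_risk_reduction(baseline, treated):
    """RRR = (baseline risk - treated risk) / baseline risk."""
    return (baseline - treated) / baseline

def absolute_risk_reduction(baseline, treated):
    """ARR = baseline risk - treated risk (deaths averted per person treated)."""
    return baseline - treated

# Intervention A: mortality risk falls from 0.0002 to 0.0001 (negligible in absolute terms).
# Intervention B: mortality risk falls from 0.90 to 0.50 (almost certain death largely averted).
a_rrr = relative_risk_reduction(0.0002, 0.0001)  # ≈ 0.50
b_rrr = relative_risk_reduction(0.90, 0.50)      # ≈ 0.44
a_arr = absolute_risk_reduction(0.0002, 0.0001)  # ≈ 0.0001
b_arr = absolute_risk_reduction(0.90, 0.50)      # ≈ 0.40

# Ranking by RRR puts A above B, yet B averts thousands of times more
# deaths per person treated than A does.
```

A framework that scores on relative risk reduction alone would recommend A; any sensible decision maker would choose B.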
Global health development also has its fair share of value frameworks. And though they are all different, a notably common feature of frameworks developed by disease- or technology-specific funding conduits is their ad hoc approach to defining the key concepts of value for money and cost-effectiveness.
Gavi, the vaccines financier of the developing world, in its Vaccines Investment Strategy, defines value for money as “cost per death and case averted” within a framework resembling multiple-criteria decision analysis but laden with double counting and preferential dependence.
The Global Fund to Fight AIDS, Tuberculosis and Malaria’s 2017-2022 Strategy denotes key performance indicator 10 as its value for money indicator, which is defined as the “spend reduction in commodity purchases made within the Pooled Procurement mechanism for equivalent commodities at equivalent quality and volume.”
UNITAID’s recently launched strategy includes three key performance indicators under value for money: impact, efficiency, and “positive returns”—the last defined as a “return on investment = $ benefits / $ costs.” However, a ratio of benefits to costs says nothing about scale, so return on investment on its own cannot settle a choice between alternative options competing for the same pot of money.
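A two-project sketch shows why (the budget and project figures here are hypothetical, chosen only to illustrate the point): when options are mutually exclusive and draw on one fixed budget, the option with the highest ROI ratio need not be the one that delivers the most net benefit:

```python
# Illustrative sketch with hypothetical projects: ROI (= benefits / costs)
# versus net benefit (= benefits - costs) under a single fixed budget.

BUDGET = 100.0  # hypothetical pot of money

projects = {
    "A": {"cost": 1.0, "benefit": 10.0},     # ROI = 10, net benefit = 9
    "B": {"cost": 100.0, "benefit": 500.0},  # ROI = 5,  net benefit = 400
}

def roi(p):
    return p["benefit"] / p["cost"]

def net_benefit(p):
    return p["benefit"] - p["cost"]

# Only consider options the budget can actually pay for.
affordable = {name: p for name, p in projects.items() if p["cost"] <= BUDGET}

best_by_roi = max(affordable, key=lambda name: roi(affordable[name]))
best_by_net = max(affordable, key=lambda name: net_benefit(affordable[name]))

# ROI picks A (ratio 10 vs 5); but if A and B are mutually exclusive,
# choosing A forgoes 391 units of net benefit relative to choosing B.
```

The ratio rewards being cheap; a funder with a budget to exhaust cares about how much good the whole budget does, which is why incremental comparison against the alternatives matters.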
DFID’s 3E Value for Money framework is the only framework explicit about cost-effectiveness. However, it defines cost-effectiveness as an intervention’s “impact on poverty reduction relative to the inputs. . . invest[ed] in it” with no reference to alternative uses of the same resources.
The World Bank’s IDA and IBRD lending operations make little reference to value for money, with the Global Financing Facility’s investment cases offering policymakers no help as to how to choose what to pay for (assuming not everything can be afforded).
It is as if, in the world of global health, there are no budgetary constraints and, therefore, no trade-offs to be made, no choices between competing options to agonize about (though one might hope that the forthcoming replenishment of most of the above development partners’ funds may change all this).
It is easy to be blinded by the complexity of these value frameworks. However, the decisions which these frameworks are designed to support are consequential indeed—not only in terms of the health of the populations in the partner countries, but also in terms of public confidence that the money which goes into aid programmes is allocated wisely. The design of these frameworks suggests that they are developed with good intentions, but without adequate learning from the economic and decision science literatures about the principles that underpin sound value framework design, and without adequate empirical and conceptual piloting and testing. When policymakers find the decision recommendations which emerge from these frameworks clash with their expectations, they will be tempted to return the frameworks to the shop from which they bought them. The analysts responsible will want to have more convincing arguments than the calculator sales assistant of our opening paragraph.
Disclaimer
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.