Following wide criticism of the way it had been formulating and communicating its COVID-19 response, particularly its scientific rationale, the UK government has initiated daily public briefings on the evolving crisis. Last week we urged the government to improve its public communications during this outbreak, including by sharing the evidence underpinning its decisions, so the move to daily briefings and the release of relevant analyses is a welcome step. Earlier this week the government escalated its response to COVID-19; at the same time, researchers at Imperial College released the epidemiological modelling that seems to have been driving, presumably alongside additional evidence, the government’s response: both its earlier focus on mitigation and, following critical model updates, its new, more aggressive focus on suppression. (Colleagues at the Center for Global Development and around the world are reviewing the various modelling analyses that governments are employing.) Below we run through the UK government’s communications so far and outline recommendations for improving its communication even further by revamping its whole evidence-informed decision-making approach. In doing so, we ask how modelling and other evidence can more effectively inform the government’s response.
Communications so far
At the government’s first daily briefing, held on 16th March, the chief medical officer (CMO), the chief scientific adviser (CSA), and the prime minister announced tougher measures to deal with COVID-19. (The second briefing focused more on the economy.) They ramped up the country’s response significantly by calling on everyone to avoid nonessential travel, to work from home where possible, and to avoid socialising in pubs and theatres. In addition, the CMO and CSA noted that the scientific considerations informing the decision process remained focused on “flattening the curve” and reducing avoidable deaths.
Helpfully in our view, and for the first time since the outbreak began, the CMO, Professor Chris Whitty, outlined the potential health impacts of COVID-19 that are less related to the virus itself and more to the way we respond to it. He highlighted three distinct types of health impact: “direct deaths” caused by COVID-19; “indirect mortality” resulting from individuals receiving suboptimal care because of the increased burden the epidemic places on health services (as we have seen in places like Italy and China); and “wider effects,” which he did not specify but which presumably include the health impacts of a prolonged recession, from pension liabilities to social care and housing crises, amongst others.
Monday’s statement was perhaps the first signal that the government is considering (as it should) the net health impact of a coordinated government, and indeed global, response to COVID-19. Since then the chancellor has announced the biggest package of economic support in the country’s history, worth some 15 percent of the UK’s GDP. At the same time, calls are growing to explicitly consider the potentially detrimental, and most likely long-lasting, effects of response measures on health and wellbeing in the UK and beyond.
How should the government “do” evidence-informed policy-making better? Three broad recommendations
The modelling analysis published by the Imperial College COVID-19 Response Team on the same day as the first briefing seems to have contributed to the escalation of the response measures. But inevitably, uncertainties remain, fuelling calls for further transparency. Going forward, we recommend that the government set out clear rules for how the models and other evidence informing its decisions ought to be shaped, quality assured, and communicated, starting with the following three steps:
- Require that the economic implications and net health impacts of alternative intervention options be included in all modelled scenarios from the outset. This includes both the projected broader economic impact of the pandemic and that of the measures to control it (the latter perhaps carrying more serious economic implications than the pandemic itself, as shown in previous outbreaks). It also includes health economic analyses of the individual interventions for achieving control, whose effectiveness and cost-effectiveness will vary based on a wide range of “modellable” factors (e.g., a review of response measures to the 2009 influenza pandemic, drawn mostly from high-income countries, showed that school closures saved a life at a cost of almost $1m, while contact tracing and surveillance did so at under $4k). Far from placing a dollar value on human life, knowing the net (total) health effect of interventions will help governments save more lives now and later. This would make tough trade-offs explicit, as Professor Whitty hinted on Monday (see the illustrative sketch after this list).
Such netting out of lives saved would include those who recover from COVID-19 after being admitted to an intensive care unit, but also lives lost in the process because, for example, certain services are deprioritised and care quality is undermined in the short term (we have already seen examples here and here from China). It would also include lives lost to deteriorating economic conditions, with rising unemployment and widening inequalities, as the economy suffers in the longer term (e.g., see here and here).
- Encourage the use of open-source data and code-sharing platforms for all models and analyses used to inform policy. These platforms would enable modelling analyses to be shared, reviewed, improved on, and reused by experts around the world in real time. This would increase critical review and help improve the models themselves; identify weaknesses or inaccurate or out-of-date input data early on; and complement the academic journal peer-review process, which, in the current circumstances, is hardly adequate as the sole means of quality assurance. Other disciplines, like high-energy physics, molecular biology, and economics, have embraced open source in normal times to help solve intractable problems; medicine and epidemiology (especially when tax-funded) ought to as well (and hopefully they will continue to share models and data after the crisis is over!).
Whilst such a move will carry overheads in terms of providing accessible explanatory documentation and screening and/or responding to the resulting analyses and recommendations, it can also help build trust and so reduce the misinformation and wide-scale criticism of policies now flourishing in the media. It can also serve, as a true global public good, the needs of researchers and their governments in LMICs, where capacities are limited. We hear colleagues from centres in Africa are now using R (building on analyses such as this) to model their outbreaks and inform their respective governments’ responses. The UK ought to share as a matter of principle, and also because this is a truly global crisis that demands collective action.
- Most importantly perhaps, establish a non-political, open, multi-stakeholder, consultative, and evidence-informed deliberative process to inform and interpret modelling design and inputs and to help translate the findings into policy. This would help communicate the related trade-offs, including the economic implications of alternative decisions, and shield the expert academic community from unwanted press attention or political pressure (though thankfully we have not seen much of the latter so far). Such a process would be driven by an independent committee of experts, broadening out the membership and mode of working of the current government scientific committee, the Scientific Advisory Group for Emergencies, or SAGE (whose last published meeting minutes date from summer 2019), and the influenza modelling group SPI-M, to include academics from multiple research institutions and disciplines (operational researchers, epidemiologists, economists, biologists, clinicians, sociologists, ethicists…) as well as frontline professionals, NHS managers, public health doctors, and lay people.
Imagine the model and broader evidence synthesis as a house built by builders and engineers (the modellers, epidemiologists…) under the guidance of architects (a broad-spectrum advisory committee with government and frontline input as well as scientists) and with a view to meeting the needs of those who will live in it (the government at national and local levels implementing the advice and the people whom the government serves).
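To make the first recommendation concrete, here is a minimal, purely illustrative sketch (in Python) of what “netting out” the total health effect of competing interventions could look like. Every figure, and the simple lives-saved accounting itself, is an invented placeholder rather than an estimate or the modellers’ actual method; the point is only that direct, indirect, and economic channels can sit in one explicit calculation.

```python
# Purely illustrative: every figure below is an invented placeholder, not an
# estimate, and the accounting is a deliberate simplification of what a real
# health economic model would do.

from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    cost: float                     # total cost of the intervention (GBP)
    direct_deaths_averted: float    # COVID-19 deaths averted
    indirect_deaths_caused: float   # deaths from deprioritised/displaced care
    recession_deaths_caused: float  # longer-run deaths via economic damage

    @property
    def net_lives_saved(self) -> float:
        # The "netting out" the text describes: direct gains minus
        # short-term indirect losses minus longer-term economic losses.
        return (self.direct_deaths_averted
                - self.indirect_deaths_caused
                - self.recession_deaths_caused)

    @property
    def cost_per_net_life_saved(self) -> float:
        return self.cost / self.net_lives_saved

# Hypothetical options, loosely echoing the wide per-life cost range the
# 2009-pandemic review found for school closures vs. contact tracing.
options = [
    Intervention("school closures", cost=2.0e9,
                 direct_deaths_averted=3000,
                 indirect_deaths_caused=200,
                 recession_deaths_caused=500),
    Intervention("contact tracing + surveillance", cost=1.0e7,
                 direct_deaths_averted=2500,
                 indirect_deaths_caused=50,
                 recession_deaths_caused=20),
]

for opt in options:
    print(f"{opt.name}: net lives saved = {opt.net_lives_saved:,.0f}, "
          f"cost per net life saved = £{opt.cost_per_net_life_saved:,.0f}")
```

Even this toy version makes the trade-off explicit: the option that averts fewer deaths directly can still save more lives net, at a fraction of the cost per life.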
Five key technical considerations for future evidence
In the section above, we highlighted in broad terms how the government and its partners can improve the process of incorporating science and evidence into policymaking. Below, we focus on a number of specific areas that we argue would enhance the evidence base (including the modelling) and support better decisions.
As we await SAGE’s publication of the models and data underpinning its advice to government, we outline five issues that need addressing. Some may already have been addressed, but this cannot be ascertained from the data released so far (or their format):
All analyses that the government commissions or uses must abide by a set of ground rules reflecting decision makers’ needs, to ensure comparability and fitness for purpose. This can be achieved by adopting a reporting standard and a reference case (illustrated below). Such ground rules, setting out what the analyses the government considers ought to include and how they ought to be developed, are already followed by the government’s standing advisory bodies and committees, such as the JCVI (which advises on vaccines) and NICE (which advises on quality and on technologies used in the NHS), and increasingly in global health settings (iDSI). Specific areas such as agent-based modelling and health economic analyses also have reporting ground rules to facilitate communication and review.
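One lightweight way to operationalise a reference case is to make it machine-checkable, so that any analysis submitted to government can be screened automatically for missing elements. The sketch below is a hypothetical illustration: the field names are an invented, non-exhaustive subset loosely inspired by the elements reference cases like iDSI’s typically cover, not an official JCVI, NICE, or iDSI schema.

```python
# Hypothetical illustration only: the field names are an invented,
# non-exhaustive subset of what a reference case might require; this is not
# an official JCVI, NICE, or iDSI schema.

REQUIRED_FIELDS = {
    "perspective",            # e.g. health system vs. societal
    "comparators",            # alternatives the option is compared against
    "time_horizon_years",
    "discount_rate",
    "outcome_measure",        # e.g. deaths averted, QALYs
    "uncertainty_analysis",   # e.g. probabilistic sensitivity analysis
    "data_sources",
}

def missing_reference_case_fields(analysis: dict) -> list:
    """Return the required fields an analysis fails to declare."""
    return sorted(REQUIRED_FIELDS - analysis.keys())

# A submission that would be sent back for completion:
submission = {
    "perspective": "societal",
    "comparators": ["do nothing", "mitigation", "suppression"],
    "time_horizon_years": 2,
    "outcome_measure": "deaths averted",
}
print(missing_reference_case_fields(submission))
# -> ['data_sources', 'discount_rate', 'uncertainty_analysis']
```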
Models must rely on, and dynamically update based on, real-world and real-time data (which will, in turn, encourage real data collection rather than models becoming substitutes for data). This is why it matters to resume community testing in the UK, or at least to explain the plan of action better. Indeed, the latest dramatic shift in government policy has been driven by adjustments to the original model based on (a) the latest, more pessimistic, NHS estimates of the feasible/reasonable scale-up of ICU capacity; and (b) updated ICU admission rates based on data from Italy, China, and (very early data from) the UK. Committing to ongoing, dynamic self-adjustment based on new evidence from the UK and abroad is of the essence. This is probably how the current models work, but how the very uncertain, fast-changing, and setting-specific model inputs are sourced, quality assured, and incorporated remains a big question (a stylised example of such updating follows).
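As a stylised example of such dynamic self-adjustment, the sketch below applies a standard beta-binomial (conjugate Bayesian) update to an assumed ICU admission rate among hospitalised cases as new batches of data arrive. The prior and the data batches are invented for illustration; real models would handle reporting lags, case mix, and setting differences far more carefully.

```python
# Stylised sketch of dynamic self-adjustment: a conjugate (beta-binomial)
# Bayesian update of the assumed ICU admission rate among hospitalised
# cases as new data batches arrive. Prior and data are invented.

a, b = 5.0, 95.0  # Beta(5, 95) prior: mean ICU rate = 5%

batches = [
    # (label, ICU admissions observed, hospitalised cases observed)
    ("Italy, batch 1", 120, 1000),
    ("Italy, batch 2", 150, 1100),
    ("early UK data",   30,  250),
]

for label, icu, hospitalised in batches:
    a += icu                 # "successes": patients who needed ICU care
    b += hospitalised - icu  # "failures": patients who did not
    mean = a / (a + b)
    print(f"after {label}: posterior mean ICU rate = {mean:.1%}")
```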
Uncertainty matters and must be openly acknowledged and characterised, along with the priority research needed to address it. Results must be reported not only as expected (average) net benefits but also as distributions, with some form of probabilistic depiction of uncertainty, and with a targeted research agenda identified using methodological approaches such as “expected value of perfect information” (EVPI) to pinpoint the most cost-effective priorities for further data collection.
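To illustrate, EVPI can be computed by simple Monte Carlo simulation: it is the gap between the value of deciding with perfect knowledge of the uncertain inputs and the value of deciding now on expected values. The toy example below uses an invented two-option decision and an invented distribution; it shows the mechanics, not any actual COVID-19 analysis.

```python
# Toy Monte Carlo EVPI calculation; the decision, payoffs, and distribution
# are invented to show the mechanics only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain input: net benefit of option B (lives saved per 1,000 people).
theta = rng.normal(loc=2.0, scale=1.5, size=n)

nb = np.column_stack([
    np.full(n, 1.0),  # option A: well-understood, modest net benefit
    theta,            # option B: higher on average, but very uncertain
])

# Decide now: pick the option with the best *expected* net benefit.
value_current_info = nb.mean(axis=0).max()

# With perfect information we could pick the best option in each simulated
# world, then average over worlds.
value_perfect_info = nb.max(axis=1).mean()

evpi = value_perfect_info - value_current_info
print(f"EVPI = {evpi:.3f} lives per 1,000")  # upper bound on research value
```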
Inputs, especially in settings where there is significant uncertainty, must be informed by systematic expert elicitation, which involves the use of expert judgement in a structured fashion (as opposed to the selective approaching of individuals) and is well established in decision science (see the sketch below).
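The sketch below shows one common way to aggregate structured elicitations: an equal-weight linear opinion pool, in which each expert’s elicited distribution contributes equally to a pooled distribution (performance-based weights, as in Cooke’s classical method, are a refinement). The experts and numbers are hypothetical.

```python
# Hypothetical sketch: three experts each encode their elicited belief about
# the infection fatality ratio as a Beta distribution; an equal-weight
# linear opinion pool is their 1/3-1/3-1/3 mixture.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

experts = [
    # (label, alpha, beta) -- all values invented
    ("expert 1", 2.0, 198.0),   # centred near 1%, quite uncertain
    ("expert 2", 4.0, 396.0),   # near 1%, more confident
    ("expert 3", 3.0, 147.0),   # centred near 2%
]

# Sampling from the mixture: pick an expert uniformly at random, then draw
# from that expert's distribution.
which = rng.integers(len(experts), size=n)
samples = np.array([rng.beta(experts[i][1], experts[i][2]) for i in which])

lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
print(f"pooled IFR: median {med:.2%}, 95% interval {lo:.2%} to {hi:.2%}")
```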
The cost-effectiveness of alternative modes of action must be considered to inform decisions. Even if immediate financial costs are rightly not a priority, it is important to understand how supply-side constraints (like shortfalls of medical personnel trained to detect and care for COVID-19 patients, and shortages of beds and equipment) are factored into the modelling. This should include taking into account the (opportunity) costs of any significant scale-up that may be needed and how all this translates into lives lost, and it ought to be an urgent government ask.
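As a rough illustration of why supply-side constraints belong in the model, the sketch below projects deaths under alternative ICU scale-up scenarios by applying a higher fatality rate to patients who need critical care but cannot get it. All inputs are invented placeholders, and capacity is simplistically treated as daily admission slots rather than occupied beds.

```python
# Rough illustration only: all inputs are invented, and ICU capacity is
# simplistically treated as daily admission slots (a real model would track
# bed occupancy and length of stay).

def projected_deaths(icu_demand_per_day, daily_icu_slots,
                     cfr_with_icu=0.3, cfr_without_icu=0.9):
    """Total deaths when some patients needing ICU care cannot get it."""
    total = 0.0
    for demand in icu_demand_per_day:
        treated = min(demand, daily_icu_slots)
        untreated = demand - treated
        total += treated * cfr_with_icu + untreated * cfr_without_icu
    return total

# A toy epidemic curve: new patients needing critical care each day.
demand = [50, 120, 300, 600, 800, 600, 300, 120, 50]

for slots in (400, 800, 1600):  # alternative scale-up scenarios
    print(f"{slots:>4} ICU slots/day: "
          f"projected deaths = {projected_deaths(demand, slots):,.0f}")
```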
Communicating better—we need a process
The government needs to do more than simply argue that decisions are being informed by the scientific evidence (which is in itself welcome, and a lot more than many other national authorities and multilaterals are currently doing). The science should be put into the public domain and subjected to constructive interrogation within a structured process. (This is already happening, though in an unstructured and less helpful fashion, through social media; e.g., see here and here.) The evidence, and its assessment and interpretation, must be multidisciplinary: for instance, the criticism of the government’s approach to “behavioural science” has been misplaced. To have, and to trust in, evidence-based policymaking, we need different perspectives that are all subject to detailed scrutiny.
Now is the time for a process that encourages multidisciplinary perspectives, including on resource use and costs, and that includes an honest interrogation of all relevant evidence and the inevitable uncertainty surrounding it. This is clearly a fast-moving situation, but that does not negate the need for carefully designed decision-making processes. Fortunately, the UK has substantial experience in this space, particularly in translating evidence into policy and building social legitimacy for difficult choices, and it has a tradition of studying how best to communicate science (and scientific uncertainty). If the outbreak lasts for 18 months or so, this is the time to set up flexible decision-making processes that engage evidence and people in a timely fashion. If anything, the trade-offs ahead will only get tougher; transparency, broad engagement, and trust will be crucial to an effective response.
Disclaimer
CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.