The Paper-to-Policy Pipeline: Reflections from Evidence Live 2013

Alongside Victoria Fan, I recently attended the Evidence Live conference in Oxford, hosted by the BMJ and Oxford’s Centre for Evidence-Based Medicine (CEBM). While the conference’s clinical focus was outside my normal global health/economics comfort zone, I was immensely impressed by the rigor, candor, and nuance of discussion, particularly around tough issues like publication bias and conflict of interest. I left feeling that the global health and development fields would be wise to pay close attention to current debates in medical research, both to emulate proven successes and avoid dangerous pitfalls.

In particular, the event’s top-notch keynote speakers – who included Ben Goldacre, Peter Gøtzsche and Jack Cuzick (the latter two being, to my eyes, the Bill Easterly and Jeff Sachs of breast cancer screening), and Peter Wilmshurst – helped to dissect the many stages between primary research (i.e. a trial or other study) and evidence-based practice (i.e. how that research is applied by a Ministry of Health or your doctor’s office) – and all that can go wrong in the process.

When the process goes smoothly, primary evidence is compiled through a trial or other analysis. Results are written up, submitted to a journal, resubmitted to a second (or third or fourth or fifth) journal, peer-reviewed, edited, accepted, and published. Studies are then aggregated into systematic reviews, which are themselves submitted, reviewed, and published. Finally, the aggregated evidence is presented to doctors, patients, and policymakers, who change their practices, preferences and policies (respectively) to account for the cumulative evidence base on effectiveness, risks, and value for money.

But as speakers repeatedly emphasized, the path from study to evidence-based policy is long, winding, and filled with roadblocks – falsification, conflict of interest, duplicate publication, and misrepresentation, to name a few. Perhaps the most serious challenge (though it receives the least attention) is publication bias, created by the many trials that dead-end in the laboratory. Several speakers noted recent evidence that as many as 50% of trials never make it to publication, and that published results are highly unrepresentative.

Because evidence is cumulative, the end result of these many biases (and others not described above) is a deeply distorted research record. Goldacre pointed out that in terms of impact, non-publication of trial data (often but not exclusively at the behest of industry) is akin to the misconduct represented by deleting 50% of data points from a single trial – and literally so, when you consider that selective publication essentially deletes 50% of data points from systematic reviews. Rather than just complain about these biases, conference speakers offered or endorsed a series of new proposals to address them, including mandatory universal publication of trial data (sign the petition!); better data transparency; and post-publication peer review, including the threat of retraction if an author refuses to release sufficient information about his or her methods.

Beyond roadblocks in the collection of evidence, several speakers shared their efforts (and frustrations) in translating research to practice, such as communicating controversial results to skeptical peers; attempting to sway policymakers; correcting misleading media narratives; teaching evidence-based medicine to practitioners; and disseminating research results to potential patients. Overall, the depth, breadth, and difficulties of these approaches make clear that a lack of primary studies is only one of several barriers to evidence-based practice. Studies become practice only after being poked and prodded in the messy world of politics, and after being filtered through our own values and preconceptions – places where no amount of “systematization” will ensure full implementation of an evidence-based approach.

Compared to development and global health, the medical research field is way ahead of the curve in its rigor and standardization – think mandatory pre-registration of trials, strict standards for randomization, and clear protocols for systematic reviews, which eventually feed into guidelines for evidence-based clinical practice. Yet despite existing safeguards and systems, it’s troubling to see that very serious problems persist in the translation of evidence into evidence-based policy. Meanwhile, development and global health are playing catch-up – we’ve already made huge progress on building a rigorous evidence base through initiatives like 3ie, J-PAL, and CGD’s own evaluation initiative, but there’s still a lot of room for improvement in systematizing research processes. In doing so, and in attempting to translate that research into policy, our clinical peers can be an enormous resource – both as a source of inspiration and as a cautionary tale.

Medical research clearly doesn’t have all the answers, but smart people are raising a lot of good questions. I look forward to watching the debate unfold, and applying those lessons to our own global health and development worlds.

Thanks to Victoria Fan, Bill Savedoff, Amanda Glassman, and Jenny Ottenhoff for helpful comments.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.