A working paper distributed this month by NBER and covered in the New York Times not only contributes to the growing body of rigorous studies on public policy questions but also epitomizes changing research norms that are crucial to improving the quality of such studies.
The study, “The Oregon Health Insurance Experiment: Evidence from the First Year,” used a natural experiment to answer questions about the impact of having health insurance on participants’ health care utilization, health status, and financial stress. A team of researchers learned that Oregon, lacking the funds to cover all 90,000 people who applied for subsidized health insurance, had chosen to enroll people by lottery. They recognized that they could use administrative and survey data from this “natural experiment” to measure effects that are otherwise extremely difficult to disentangle from confounding factors.
While the content of the study is important for health care debates around the world, the most striking thing to me about the paper was its attention to addressing bias in research (an issue that has concerned me before). I suspect the authors were keenly aware that anything they wrote would be subjected to enormous scrutiny in the polarized political climate of the United States, especially with regard to health policy. Whatever the reason, the authors should be celebrated for following a number of practices that should be standard for policy research.
First, before looking at the outcomes in the dataset, they publicly archived their research design, specifying the data to be collected and the hypotheses to be tested. This is common in controlled medical trials as a way to reduce the chances that researchers will comb the data for significant correlations and justify the results post facto. This doesn’t keep the authors from extending their analysis and research, but when they do so, they explicitly alert the reader that those extensions were not in the pre-specified research design.
Second, they appropriately qualify their results by noting the limits of generalizing from this particular population to other, dissimilar groups. More importantly, they acknowledge that these are partial equilibrium results and cannot be used for a simplistic extrapolation to a large-scale program, which might induce significant supply responses or other general equilibrium effects.
Finally, they provide a lengthy appendix, downloadable from the NBER website, containing the full questionnaire, more details on the research design, and alternative estimations that were excluded from the paper. All of this makes it easier for readers to judge the kinds of statements that occur frequently in research papers, such as “the alternative specification was excluded for reasons of space but largely confirmed the findings presented here.”
There are two additional ways this paper can establish itself firmly as a model for more open and less biased research: first, by making the primary data available for download and, second, by encouraging other researchers to replicate the results. Given the care taken in this study, I expect the authors are already planning for this. In this regard, it is encouraging to see social science journals adopting the requirement that supporting data and programs be made publicly available (see, for example, the American Economic Review’s policy). The issue of replication has been addressed elsewhere, including in blog posts by Michael Clemens and David Roodman.
Yes, I have a lot to say about the content of the study and what it means for health care debates in the U.S. as well as in developing countries. But for now, I just want to celebrate what I see as an important maturation of public policy research. Way to go.