The best physicians, military planners, and policymakers embrace uncertainty and acknowledge their own limitations. Yet they are tasked with making important decisions – possibly life-and-death decisions – despite not knowing for sure they’ve got it right. Wait and see? Act boldly? Give it time – but how much time? Change course now? All the while observing and thinking and investigating further. Doctors, generals, and government officials must be willing to act quickly, willing to do nothing, and willing to change their minds. Because the health of the patient – whether that patient is a person or the body politic – is what matters, not a foolish consistency with past opinions. [Reworded from a previous post, Think like a Scientist, Act like a Doctor]

With that in mind, here are some excerpts* from “Unmasking Scientific Expertise,” by M. Anthony Mills, in Issues in Science and Technology:

“In early February 1976, two cases of swine flu were discovered at Fort Dix in New Jersey…After consulting with a group of scientific experts and public representatives, President Gerald Ford launched a nationwide vaccination program to immunize ‘every man, woman, and child.’ The National Swine Flu Immunization Program, which cost $137 million and received bipartisan support from Congress, soon met with controversy…The epidemic never materialized. The immunization program was discontinued in December 1976, after some 40 million Americans had been vaccinated. Prevailing public opinion—exemplified by a December 1976 New York Times opinion piece on the ‘swine-flu fiasco’—was that the government had botched it… The opinion of many public health experts, by contrast, was that given the stakes the government was right to err on the side of caution. A 2006 article published in the journal Emerging Infectious Diseases expressed this point of view: ‘When lives are at stake, it is better to err on the side of overreaction than underreaction.… In 1976, the federal government wisely opted to put protection of the public first.’

Follow which science? 

Scientific knowledge is always uncertain when applied to real-world events, albeit to varying degrees. And the translation of such knowledge into action involves yet more uncertainty, not least about the consequences of our actions. Making and evaluating policy decisions amid great uncertainties and urgency to act require scientific evidence—whether in February 1976 or February 2020. But they also require making judgments about how to interpret that evidence, weigh risks, reconcile differing, sometimes incompatible values and goals, and evaluate the inevitable trade-offs that actions or inactions entail. This is not, to put it mildly, what our public discourse over the past year and a half would lead one to believe. 

On the contrary, the rhetoric of ‘follow the science’ has served to mask the ineliminable role of judgment in public health policy, and thus the difficult choices the coronavirus crisis has forced us to confront. Suggesting that the correct policies follow inevitably from ‘the science’ gives political decisions the veneer of objectivity, hiding both the uncertainties and disagreements that underlie them. This charade secures a privileged place for scientific experts in the process of political decision-making, while allowing politicians to outsource the justifications for their decisions. As a result, rather than political debate over what needs to be done, we hear competing claims about who is ‘following the science’ and who is not. Caught up in the game of determining who is or is not being ‘scientific,’ citizens and their representatives get distracted from the complex reality they face.

Scientific, political, and media elites have spent a lot of time over the past eighteen months bemoaning the public’s lack of trust in ‘the experts.’ Various explanations have been proffered, from the rise of populism and heightened polarization to digital disinformation and inadequate science education. But conspicuously absent from this list is anything that might implicate the experts themselves—or the political and media elites who perpetuate the follow-the-science charade.

If we want to rebuild a shared public trust in expertise, we will need a more realistic and humane language to talk about scientific expertise and its place in our political life—an account of expertise that is worthy of the public’s trust. Such an account would affirm scientific expertise as a praiseworthy human achievement, indispensable to understanding the world around us and valuable for making political decisions. But it would also recognize the role of uncertainty and judgment in science, and thus the possibility of error and disagreement, including value disagreements, when using science for public policy. Reestablishing an appropriate role for science in our politics, in other words, requires restoring the central role of politics itself in making policy decisions.  

The problem with the follow-the-science charade is that it papers over this messy reality [of weighing tradeoffs and deciding policy], concealing both the rationale and context for such decisions from public view. And that makes policymaking appear to be a rote exercise in the application of ‘scientific’ rules, rather than a deliberative process, informed by expert judgments and interpreted and enacted by politicians at various levels of government, who are responsive to a range of pressures and considerations. What the public sees is different policymakers and different experts all claiming to be ‘following the science,’ often in different directions at different times. As a result, public health policies can start to look like arbitrary political whims, particularly to those already disinclined to trust scientific and political elites. And this, in turn, produces both bewilderment and backlash—especially when ‘the science’ changes, as inevitably it does under conditions of radical uncertainty.

But the charade ultimately erodes the credibility of both experts and lawmakers, undermining the legitimacy of the policies they advocate. If we pretend our disagreements about public policy are fundamentally scientific in nature, then our political discourse will inevitably devolve into counterproductive debates about ‘the science.’ 

It would be far healthier for both science and politics to surface the disagreements that are really driving these debates—especially those value disagreements about whether and when precautionary approaches to public health policies are appropriate. We should not expect these disagreements to track our ideological divisions too neatly, or the political alignments surrounding them to remain stable over time. Thus, in 1976, it was a Republican administration that adopted precautionary policies, based on expert advice, to prevent a potential epidemic. And it was the mainstream media—the Times no less—that criticized these policies as disproportionate, alarmist, and motivated by ‘the self interest of government health bureaucracy.’”

* Excerpted paragraphs may not directly follow each other. The original article is much longer and highly recommended.

Reference:

Mills, M. Anthony. “Unmasking Scientific Expertise.” Issues in Science and Technology, Vol. XXXVII, No. 4, Summer 2021.