UCHV Graduate Prize Fellow Emily Kern Recaps Oreskes' Tanner Lectures on "Trust in Science?"

Thursday, Jan 19, 2017
by femked

In the first week of the Trump presidency, scientists at the US Department of Agriculture were briefly forbidden from speaking to the public, although the ban was reversed a few days later. Subsequently, the Environmental Protection Agency was told that peer-reviewed science would also be subject to review by members of the administration. While the rejection of elite authority is not new to American politics, this level of official skepticism toward scientific authority is novel. Why is scientific authority so (apparently) politically challenging? What reasons do we have to trust science? And if we generally do trust science, what should we do when science gets something wrong?

These questions were all raised by Naomi Oreskes in her timely and pointed Tanner Lectures on the question ‘Trust in Science?’ held at Princeton University. Oreskes, who is Professor of the History of Science and Affiliated Professor of Earth and Planetary Sciences at Harvard University, and an internationally renowned geologist, historian of science, and author, began her first lecture by placing the problem of trust and verification in science in a long historical and philosophical context. For the nineteenth-century philosopher and sociologist Auguste Comte, science was reliable because it relied on observation, although he admitted a role for an underlying theory that would guide the scientist on what to observe. This emphasis on the value of empirical observation came to guide much of the development of the philosophy of science in the first half of the twentieth century. Figures like Karl Popper and Pierre Duhem emphasized the importance of testing and falsification in evaluating scientific hypotheses, while acknowledging that theories that are empirically false can still make fairly good (if not entirely perfect) predictions. When a prediction fails, however, how can we correctly identify where the problem lies? For Duhem, the answer was to rely on the role played by professional judgment and “good sense.” Discovering errors was not grounds for radical skepticism about scientific knowledge production, but rather for a healthy humility in the face of the unknown and imperfect.

In the 1930s, Ludwik Fleck introduced the idea of the thought collective, emphasizing that scientists do not work in isolation but produce work from within communities that share intellectual commitments and modes of practice. The thought collective was pivotal to the development of Thomas Kuhn’s thesis on the structure of scientific revolutions. For Kuhn, scientific knowledge production began with ‘normal science,’ devoted to solving scientific puzzles – small mysteries existing within the framework of the accepted scientific worldview. As this puzzle-solving work progressed, however, more and more anomalies would emerge, until the difficulties became big and disruptive enough that the scientific community had to create a new paradigm to account for the divergent facts; it is this process that gives rise to the eponymous scientific revolutions. Kuhn’s work ultimately became linked with the Quine-Duhem thesis and was taken up by the Sociology of Scientific Knowledge theorists of the Edinburgh School. For the Edinburgh School, it was easy to move from the position that “empirical knowledge alone does not determine belief” to one in which “empirical knowledge plays no role in our belief.” These sociological claims that reality had little or nothing to do with constructed or socially negotiated natural knowledge were a useful provocation, but they also spelled the end of Comte’s dream of science as “positive knowledge,” in which science was fundamentally defined as true, correct, and reliable. And if everything was socially constructed, was there such a thing as objective truth? In this new framework, sociologists of science gained useful insights from feminist philosophy, which suggested that objectivity could be collectively achieved through social processes of “transformative interrogation.” While acknowledging that everyone has biases, this model suggests that in a diverse community those biases can be challenged and minimized, allowing the community to produce new, trustworthy knowledge. From this perspective, values are not incompatible with objectivity but emerge as a function of community practices. Diversity then becomes not a nicety but a necessity: so-called “political correctness” begets epistemological correctness.

Oreskes concluded that we should trust science on two bases. First, on the basis of scientists’ sustained engagement with the natural world – a recognition of special expertise drawn from long experience. Asking an astrophysicist to weigh in on galaxy formation is as reasonable and sensible as asking a plumber to weigh in on a drain blockage. Each professional has an area of specific expertise, and while there are more and less capable astrophysicists and plumbers, we have social mechanisms in place that help us sort a good plumber from a bad one, and we can apply comparable measures in assessing whether or not someone is a good scientist. Second, the processes of peer review and tenure constitute a social contract within the scientific community. These internal vetting processes show how the scientific community self-regulates and polices the boundaries of its own reliability and trustworthiness.

Drawing on her extensive research on climate change denial, Oreskes added a coda, asking why we should not simply ask the petroleum industry to research global warming. Applying her model for assessing verisimilitude and trustworthiness in science, she concluded that the petroleum industry is not part of the scientific enterprise and has fundamentally different goals and purposes, namely the profit motive. While it would be reasonable to ask the petroleum industry for expertise on petrochemical extraction or shareholder profits, industry officials are not experts on ocean acidification or atmospheric carbon. Oreskes argued that we should be fundamentally skeptical when a non-knowledge-producing enterprise puts forth a knowledge claim. When the American Enterprise Institute, for example, is asked to weigh in on climate science, we commit the same fundamental error as if we asked an astrophysicist to come unstop a blocked drain.

Comments were provided by Ottmar Edenhofer and Marc Lange. In his response, Edenhofer noted some particular challenges in addressing public distrust of the science of climate change. Climate change poses distinctive problems at the science-policy interface: it is big and messy, involves conflicting norms, and carries non-linear risks. Policy making in this area is necessarily a wager; the uncertainty means not only that many different policy decisions are possible, but also that facts and values become intertwined and cannot be disentangled. In the landscape of social values, individuals end up all over the map, holding different values that suggest different priorities.

For Lange, the question of ‘trusting in science’ had an inherent circularity that was difficult to avoid. In his view, to ask for a justification of science is an unfair demand: nothing could possibly satisfy the question, so we have to reject the premise. Furthermore, he noted, we take prediction as justified, believing collectively or individually that we are qualified to observe something and that those observations will be reliable, if not infallible. The critical challenge is to find a shared value basis and to build from that point. As Lange concluded, philosophers love to disrupt existing systems and “break things,” but it is also incumbent upon them to build things back up again. Perhaps Comte’s positive knowledge might yet get another chance.

Lecture Two:
In her second lecture, Oreskes explored what happens when science “goes bad” in a variety of ways, drawing on examples from the history of science. These included the limited energy theory, which proposed that higher education would harm women’s reproductive capacities; the widespread rejection of the theory of continental drift by American geologists in the 1920s; Anglo-American support for eugenics as both a field of study and the basis for public policy; expert rejection of patient experiences suggesting a strong link between depression and certain forms of hormonal birth control; and a recent contretemps over whether dental floss has any measurable effect on health. Oreskes identified a number of commonalities among these cases. Consensus was usually lacking; in some cases, moreover, a scientific consensus co-existed with an active public debate over the topic, a combination that should serve as a red flag that other factors may be at play. When science went awry, it was also often because evidence was ignored or discounted, most often as a result of the particular methodological commitments of the investigators. While a good deal of evidence is imperfect, Oreskes reminded the audience, that is no reason to ignore it entirely. She also pointed out that Pierre Duhem was right about the importance of humility in the face of methodological faults and human errors. Lastly, these cases showed that values must play a positive role in the production of scientific knowledge: science is fundamentally a consensual activity, scientific knowledge represents the conclusions of experts, and shared values are the basis for trust.

Commenting on Oreskes’ second lecture, Jon Krosnick pointed to some continuing challenges for the practice of “good” science, drawing particular attention to the replication crisis, especially prevalent in his own field of psychology, and to the prevalence of bad incentives in scientific research. Despite Oreskes’ proposed solutions, Krosnick was not certain that the problem of “science gone awry” was entirely solvable. The second comment was provided by M. Susan Lindee, who suggested that grounding trust in science in consumer faith in everyday technologies might help regain public trust in elite technological expertise, and who proposed, as a partial solution, what she termed “an epistemology of frozen peas.” Lindee drew attention to all the forms of knowledge that go into food technology, and to the world-making by which science is naturalized and rendered invisible or separate from technological production. Rather than taking this vanishing of science from the scene of consumer technology for granted, Lindee suggested that historians, philosophers, and scientists of all stripes use everyday technologies as a starting point for public engagement, calling attention over and over again to the ways that our trust in science is imbricated in the fabric of ordinary life.

The Tanner Lectures welcomed a diverse audience, with faculty, visitors, and students from the sciences and humanities contributing to a lively discussion of a most timely subject.

If you missed the lectures – or are interested in viewing them again – please follow the links below.

Lecture 1: Should We Trust Science?

Lecture 2: When Not to Trust Science, or When Science Goes Awry

The Program in the History of Science, The Program in Science, Technology, and Environmental Policy (STEP), and Climate Futures Initiative co-sponsored the event.

Above: Princeton University President Christopher L. Eisgruber; Ottmar Edenhofer; Marc Lange; Naomi Oreskes; Tanner Lectures Committee Chair Stephen Macedo; M. Susan Lindee; Jon Krosnick; and (in front) Director of the University Center for Human Values Melissa Lane.

Photos: Tanner Lectures audience; Naomi Oreskes; Stephen Macedo; Ottmar Edenhofer with a graduate student.