Keith Kloor raises important concerns, but he does not arrive at a clear conclusion about what, if anything, ought to be done about the problems he describes. He presents a handful of alarming anecdotes, but he cannot say whether they represent the exception or the rule, and it matters whether scientific discourse mostly works, with a few glaring exceptions, or is pervasively broken.
Real life does not distinguish as clearly as Kloor does between scientific and ideological considerations. The article by Roger Pielke in FiveThirtyEight, which Kloor discusses at length, certainly attracted ideological responses, but it was also widely criticized on scientific grounds, and Pielke and his critics, such as William Nordhaus, have published arguments for and against in peer-reviewed journals. For an editor who sincerely believes that someone’s methods are deeply flawed and the conclusions factually incorrect, declining to print that work is not an act of censorship but of responsible peer review or journalism. To do otherwise risks contributing to the phenomenon Maxwell Boykoff and Jules Boykoff call “Balance as Bias.”
However, drawing a bright line between responsible policing for accuracy and irresponsible policing for ideological purity is often impossible. In The Fifth Branch, Sheila Jasanoff distinguishes between “research science,” which is narrowly disciplinary and rests on a strong consensus about methods, and “regulatory science,” which is intrinsically interdisciplinary and in which experts hold diverse views about the soundness of methods as well as strong political views. In regulatory science, such as research on climate change, what some see as a purely scientific judgment that certain work is shoddy may strike others as politicized censorship.
For example, in 1980 Alan Manne and Richard Richels found that expert engineers’ political views about nuclear energy strongly influenced their scientific judgments about apparently unrelated factual questions.
In Science, Truth, and Democracy, Philip Kitcher considers whether some scientific questions, such as hereditary differences in intelligence, ought not to be pursued because even solid empirical results could be misused politically. Kitcher argues that certain research ought not to be done if the political abuse of its results is likely to cause more harm than good, but he also recognizes that censorship would likely cause even more harm than the research itself. He concludes that the decision whether to undertake a politically dangerous line of research should rest with the conscience of the researcher, not with external censors, however well-intentioned.
It is important to keep outright falsehoods out of journalism and the scientific literature. Creationism and fear-mongering about vaccine safety do not deserve equal time with biological and medical science. But in matters of regulatory science, where there is no clear consensus on methods and where factual judgments cannot be strictly separated from political ones, the literature on science in policy strongly supports keeping discourse open and free, even when it becomes heated. At the same time, it calls on individual scientists to consider how the results of their research, and their public statements about them, are likely to be used.