2015-05-14

by Judith Curry

Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea.

Nautilus has published a very interesting article entitled The trouble with scientists: How one psychologist is tackling human biases in science. I thought this article would be a good antidote to the latest nonsense by Lewandowsky and Oreskes. Excerpts:

Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper by John Ioannidis with the provocative title “Why most published research findings are false.” As Ioannidis concluded more recently, “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted.”

It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.

Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea. Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?

Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”). When facts come up that suggest we might, in fact, not be right after all, we are inclined to dismiss them as irrelevant, if not indeed mistaken.

Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught. Chris Hartgerink of Tilburg University in the Netherlands works on the influence of “human factors” in the collection of statistics. He points out that researchers often attribute false certainty to contingent statistics. “Researchers, like people generally, are bad at thinking about probabilities,” he says. While some results are sure to be false negatives—that is, results that appear incorrectly to rule something out—Hartgerink says he has never read a paper that concludes as much about its findings. His recent research shows that as many as two in three psychology papers reporting non-significant results may be overlooking false negatives.
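Hartgerink’s point about overlooked false negatives is easy to see with a quick simulation. The sketch below is mine, not from the article, and the effect size, sample size, and significance threshold are assumed purely for illustration: when a study is underpowered, a real effect routinely comes out “non-significant.”

```python
# Illustrative sketch (not from the Nautilus article): how often does an
# underpowered study miss a real effect, i.e. produce a false negative?
# The effect size, sample size, and alpha below are assumed for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.3     # assumed standardized effect size (Cohen's d)
n_per_group = 30      # assumed sample size per group (typical small study)
alpha = 0.05          # conventional significance threshold
n_simulations = 10_000

false_negatives = 0
for _ in range(n_simulations):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value >= alpha:          # "non-significant" despite a real effect
        false_negatives += 1

print(f"False negative rate: {false_negatives / n_simulations:.0%}")
# With these assumed numbers, well over half of the simulated studies report
# a non-significant result even though the effect is genuinely there.
```

Under these assumed parameters the simulated studies miss the effect most of the time, which is the kind of outcome that a paper reporting a “null result” rarely acknowledges.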

Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. A common response to this situation is to argue that, even if individual scientists might fool themselves, others have no hesitation in critiquing their ideas or their results, and so it all comes out in the wash: Science as a communal activity is self-correcting. Sometimes this is true—but it doesn’t necessarily happen as quickly or smoothly as we might like to believe.

Nosek thinks that peer review might sometimes actively hinder clear and swift testing of scientific claims. He points out that, when in 2011 a team of physicists in Italy reported evidence of neutrinos that apparently moved faster than light (in violation of Einstein’s theory of special relativity), this astonishing claim was made, examined, and refuted very quickly thanks to high-energy physicists’ efficient system of distributing preprints of papers through an open-access repository. If that testing had relied on the usual peer-reviewed channels, it could have taken years.

Medical reporter Ivan Oransky believes that, while all of the incentives in science reinforce confirmation biases, the exigencies of publication are among the most problematic. “To get tenure, grants, and recognition, scientists need to publish frequently in major journals,” he says. “That encourages positive and ‘breakthrough’ findings, since the latter are what earn citations and impact factor. So it’s not terribly surprising that scientists fool themselves into seeing perfect groundbreaking results among their experimental findings.”

Nosek agrees, saying one of the strongest distorting influences is the reward system that confers kudos, tenure, and funding. “I could be patient, or get lucky—or I could take the easiest way, making often unconscious decisions about which data I select and how I analyze them, so that a clean story emerges. But in that case, I am sure to be biased in my reasoning.”

Not only can poor data and wrong ideas survive, but good ideas can be suppressed through motivated reasoning and career pressures. Skepticism about bold claims is always warranted, but looking back we can see that sometimes it comes more from an inability to escape the biases of the prevailing picture than from genuine doubts about the quality of the evidence. Science does self-correct when the weight of the evidence demands it, says Nosek, but “we don’t know about the examples in which a similar insight was made but was dismissed outright and never pursued.”

Surprisingly, Nosek thinks that one of the most effective solutions to cognitive bias in science could come from the discipline that has weathered some of the heaviest criticism recently for its error-prone and self-deluding ways: pharmacology. It is precisely because these problems are so manifest in the pharmaceutical industry that this community is, in Nosek’s view, way ahead of the rest of science in dealing with them.

Nosek has instituted a similar pre-registration scheme for research called the Open Science Framework (OSF). The idea, says Nosek, is that researchers “write down in advance what their study is for and what they think will happen.” It sounds utterly elementary, like the kind of thing we teach children about how to do science. And indeed it is—but it is rarely what happens. Instead, as psychologist Klaus Fiedler testifies, the analysis gets made on the basis of all kinds of unstated and usually unconscious assumptions about what would or wouldn’t be seen. Nosek says that researchers who have used the OSF have often been amazed at how, by the time they come to look at their results, the project has diverged from the original aims they’d stated.

Ultimately, Nosek has his eyes on a “scientific utopia,” in which science becomes a much more efficient means of knowledge accumulation. As Oransky says, “One of the larger issues is getting scientists to stop fooling themselves. This requires elimination of motivated reasoning and confirmation bias, and I haven’t seen any good solutions for that.” So along with OSF, Nosek believes the necessary restructuring includes open-access publication, and open and continuous peer review. We can’t get rid of our biases, perhaps, but we can soften their siren call. As Nosek and his colleague, psychologist Yoav Bar-Anan of Ben-Gurion University in Israel, have said, “The critical barriers to change are not technical or financial; they are social. Although scientists guard the status quo, they also have the power to change it.”

JC reflections

There are a number of things that I like about this article. I think that studying cognitive biases in science is an important topic, one that has unfortunately been perverted by Stephan Lewandowsky, at least with respect to climate science.

Let’s face it: would you expect Soon and Monckton to write a paper on ‘Why climate models run cold’? Or Jim Hansen to write a paper saying that human-caused climate change is not dangerous? People who have a dog in the fight (reputational, financial, ideological, political) interpret observations to fit a particular idea, one that supports their particular ‘dog.’ The term ‘motivated reasoning’ is usually reserved for political motivations, but preserving your reputation or funding is probably more likely to be a motivator among scientists.

As scientists, it is our job to fight against biases (and it’s not easy). One of the ways that I fight against bias is to question basic assumptions and see whether challenges to these assumptions are legitimate. The recent carbon mass balance thread is a good example. Until Salby’s argument came along, it never even occurred to me to question the attribution of the recent CO2 increase – I had never looked at this closely, and assumed that the IPCC et al. knew what they were talking about. Once you start looking at the problem in some detail, it is clear that it is very complex with many uncertainties, and I have a nagging idea that we need to frame the analysis differently, in the context of dynamical systems. So I threw this topic open to discussion, stimulated by Fred Haynie’s post. I think that everyone who followed this lengthy and still ongoing discussion learned something (I know I did), although the discussants at both extremes haven’t come any closer to agreeing with each other. But the process is key – to throw your assumptions open to challenge and see where it goes. In this way we can fight our individual biases and the collective biases emerging from consensus-building activities.

Filed under: Sociology of science