2013-12-06

If you took to heart the recent cover story in The Economist, “How Science Goes Wrong,” you might be tempted to throw up your hands and stop reading about scientific research entirely. The piece describes how scientists often fail to reproduce some of the most frequently cited findings in their fields, calling those conclusions into question. Science writers have also come under fire recently, most notably Malcolm Gladwell, who, according to critics in The Atlantic, The Wall Street Journal, and Slate, among others, cherry-picks research to fit his thesis and hangs major arguments on poorly replicated studies in his latest book, David and Goliath.

These rebukes are only the latest in what has become a torrent of criticism of the way scientific research is carried out and reported. The catalyst was arguably a paper by the epidemiologist John Ioannidis, provocatively titled “Why Most Published Research Findings Are False,” which got a lot of attention from the popular press, including a 2010 cover story in The Atlantic by David Freedman. In a recent piece in the Columbia Journalism Review, Freedman also blamed science journalism for “a failure … to scrutinize the research it covers.”

Where are the readers in this discussion? Yes, scientists should do high-quality work and journalists should report it responsibly, but readers should also be discerning, thoughtful consumers of information. They can’t expect science writing to provide simple answers to complex questions; in fact, they should be skeptical of any piece that claims to do so. As University of Virginia psychology professor Brian Nosek told me, “Science reporting is not purveying the facts, it’s purveying the discovery process, the adventure into the unknown.”

Getting the most out of science writing takes work, but it’s vital, and it’s not so different from the care we already devote to other products: we check the labels on food packages at the supermarket, and we pore over online reviews before making even minor purchases. We should put the same care into the way we absorb scientific information, which has the power to shape the way we live.

To be a thoughtful reader, there are a few questions you should ask yourself whenever you read a popular science piece. I’ll use Hanna Rosin’s 2009 story in The Atlantic, about the evidence for and against the benefits of breastfeeding, as an example, because it addresses these questions especially well.

What’s the Big Picture?

A lot of science writing focuses on single studies. But each study is only a piece of the puzzle; look for writing that provides context. Before delving into the research on breastfeeding in her piece, Rosin provides historical background that helps explain why people have such strong feelings about breastfeeding and formula. She describes, for example, how some of the concern about formula stemmed from an international scandal in the 1970s, in which babies who were formula-fed in South America and Africa were more likely to die than those who were breastfed. It turned out that this was because mothers were using contaminated water or rationing formula because it was so expensive. Still, she writes, “the whole episode turned breast-feeding advocates and formula makers into Crips and Bloods, and introduced the take-no-prisoners turf war between them that continues to this day.”

Her review of the breastfeeding research is similarly nuanced. The article’s title, “The Case Against Breast-Feeding,” suggests a diatribe, but the piece itself provides an even-handed review of the literature, describing studies that have found evidence for the benefits of breastfeeding and others that have not. In this way, she builds toward her overall conclusion: that claims about the benefits of breastfeeding have been overstated.

What’s Wrong?

Even a good study has limitations and weaknesses. In academic papers, the researchers are usually direct about these and do a decent job of explaining how they might undermine the study’s conclusions. Look for science writers to be similarly frank. Common weaknesses include samples that are very small (which makes it hard to detect real differences), very large (which means tiny, practically meaningless effects can still come out statistically significant), or unrepresentative of the population the researchers want to understand (studying animals to learn about human diseases, for example).
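
If it helps to see the sample-size point concretely, here is a minimal simulation in Python (using NumPy and SciPy). The effect size and sample sizes are invented for illustration, not taken from any study discussed here: the same tiny difference between two groups is almost never statistically significant in a small study but almost always is in an enormous one.

    # Toy illustration: the same tiny effect (about 0.05 standard deviations)
    # is invisible to a small study but "statistically significant" in a huge one.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    effect = 0.05  # hypothetical difference in means, in standard-deviation units

    for n in (20, 200, 100_000):
        trials, hits = 200, 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            treated = rng.normal(effect, 1.0, n)
            _, p = stats.ttest_ind(control, treated)
            hits += p < 0.05
        print(f"n = {n:>7,}: significant in {hits / trials:.0%} of simulated studies")

Neither extreme tells you whether the effect matters; that judgment has to come from the size of the effect itself, not from the p-value.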

Also look for a popular science piece to identify methodological weaknesses that might have undermined the study’s findings. In her piece, Rosin points out a “glaring flaw” with most research on breastfeeding: mothers who choose to breastfeed probably differ in many ways—income, education level, race—from those who don’t, and any of these factors could influence a child’s development. Even cleverly designed sibling studies, which compare mothers who fed their children differently—say, breastfeeding the first child but using formula for subsequent children—can’t completely address these confounds, Rosin points out. A mother might treat her children differently in ways other than feeding—for example, lavishing more attention on her first child than on her second and third. By pointing out the weaknesses in some of the breastfeeding studies, Rosin helps explain the contradictory literature: Studies that found evidence for benefits tended to be those that did not account for these potential confounds, whereas those that found little or no evidence for breastfeeding’s benefits were usually well-designed and controlled.
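
For readers who want to see why that “glaring flaw” matters so much, here is a second toy simulation, again in Python. The variables and numbers are hypothetical, a crude stand-in for the income and education differences Rosin describes, not estimates from real data. Breastfeeding is given zero true effect, yet a naive comparison shows a sizable advantage, and the gap largely vanishes once only similar mothers are compared.

    # Toy illustration of confounding: breastfeeding has zero true effect here,
    # but mothers with more "resources" (a made-up income/education proxy) are
    # both likelier to breastfeed and likelier to have higher-scoring children.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    resources = rng.normal(0.0, 1.0, n)
    breastfed = rng.random(n) < 1.0 / (1.0 + np.exp(-resources))
    score = 10.0 * resources + rng.normal(0.0, 10.0, n)  # note: no breastfeeding term

    naive_gap = score[breastfed].mean() - score[~breastfed].mean()
    print(f"Naive breastfed-vs-formula gap: {naive_gap:.1f} points")  # large, spurious

    # Crude stand-in for the statistical controls (or sibling designs) that
    # better studies use: compare only mothers with nearly identical resources.
    similar = np.abs(resources) < 0.1
    adjusted_gap = (score[similar & breastfed].mean()
                    - score[similar & ~breastfed].mean())
    print(f"Gap among similar mothers: {adjusted_gap:.1f} points")  # near zero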

What Does This Mean For Me?

In many cases, understanding what the research says about a topic you’re interested in isn’t enough—you also want to understand how it applies to you, especially if it pertains to a topic as important as personal health. When I read Rosin’s piece soon after my first child was born, this question was foremost in my mind. Exhausted by breastfeeding exclusively, I decided to supplement with formula, and though I felt my decision was well reasoned, the not-so-subtle messages I’d received for the past several months—from What to Expect, to hospital lactation consultants, to the women in my new mothers’ group—were that exclusive breastfeeding was the only acceptable option. Rosin’s piece offered me a fresh perspective. It didn’t tell me whether I should breastfeed or not—she acknowledged that “[b]reast-feeding … is much too intimate and elemental” to enter into based purely on facts and figures. But she made it easy to evaluate how the findings on breastfeeding pertained to me. Take her discussion of the IQ findings:

The evidence on IQs … at best suggests a small advantage, perhaps five points … If a child is disadvantaged in other ways, this bump might make a difference. But for the kids in my playground set, the ones whose mothers obsess about breast-feeding, it gets lost in a wash of Baby Einstein videos, piano lessons, and the rest. And in any case, if a breast-feeding mother is miserable … surely that can have a greater effect on a kid’s future success than a few IQ points.

I knew my son, like Rosin’s child, would be raised in an environment that was stimulating (perhaps overly so). I knew that exclusive breastfeeding was making me miserable, and that Rosin’s hunch that maternal depression can harm kids was supported by research. The relevance to me was clear: breastfeeding likely wouldn’t provide benefits to my child beyond what he already received from being raised in an enriching environment, and breastfeeding exclusively might actually hurt him indirectly by making me feel depressed.

Rosin’s piece is not an exception: many other science pieces address these questions well. But if you read something that doesn’t, the solution is simple: read more. You can search websites such as ScienceDaily for summaries of the latest research news on any topic, or Longform for more in-depth pieces. It’s also essential to check in regularly on critical topics, such as those related to personal health, to see how the science is evolving. In his book Wrong, Freedman suggests that when you finish reading a piece of science writing, you shouldn’t think, “‘Wow, I better make some serious changes to the way I eat/talk to my children/use my credit cards,’ but rather ‘Hmmm, I wonder how likely it is that this advice will turn out to be worth following.’” That curiosity should spur you to seek out good information continually. Over time, if the research appears to converge on a particular conclusion—the overwhelming consensus that there is no link between autism and vaccines, for example—then you should probably take it seriously.

Of course, the better the science in the first place, the better the chances that you’ll have high-quality science writing at your fingertips. Fortunately, the research community is moving in the right direction, taking seriously the criticisms that have been leveled against it, such as poor replicability. Nosek, the University of Virginia psychologist I spoke to, recently launched the Reproducibility Project, a collaborative effort to see if the results of studies published in three major psychology journals in 2008 can be replicated. And the Personality and Social Psychology Review, a prestigious journal, recently published a set of guidelines put forth by a task force on research practices to improve the dependability of research, including choosing an appropriate sample size and avoiding what the authors call “questionable research practices”—for example, running multiple experiments with similar procedures and only reporting those with statistically significant results.

But perhaps the most important guideline in the paper is this: “[W]e need to promote a climate that emphasizes ‘telling the whole story’ rather than ‘telling a good story.’” This sentence applies to science writing as it does to science, with one tweak: Telling the whole story and telling a good story need not be mutually exclusive. The best science writing does both.



    
