2015-10-04

A few weeks ago I sent out a survey to many of my scientist friends. I wanted to know: why does some research stay unpublished? Those outside academia or research might think that science always proceeds in a linear fashion. A person does a study, they publish it, now it is out there for other scientists to reference. Once research is performed, it is a known quantity. But that’s not necessarily true.

For any number of reasons, a fair chunk of research never makes it to the publication stage. Sometimes it’s because it’s bad research: it is biased or the methods are flawed, so the paper is rejected during peer review. This might not be through any real fault of the scientists; the problems might have become apparent only after the research was completed. This is pretty inevitable, and it can lead a research group to design a second study that is much better and really gets at their question.

But does all the good research even get published? No, definitely not. There’s research out there that remains unpublished even though it probably could have been.

Some possible reasons for this are that the researchers ran out of time to write up the results, or the results just didn’t seem very interesting, or their hypothesis was rejected. For these reasons people might choose to focus on another project they had going at the same time. But that leaves a gap in the record of published science: results with bigger effect sizes are published proportionally more often than null results. Results with no effect might be left in a drawer to be published later, or never.

This phenomenon is called the “file drawer effect” and is a major contributor to publication bias, which is problematic for many reasons that I’ll discuss later. Here’s a nice paper on the file drawer effect.

With my survey, I wanted to get at why people don’t publish. First I asked about how much research they leave in the file drawer, so to speak, out of their own choice. Then I asked how often other people pressured them to avoid publishing, and why. I’ll get to that second question in part two of this post.

First, a caveat before getting to what people told me: the responses certainly don’t represent the whole scientific community, and I can’t draw any conclusions about how frequently the things I’m asking about happen. I had 182 responses, which is not a lot, and the majority were from ecologists and evolutionary biologists relatively early in their careers. The survey was spread by word of mouth, so this is just a function of who I know.

Here’s some data on who responded to my call:

[Figure: breakdown of survey respondents]

(I’ll also add that most respondents were in academia, but there were also some who worked for government research institutes or companies. I’ll get more into that in part two, but for now I’m going to write primarily from the academia perspective. Just be aware that there are some non-academia responses in here as well.)

Now. One of the first questions I asked was, “How many papers or reports worth of results of your own work remain unpublished, by your own choice?”

It’s obvious now that I could have worded this a little bit better. Some people included all of their unpublished work in this answer, while others said that just because something hadn’t been published yet didn’t mean that it never would be, and left some work out of the count. They may be right about that: some work does eventually get published years later, when researchers finally have a chunk of free time and nothing “more important” to do.

(“Pressure was not direct – just lack of support to move the paper forward,” one survey responder wrote of his/her supervisor. “Ultimately he approached me to finally publish the work – after more than 20 years!”)

But data that you swear you will write up one day can also remain in the file drawer indefinitely.

In any case, here’s what I found:

[Figure: number of unpublished papers or reports vs. years spent doing research]

The other thing that is unclear in the results is how realistic it is to publish all of these datasets – are they each an individual paper, for instance? Some people take a dataset and divide it into as many pieces as possible so that they can get the most publications out of it, when in reality publishing all the data together would have made a more interesting, meaningful, and high-impact single manuscript. So has someone doing research for six years really accumulated ten unpublished datasets? Perhaps they have. Meanwhile, I am impressed by the few people who had been doing research for 25 or even 40 years and had seemingly published every worthwhile dataset they had ever collected. These people must be writing machines. (And I say that as someone who writes quite a lot!)

Adding a regression line is probably inappropriate here, but let’s just say that in the first ten years of their career (depending on how they are counting, this is a bachelor’s thesis, a master’s, a PhD, and maybe a postdoc or two), many people accumulate four or five studies that they could have published but didn’t. After that they might accumulate one every five or ten years. It makes sense that more of the unpublished papers come early in the career, because people aren’t yet adept or fast at writing papers. They also don’t have as much experience doing research, so data from projects like bachelor’s theses often go unpublished because of flaws in study design or data collection. These mistakes are what eventually teach us to do better science, but they can keep a piece of research out of a top journal.

As of 2013, there were about 40,000 postdocs in the United States. Add that to the rest of the world and there’s potentially a lot of unpublished research out there – clearly over 100,000 datasets worth! (Is that good or bad? Both, and I’ll get to that later.)
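(A rough back-of-the-envelope, purely for illustration: if each of those 40,000 postdocs is sitting on even two or three unpublished datasets, roughly in line with the survey responses above, that is already 80,000 to 120,000 datasets, and that’s before counting graduate students, faculty, or anyone outside the United States.)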

The answers are partially biased, I am sure, by the differences in productivity and funding between different researchers. This might depend on what kind of appointment the researcher has – is their job guaranteed? – and how much funding they have. A bigger lab can generate a lot more results. But someone still needs to write them up; labs might go through phases where the writing falls primarily on the PI (principal investigator, a.k.a. lab head), and other phases where highly productive postdocs or precocious PhD students get a lot of papers out the door.

And one of the biggest constraints is, of course, time. With pressure to publish your best research in order to get that postdoc position, to be competitive for a tenure-track job, and eventually to get tenure, researchers who have to choose between a high-impact paper and a low-impact one will certainly focus their energies on the high-impact results. The results that were confusing, or that didn’t seem to show much effect of whatever was being investigated, might stay in that file drawer.

One thing that was clear is that this problem of time as a limiting resource is contagious. Later in the survey, I asked people if they had ever been discouraged from publishing something they wanted to publish. Two reasons why a supervisor was not supportive of publication emerged as particularly common: that more data was needed, and that the work wouldn’t make it into a good enough journal.

Let’s look at the “more data was needed” issue first. What I offered as a potential response in the multiple choice question was, “My supervisor thought that we needed to collect more data before publishing, even though I thought we could have published as-is.”

In some cases, the supervisor might be right. Maybe more data really was needed, maybe the experiment needed to be replicated to ensure the results were really true, maybe the team needed to do a follow-up experiment or correct some design flaws. After all, the supervisor should have more experience and be able to assess whether the research is really good science which will stand up to peer review.

“Simply, what I thought were publishable results were probably not worth the paper it would be printed in,” one responder wrote of research (s)he had done during a bachelor’s thesis, but which the supervisor had not supported publishing. “The results did serve as the basis for several other successful grant applications.”

But at the same time: as Meghan Duffy recently noted on Dynamic Ecology, perfect is the enemy of good. That goes for writing an email to your lab, and also for doing experiments. In the comments on her blog post, someone noted that “perfect is the enemy of DONE”, and Jeremy Fox wrote that graduate students often get into the rut of wanting to add just one more experiment to their thesis or dissertation, so that it is complete, but at some point you just have to stop.

“I have not directly been pressed not to publish, but I have 2 paper drafts which have not been published yet,” wrote another respondent. “I wrote them as a PhD student and now I think they will be published, but I have the feeling that for some period, one of my supervisors did not want to publish them because it was just correct but not perfect enough.”

If more data should be collected, but probably never will be, does that mean that the whole study should sit in the file drawer? If it was done correctly, should it still be published so that other people can see the results, and maybe they can do the follow-up work? Different researchers might have different answers to this question depending on how possessive they are of data or an idea, or what level of publication they expect from themselves. But if a student, for example, is the primary person who did the research, their opinions should be taken into account too.

Why is this publication gap a problem?

That gets into the second idea: that as-is, the research won’t be accepted into a top journal.

Ideally, this shouldn’t matter, if the research itself is sound. There are plenty of journals, some of them highly ranked, that accept articles based more on whether the science is good and the methods correct than on whether the results are groundbreaking.

(Unfortunately, these journals often have big publication fees, whereas many highly ranked journals are free to publish in, though getting a paper accepted there takes a large time investment. PLOS ONE, one of the best-known journals that judges submissions on study quality rather than on the outcome, is raising its publication fee from $1350 to $1495, which must be paid by the author. For some labs this doesn’t matter, but for other less-flush research groups the cost of open-access publishing can definitely deter publication.)

It is important to get well-done studies with null results out there in the world. Scientific knowledge is gathered in a stepwise fashion. Other scientists should know about null results, arguably just as much or more than they should know about significant results. We can’t move knowledge forward without also knowing when things don’t work or don’t have an effect.

Here are two quick examples. First, at least in ecology and evolution, we often rely on meta-analyses to tell us something about whether ideas and theories are correct, or, for example, how natural systems respond to climate change or pollution. The idea is to gather all of the studies that have been done on a particular topic, try to standardize the way the responses were measured, and then do some statistics to see whether, overall, there is a significant trend in the responses in one direction or another. (Or, to get a little bit more sophisticated, to see why different systems might respond in different ways to similar experiments.) This both provides a somewhat definitive answer to a question and collects all of the work on a topic in one place, so that each scientist doesn’t have to scour the literature and try to find every study that might be relevant.

If only 20% of the scientists studying a question find a significant effect, but those are the only results which get published, then literature searches and meta-analyses will show that there is, indeed, a significant effect – even if, across all the studies which have been done (including the unpublished ones), it’s actually a wash. Scientific knowledge is hindered and confounded when this happens.
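To see how that plays out numerically, here’s a minimal simulation sketch in Python. All the numbers are invented for illustration (they don’t come from my survey or any real meta-analysis): many small studies of a true null effect are generated, but only the ones that come out significant and in the “expected” direction escape the file drawer.

```python
import numpy as np
from scipy import stats

# Illustrative simulation only: every number here is made up.
rng = np.random.default_rng(1)

n_studies = 500        # many small, independent studies of the same question
n_per_group = 20       # samples per treatment/control group in each study
true_effect = 0.0      # the true effect is zero: across all studies it's "a wash"

all_effects, published_effects = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    effect = treatment.mean() - control.mean()
    _, p_value = stats.ttest_ind(treatment, control)
    all_effects.append(effect)
    # The "file drawer": only significant results in the expected direction get written up.
    if p_value < 0.05 and effect > 0:
        published_effects.append(effect)

print(f"Mean effect across all {n_studies} studies: {np.mean(all_effects):+.3f}")
print(f"Mean effect across the {len(published_effects)} 'published' studies: "
      f"{np.mean(published_effects):+.3f}")
```

A naive meta-analysis of only the “published” effects would confidently conclude that the effect is real and substantial, even though the true effect in the simulation is exactly zero.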

A second example. When you are designing a study, you search the literature to find out what has been done before. You want to know if someone else has already asked the same question, and if so, what results they found. You also might want to know what methods other people use, so that you can either use the same ones or improve on them. If research is never published, then you might bumble along and make the same mistakes that someone else has already made. The same flawed study might be performed several times, with each person realizing only later that they should have used a different design, but never bothering to disseminate that information. (And sure, you can ask around to find unpublished results, but if there’s no record of someone ever studying a topic, you’re unlikely to know to ask them!)

Almost everyone in the scientific community acknowledges that the publication bias towards positive or significant results is problematic. But acknowledging it doesn’t really solve the problem. It’s just a fact that null results are often much harder to publish, and much harder to get into a good journal. And given the pressure researchers are under to always shoot for the highest journals, so that they can secure funding and jobs and advance their careers, null results are the ones most likely to stay in the drawer.

“I think a lot of pressure comes from the community rather than individuals to avoid publishing negative results,” one early-career ecologist wrote in a comment. “I think negative results are useful to publish but there needs to be more incentives to do so!”

This pressure can be so great that, I was told in a recent discussion, having publications in low-impact journals can actually detract from your CV, even if you have high-impact publications as well. Two candidates with the same number of Ecology Letters or American Naturalist or even Nature papers (those are good) might be evaluated differently if one of them has a lot of papers in minor regional or topic-specific journals mixed in. Thus, some researchers opt for “quality not quantity” and publish only infrequently, and only their best results. Others continue to publish datasets that they feel are valuable even if they know a search or tenure committee might not see that value, but consider leaving some things off their CV.

One thing I’d like to mention here is that with the “contagion”, students are sometimes affected by their supervisors’ standards of journal quality. While a tenure-track supervisor may only consider a publication worthwhile if it’s in a top journal, a master’s student may benefit greatly from having any publication (well, not any, but you see my point) on their CV when applying for PhD positions. I also know from my own experience that there is incredible value, as a student, in going through the publication process as the corresponding author: learning to write cover letters, respond to reviewer comments, prepare publication-quality figures, etc. Doing so at an early stage with a less-important manuscript might be highly beneficial when, a few years later, you have something with a lot of impact that you need to shepherd through the publication process.

There are many good supervisors who balance these two competing needs: to get top publications for themselves, but to also do what is needed to help their students and employees who might be at very different career stages. In many cases, of course, supervisors are indeed the ones pushing a reluctant graduate to publish their results!

Unfortunately, this is not always the case. Again, because of the small number and non-representative mix of survey participants, I can’t say anything about how frequently supervisors hinder their students in publishing. But I think almost everyone has some friend who has experienced this, even if they haven’t themselves.

“I have been in the situation where a supervisor assumed that I would not publish and showed no interest in helping me publish,” wrote one responder. “As a student, being left hanging out to dry like that is rough – might as well have told me not to publish.”

“Depending on the lab I’ve been in, the supervisory filter is strong in that only things deemed interesting and important by them get the go ahead to go towards publication,” wrote another. “Thus, the independence to determine what to publish of the multiple threads of my research is lacking in certain labs and management structures.”

That obviously feeds into the publication bias. So how do we get past it, in the name of science? There aren’t a lot of answers.

Why is the publication gap maybe not so bad?

At the same time, it’s clear that if all this research (100,000 or more papers!) were submitted for publication, there would be some additional problems. Scientific output is roughly doubling every nine years. There are more and more papers being published; there are more postdocs (although fewer tenure-track professor positions) in today’s scientific world, and I’m pretty sure the number of graduate students increased after the “Great Recession”, about the time when I was finishing my bachelor’s degree and many of my classmates’ seemingly guaranteed jobs suddenly disappeared.

This puts a lot of stress on the peer review system. Scientists are not paid to review research for journals, and reviewing may or may not be included as a performance metric in their evaluations (if it is, it’s certainly not as important as publishing or teaching). With more and more papers being submitted, more and more reviews are needed. That cuts time out of, you guessed it, doing their own research. It’s a problem lots of people talk about.

Others lament that with so many papers out there, it’s getting harder and harder to find the one you need. Science is swamped with papers.

Even when work never makes it into a journal, there are other ways to find it. For instance, master’s theses and PhD dissertations are often published online by their institutions, even if the individual chapters never make it into a peer-reviewed journal (perhaps because the student leaves science and has no motivation to go through the grueling publication process). But this type of literature can be harder to find, and it is not indexed in Web of Knowledge, for example. So if it contains the data or methods you need, you might never find it.

Reconciliation?

I’m not particularly convinced by the argument that there’s too much science out there. Research is still filtered by journal quality. Personally, I read the tables of contents of the best and most relevant journals in my field. I also have Google Scholar alerts set for a few topics relevant to my research, so that when someone publishes something in a place that would be harder to find, I know about it. This has been useful: I’m glad people published that work, even if it appeared somewhere obscure.

With that in mind, I wonder if there is a way to publish datasets with a methods description and some metadata but without having to write a full paper.

There are, of course, many online data repositories. But I don’t believe people use them for this purpose as much as they could. It is now becoming common for journals to require that data be archived when a paper is published, so much of the data in these repositories has already been published. In other cases, people only bother to publish a standalone dataset if it is large, has taken a lot of time to curate, or might be of particular interest and use to the community. Smaller datasets from pilot projects or null results are not often given the same treatment.

And while published datasets are searchable within the individual repositories’ archives, they don’t show up in the most common literature search tools, because they aren’t literature: they are just data.

Is there a way that we could integrate the two? If you have five papers’ worth of data that you don’t think you’ll ever publish, why can’t we have a data repository system which includes a robust methods and metadata section, but skips the other parts of a traditional manuscript? If this were searchable like other kinds of literature, it could contribute to more accurate meta-analyses and a faster advancement of science, because people would be able to see what had been done before, whether it “worked” and was published with high impact or not. The peer review process could also be minimal and, as with code or already existing data archives, these data papers could have DOIs and be citable.
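Purely as a thought experiment, here’s a sketch of roughly what such a stripped-down “data paper” record might contain. The field names and values below are made up for illustration; this isn’t the schema of any existing repository.

```python
# Hypothetical, minimal "data paper" record for the kind of repository imagined above.
# Every field name and value here is illustrative only.
data_paper = {
    "title": "Pilot warming experiment on grassland plots (null result)",
    "authors": ["A. Student", "B. Supervisor"],
    "doi": "10.xxxx/placeholder",  # assigned by the repository so the dataset is citable
    "methods": (
        "Free-text methods section: study design, sampling dates, instruments, "
        "analysis plan, and any known flaws or caveats."
    ),
    "metadata": {
        "variables": {"soil_temp_c": "degrees Celsius", "biomass_g": "grams dry weight"},
        "n_observations": 240,
        "years": [2013, 2014],
    },
    "outcome_summary": "No detectable treatment effect; effect sizes included with the data.",
    "keywords": ["warming", "null result", "pilot study"],
    "data_files": ["plots_2013_2014.csv"],
    "license": "CC-BY",
    "review": "light checks only: are the methods complete and the metadata consistent?",
}
```

The point is that everything a meta-analyst or a later researcher would need, namely the methods, the units, the caveats, and a citable identifier, fits in something far lighter than a full manuscript.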

But I’m not sure if this is realistic (and honestly, I haven’t thought through the specifics!). Science seems slow to change in a lot of ways. Methods change fast. Open access and online-only publishing have swept through to success. But creative ideas like post-publication review, preprints, and other innovations have been slower to catch on. These types of ideas tend to generate a group of fierce supporters, but to have a difficult time really permeating the scientific “mainstream”.

The scientific community is big – how can we change the culture to prevent our large and growing file drawers full of unpublished results from biasing the literature?

Stay tuned for part two of this series, about other reasons that people are pressured not to publish results – for instance, internal or external politics, competing hypotheses, stolen data. Part two will be published later this week. If you want to take the survey before it goes up, click here.
