In part one of my writeup on survey results, I talked a lot about the file drawer effect and why we end up not publishing some potentially useful results because we don’t have time. In a high-pressure environment where publication in the best journals is important to advance our careers, we often focus that limited time on the manuscripts with the highest potential impact. In some unfortunate cases, that means that professors do not prioritize giving their students the support necessary to publish results from projects, theses, or dissertations.
There’s no doubt that this can hurt younger scientists’ careers. Helping a student aim high and write higher-quality papers is great… but it can also go too far.
“There was no specific pressure NOT to publish, but rather my supervisor could not provide useful and supportive feedback and he was never satisfied with any draft I submitted to him for review,” wrote one respondent. “After numerous iterations of my projects over many years, I became discouraged and decided it wasn’t worth the effort to try and publish my results. Others in my lab have had the same or similar experience.”
Today I will talk about something more insidious: cases where researchers are discouraged from publishing for other reasons entirely: politics, data that didn’t support the research group’s hypothesis, or external partners who did not understand the results or the underlying science.
(If you want to know more about the dataset I am working with, its small size, and its various biases, I discussed it in part one: click over here.)
As an ecologist, I didn’t think this happened much in our field, at least not compared to fields where commercial connections and money are more often at stake. I could imagine it happening to an environmental consultant or someone writing impact reports for governments or companies, but in a purely academic community I assumed it was fairly rare for results to be kept out of publication.
One thing quickly became obvious: it does happen, sometimes. There are lots of reasons, some of which are highly case-specific (e.g. the government of one researcher’s native country ultimately did not allow the samples to be imported after all)… but there are some common patterns, too.
With such a small sample size – 40 of the 184 respondents reported this happening – and given that I made it clear online that I really wanted to hear from people who had been discouraged from publishing, it’s impossible to say how prevalent such events actually are. The proportion of responses does not reflect the proportion of total scientists who have had this experience.
I can certainly say that comparatively, many fewer unpublished papers are due to these events than due to the self-created file drawer effect. Two thirds of survey respondents said they had at least one unpublished dataset, if not a handful or more, even though many were just in the first five years of their research careers.
The file drawer effect means that there are tens of thousands of unpublished datasets out there, maybe 100,000. Many probably have no significant results, since some of the most cited reasons for not publishing were inconclusive data, needing to collect more data, and doubting that the results would be accepted by a high impact journal.
Other pressures happen in a smaller number of cases, but primarily for the opposite reason: results did show something interesting, but maybe not what someone – a supervisor or a government employee – wanted to see.
And while I cannot draw any conclusions about prevalence, I can (hopefully) draw some conclusions about why this happens and who it happens to.
A brief table of contents:
First, student-specific challenges.
Second, government and, to a lesser extent, industry challenges.
Third, “internal” and interpersonal political challenges.
Students Bear The Brunt of It
“As a grad student, the concept of this is crazy to me,” wrote one respondent. “In many ways and instances, publications are the currency by which scientists are measured against one another. Thus, not publishing work seems counterintuitive to me. I’d like to hear the reasons behind why it happens.”
Well, dear student, there are many. And being discouraged from publishing in fact seems to happen mostly to students. Here’s a breakdown of the 40 survey respondents who reported being pressured not to publish their work:
Mostly students. One explanation is that as we go along in our careers, we get a better sense of what is good and valuable science, and we make the decision to jettison a project ourselves rather than being told to by a supervisor. We also become more and more crunched for time, meaning that we make more of these prioritization calls before anyone else has a chance to weigh in.
But that’s not the only explanation. Let’s look first at cases where a direct supervisor was involved. With 32 responses of this type, it was about twice as common in my dataset as cases where an external person pressured a respondent not to publish. In these supervisor-related cases, it was most frequently a tenured or tenure-track professor discouraging a graduate student from publishing a chapter of their thesis or dissertation.
And as discussed in part one, part of the issue was that these driven supervisors were strapped for time and transferred their own expectations about the significance of results and journal quality onto their students, even when the students would have been happy settling for a lower-impact publication.
Sometimes this is very appropriate, sometimes less so. Where this line is drawn probably depends on your goals in science.
“A paper was published, but it excluded the results that I found the most interesting because they were not in line with the story that my advisor wished to push,” one respondent wrote of bachelor’s thesis research. “Instead, results from the same project that I thought were not well thought-out were published in a way that made them seem flashy, which seemed to be the main goal for my tenure-track advisor.”
Another respondent had a similar story with a different ending, about work done as part of a master’s thesis.
“The situation was not resolved; I just ended up not publishing,” he/she wrote. “I wanted to publish, as I considered the results to be high-quality science and the information very useful to disseminate, but I could not agree to change the research focus entirely to suit my supervisor’s personal interests.”
You can see both sides of the coin in some cases. What is the goal? To advance scientific theory and knowledge, or to share system-specific data that might help someone in the future? Ideally, a manuscript does both, but sometimes that’s not possible, and just the second is still a good aim. In some cases the supervisor is probably guiding the student towards using their data to address some question larger than the one they had initially considered. But, as the bachelor’s thesis respondent noted, it’s not always appropriate to do so: some people think that overreaching, drawing conclusions from data not really designed to support them, is a big problem in some fields.
“Some datasets and analyses I have collected and analysed don’t tell a clear story that would be readily publishable given the current state of how research articles are assessed for impact thus I tend to move on to things that tell a better story,” wrote another respondent. “This feels disingenuous at times though perhaps it is how science moves forward more quickly.”
A surprising amount of the time, supervisors discouraged students from publishing because the results turned out to not support their hypothesis. This was actually the most common single reason that a supervisor told a student not to publish. I may be naive, but it’s hard for me to think of a situation in which this is not just straight-up bad.
I was careful to ask explicitly whether the results did not support “our” hypothesis, or whether they did not support a supervisor’s, department’s, or company’s hypothesis. Sometimes the two overlapped, but most of the time when this happened the respondent selected the second option: the researcher themselves might not have been surprised by the results, but the supervisor, lab group, or company did not like them.
(About 60% of the 32 responses came from ecology and evolution, but many also came from other fields.)
This really surprised me. In our training as scientists it is drilled into us that we might learn as much from a null result or a reversal of our hypothesis as we would if our hypothesis was supported – maybe even more, because it tells us that we have to carefully look at our assumptions and logic, and can lead us down new and more innovative paths.
In the U.S. at least, a substantial proportion of the population just has no respect for science. Whether it’s climate change deniers or anti-vaxxers, as a science community we tell them: go ahead, prove us wrong! Science is very open to accepting data that disproves something we had previously thought was true. We try to tell the public that we are not closed-minded, that we are following evidence, and that if the evidence showed us something else, we’d still accept it.
On some small scale, that might not be true, and it’s very troubling. Without knowing more about the research in question here, it’s impossible to say much more. But it’s not a very inspiring trend. And again: this pressure was coming from direct supervisors who were mostly in academia and shouldn’t have had a financial or political conflict of interest.
And it also has potentially big implications for the sum of our community’s knowledge. Luckily there are so many researchers out there that probably someone else will ask the same question and publish it eventually, but this sort of attitude can delay learning important and valuable things.
“Unfortunately it’s hard to tell what could become interesting later, or what could be interesting to another researcher, so it’s too bad that these results never see the light of day,” wrote one early-career biologist. “What’s more concerning to me is the tendency of some researchers in my field to ignore or leave out results that they can’t explain, or worse, that contradict their pet hypothesis.”
When pressure came from an external source – someone not supervising the respondent’s study – this reason for discouraging publication was even more prevalent: almost half of these respondents cited data that did not support someone’s hypothesis.
And relatedly, the person doing the pressuring was afraid that the results would make them, their group, or the government look bad. In other words, these are classic cases of suppressing research, the worst-case scenario we think of!
Governments are Not Always Great (for Science)
Sometimes, this external pressure came from within academia, but it was also often from governments.
“Yes, the results were published, yes it created an public uproar, yes all authors were chastised by the agency and external company, and yes all subsequent follow-up research papers on the topic were expressly forbidden,” wrote one federal government employee. “There are considerable research accomplished by state and federal government agencies. Much of those data results never see the light of day because the results may be divergent from what the chain of command’s perspective or directive may be, I.e. support the head official’s alternative energy, logging harvest, endangered species delisting, stream restoration, etc. policy.”
It’s clear that one place where state, local, and federal government officials can be particularly destructive is Canada. Apart from the cuts to research funding that have been hitting many countries, people far more knowledgeable than I am have discussed how the government literally muzzles its scientists by not allowing them to talk to the media, among other policies: see here, here, and here.
Here’s what one anonymous survey respondent had to say: “The Canadian government has been muzzling scientists for years…I was just the latest in their ‘Thou Shalt Not Publish’ scheme. If the research you’re doing will make them look bad in any way, you’re not allowed to publish the results without fear of massive repercussions: job loss, degree removal, job losses of your superiors if they can’t fire you, being blacklisted in the scientific community, being blacklisted for grants, etc.”
Multiple survey respondents cited the Canadian government. So, about those elections coming up…
Consultants and researchers in the corporate/industrial sectors are often muzzled as well, but many of them are aware of this from the time they are hired.
“It is simply understood that if the research results from work we do for clients are inconvenient, they will attempt to redact the reports as trade secrets,” wrote one consultant. “They own the data so they are often able to do this. But not always.”
But even if companies are upfront about data ownership policies, it can still feel tough. One person told me it was discouraging not to be able to get a patent and credit for his/her work because a company owned all the intellectual property rights and would keep the discovery proprietary and secret until it was no longer profitable to do so.
In a variety of fields, there’s also some crossover between the industrial and academic sectors of research. Companies often provide funding to students or research groups working in an essential location or on a related topic. The companies shouldn’t be able to use their influence to suppress results, but in some cases they do seem to.
This is actually what happened in the case that inspired me to create my survey: the International Association of Athletics Federations quashed survey results showing that a huge proportion of championship competitors were doping. They were not involved in the research itself, but had provided access to the athletes, and thus felt it was their prerogative to police the results.
One survey respondent said that he had been let go from his position after publishing research about the effects of pesticides, and had heard a researcher with industry ties imply that the same thing would happen to someone else publishing similar research.
Several people in environmental and earth sciences fields mentioned this happening to people that they knew or had talked to, but it’s hard to pin down other than in news stories.
We Can Be Our Own Worst Enemies
Finally, other politics are more about internal power dynamics, be it within a department or within a research field.
“A person, invited late to the project, was asked to provide a simple review in return for coauthorship,” wrote one respondent. “They hijacked the project and it is still unpublished four years on.”
It’s pretty tragic to see a good experiment, or maybe a whole grant that some agency spent hundreds of thousands of dollars on and researchers spent years of their lives on, get derailed by interpersonal problems and arguments about data ownership or authorship.
In many fields the community of specific experts is fairly small, so you are likely to have to work with people again, or have them review papers, etc etc etc. The problems are hard to resolve once they begin.
It was also clear that sometimes people nixed manuscripts because they didn’t understand the science or its value. Sometimes this meant a bureaucrat at a funding agency, but sadly, sometimes it also came from within the scientific community itself.
“Because my scientific community is so small, in some cases only one review has been given by a local expert, and of course the editors don’t have time to fact-check, but my paper will not be accepted because these few experts are, as I perceive it, not wanting recent data contrary to results from their systems to be published, and assume that someone with an M.Sc. cannot be a diligent scientist, in many cases providing lots of evidence in reviews that they have not read the manuscript with care… possibly skipping entire sections,” wrote one student.
There’s even outright theft sometimes.
“The results were made partially public at a conference,” wrote one researcher. “Another researcher who has hard feelings towards my former supervisor, and vice versa, started to use the data as if it was ‘public domain information,’ and later my supervisor considered that the publication was not worth sending out. The problem has not been resolved yet.”
A Reminder
This has been, in some ways, a tour of the scientific research community at its worst. We all know someone who has had a terrible experience with their research.
But many of us have had relatively happy tenures in science and research. At least in my field, ecology, I can say that the vast majority of people are good people and fun to work with. It’s part of what I love about my job. If the only people around me were those who stole results, bullied me into not publishing, constantly asked me to change the focus of my research, or demeaned what I did because I was a graduate student, I would quit.
But here I am, and I’m happy! Such people do not make up the majority in our fields. But it’s worth remembering that even one bad interaction like this can seriously discourage people from continuing to do research. There are lots of other jobs out there, and if the research environment is malevolent it’s easy to feel that the grass is greener on the other side.
So: with the knowledge that there is some scummy behavior going on, can we try to be nicer and kinder to one another? After all, our goals are to advance scientific knowledge and to create more capable, creative, and conscientious scientists.
Thanks to all who participated in the survey. I hope it has been interesting and helpful to read about.