Illustration: Chris Ryan/Nature
There are at least six things in this picture that a quality-assurance manager would try to improve. Can you spot them? (Answers [1], below)
Rebecca Davies remembers a time when quality assurance terrified her. In 2007, she had been asked to lead accreditation efforts at the University of Minnesota’s Veterinary Diagnostic Laboratory in Saint Paul. The lab needed to ensure that the tens of thousands of tests it conducts to monitor disease in pets, poultry, livestock and wildlife were watertight. It was a huge task. “I felt sick to my stomach,” recalls Davies, an endocrinologist at the university’s College of Veterinary Medicine. She nevertheless accepted the challenge, and soon found herself hooked on finding and fixing problems in the research process. She and her team tracked recurring tissue-contamination issues to how containers were being filled and stored; they traced an assay’s erratic performance to whether technicians let an enzyme warm to room temperature; and they established systems to eliminate spotty data collection, malfunctioning equipment and neglected controls. Her efforts were crucial to keeping the diagnostic lab in business, but they also forced her to realize how much researchers’ work could improve. “That is the beauty of quality assurance,” Davies says. “That is what we were missing out on as scientists.”
Biomedical researchers lax in checking for imposter cell lines [2]
Davies wanted to spread the word. In 2009, she got permission and financial support to launch an internal consulting group for the college, to help labs with the dry but essential work of quality assurance (QA). The group, called Quality Central, now supports more than half a dozen research labs, helping them to design systems that ensure their equipment, materials and data are up to scratch, and to keep improving. She is also part of a small but growing group of professionals around the world who hope to transform basic biomedical research. Many were hired by their universities to help labs to meet certain regulatory standards, but these QA consultants have a broader vision. They are not pushing for universal adoption of formal regulatory certifications. Instead, they advocate ‘voluntary QA’. With the right strategies, they argue, scientists can strengthen their research and improve reproducibility.
When Davies first started proselytizing to her fellow faculty members, the responses were not encouraging. “None of them found the idea compelling at all,” Davies recalls. How important could QA be, they asked, if the US National Institutes of Health did not require it? How could anyone afford to spend money or time on non-essentials? Shouldn’t they focus on the discoveries lurking in their data, and not the systems for collecting them?
Microbiome science threatened by contamination [3]
But some saw the potential, based on their own experiences. Before she had heard of Quality Central, University of Minnesota virologist Montserrat Torremorell was grateful when a colleague let her use his instruments to track transmissible disease in swine. But the results made no sense. Samples from pigs experimentally infected with influenza showed extremely low levels of the virus. It turned out that her benefactor had, like many scientists, skimped on equipment maintenance to save money. “It was a real eye-opener,” Torremorell recalls. “It just made me think that I could not rely on other people’s equipment.”
Quality for all
Quality systems are an integral part of most commercial goods and services, used in manufacturing everything from planes to paint. Some labs that focus on clinical applications implement certified QA systems such as Good Clinical Practice, Good Manufacturing Practice and Good Laboratory Practice for data submitted to regulatory bodies. There have also been efforts to guide research practices outside these schemes. In 2001, the World Health Organization published guidelines for QA in basic research. And in 2006, the British Association of Research Quality Assurance (now simply the RQA) in Ipswich issued guidelines for basic biomedical research. But few academic researchers know that these standards exist (Davies certainly didn’t back in 2007). Instead, QA tends to be ad hoc in academic settings. Many scientists are taught how to keep lab notebooks by their mentors, supplemented perhaps by a perfunctory training course. Investigators often improvise ways to safeguard data, maintain equipment or catalogue and care for experimental materials. Too often, data quality is as likely to be assumed as assured.
Nature special: Challenges in irreproducible research [4]
Scientific rigour has taken a drubbing in the past few years, with reports that fewer than one-third of biomedical papers can be reproduced (see Nature http://doi.org/477; 2015). Scientific culture, training and incentives have all been blamed for promoting sloppy work; a common refrain is that the status quo values publication counts over careful experimentation and documentation. “There is chaos in academia,” says Masha Fridkis-Hareli, head of ATR, a biotechnology consultancy in Worcester, Massachusetts, that also conducts laboratory work to help move basic research into industry. For every careful researcher she has encountered, there have been others who have thought nothing of scribbling data on paper towels, repeating experiments without running controls and guessing at details months after an experiment. Davies insists that plenty of scientists are doing robust work, but there is always room for improvement (see ‘Solutions’). “There are easy fixes to situations that shouldn’t be happening, but are,” she says.
Solutions
There are many things wrong with the fictitious lab shown above. But here are six that a quality-assurance manager would identify, and how they would solve them.
DISORGANIZED SAMPLE STORAGE
Clear labelling and proper organization are important for incubators and freezers. Everyone in the lab should be able to identify a sample, where it came from, who did what to it, how old it is and how it should be stored.
INADEQUATE DATA LOGGING
Data should be logged in a lab notebook, not scribbled onto memo paper or other detritus and carelessly transcribed. Notebooks should be bound or digital; loose paper can too easily be lost or removed.
VARIABLE EXPERIMENTS
Protocols should be followed to the letter or deviations documented. If reagents need to be kept on ice while in use, each lab member must comply.
UNSECURED DATA ANALYSIS
Each lab member should have their own password for accessing and working with data, to make it clear who works on what, when. Some popular spreadsheet programs can be locked down so that manipulating data, even accidentally, is difficult.
MISSED MAINTENANCE
Instruments should be calibrated and maintained according to a regular, documented schedule.
OLD AND UNDATED REAGENTS
These can affect experimental results. Scientists should specify criteria for the age and storage of all important reagents.

Michael Murtaugh, a swine biologist at the University of Minnesota, had tried to establish practices to beef up the reliability of his team’s lab notebooks, but the attempts that he made on his own never gained traction. Then Davies got on his case. After a year or so of her ‘planting seeds’, as she puts it, Murtaugh agreed to work with Quality Central and implement a low-tech but effective solution.
Hospital checklists are meant to save lives, so why do they often fail? [7]
On designated Mondays, each member of Murtaugh’s lab draws a name from a paper bag to determine whose notebook to audit. The scientists check that their assigned books include relevant controls for experiments, and indicate where data are stored and which particular machine generated them. The group also makes sure that any problems noted in the previous check have been addressed. It takes about ten minutes per researcher every few weeks, but that’s enough to change people’s habits. Graduate student Michael Rahe says that the checks ensure that he keeps his notebook legible and up to date. “I never used to put in raw data,” he says.

Albert Cirera, a technologist developing gas nanosensors at the University of Barcelona in Spain, has also embraced QA. As his lab group grew to 12 people, he found it difficult to monitor everyone’s experiments, and his own efforts to implement a tracking system were inadequate. He turned to a university-based QA consulting service for help. Now, samples, equipment and their data are all linked with tracking numbers printed on stickers and recorded in individuals’ notebooks, on samples and in a central tracking file. The system does not slow down experiments, and staying abreast of projects is a breeze, says Cirera. But getting to this point took about four months and frequent consultations. “It was not something that you can create from zero,” he says.
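Cirera’s linked-identifier approach boils down to a simple shared record: every sample, instrument and data file carries the same tracking number, which can be looked up in one place. Purely as an illustration (this is not Cirera’s actual system; the file name, fields and helper functions below are hypothetical), a central tracking file could be little more than a CSV that a short script appends to:

```python
import csv
import datetime
from pathlib import Path

TRACKING_FILE = Path("central_tracking.csv")  # hypothetical central tracking file
FIELDS = ["tracking_id", "sample", "instrument", "notebook_ref", "data_file", "logged_at"]


def next_index() -> int:
    """Next sequential record number, based on how many records already exist."""
    if not TRACKING_FILE.exists():
        return 1
    with TRACKING_FILE.open() as fh:
        return sum(1 for _ in fh)  # header plus N records means the next index is N + 1


def register(sample: str, instrument: str, notebook_ref: str, data_file: str) -> str:
    """Assign a tracking number and append one linked record to the central file."""
    tracking_id = f"TRK-{next_index():05d}"
    is_new = not TRACKING_FILE.exists()
    with TRACKING_FILE.open("a", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "tracking_id": tracking_id,
            "sample": sample,
            "instrument": instrument,
            "notebook_ref": notebook_ref,
            "data_file": data_file,
            "logged_at": datetime.datetime.now().isoformat(timespec="seconds"),
        })
    return tracking_id  # printed on a sticker and copied into the lab notebook


# Example: link a sample to the instrument and notebook page that produced its data.
print(register("sensor-batch-07", "tube-furnace-2", "notebook AC-3, p. 41", "runs/2024-05-12_furnace2.csv"))
```

Whatever the tooling, the value is in the link itself: the same number has to appear on the sample, in the notebook and alongside the data file, so that any result can be traced back to its source.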
Making a market
Any scientist adopting a QA system has to wager that the up-front hassle will pay off in the future. “It is very difficult to get people to check and annotate everything, because they think it is nonsense,” says Carmen Navarro-Aragay, head of the University of Barcelona quality team that worked with Cirera. “They realize the value only when they get results that they do not understand and find that the answer is lurking somewhere in their notebooks.”
Lab-inventory management: Time to take stock [8]
Even when experiments go as expected, quality systems can save time, says Murtaugh. Methods and data sections in papers practically write themselves, with no time wasted in frenzied hunting for missing information. “There are fewer questions about how experiments were done and where data are stored,” says Murtaugh. “It allows us to concentrate on biological explanations for results.”

The more difficult data are to collect, the more important a good QA system becomes. Catherine Bens, a QA manager at Colorado State University in Fort Collins, remembers getting cold, wet and dirty when she had to monitor a study involving ultrasound scans and blood samples from a population of feral horses in North Dakota. Typical animal-identification practices such as ear tagging were not allowed. So, before the collection started, Bens supported researchers as they rehearsed procedures, pre-labelled tubes, made back-up labels and recruited animal photographers and park volunteers to ensure that samples would be linked to the correct animals. Even in a snowstorm with winds so loud that everyone had to shout, the team made sure that each data point could be traced.
Rare samples or not, few basic researchers are clamouring to get QA systems in place. Most are unfamiliar with the discipline, says Davies. Others are hostile. “They see it as trying to constrain them, and that you’re making them do more work.”
Reproducibility crisis: Blame it on the antibodies [9]
Before awarding certain grants, the Found Animals Foundation in Los Angeles, California, which funds research on animal sterilization, requires proof that instruments have been calibrated and that written plans exist for tracing data and dealing with outliers. “It can be a struggle,” says Shirley Johnston, scientific director of the foundation. One grant recipient argued that QA systems were unnecessary because just looking over the data would reveal their quality.

Part of the resistance may be down to how some QA professionals present themselves. “A lot of them are there to tell you what you are doing is wrong, and a lot of them are not very nice about it,” says Terry Nett, a reproductive biologist at Colorado State University who experienced this first-hand when he worked with outside consultants to incorporate Good Laboratory Practice principles in his lab. The effort was frustrating. “Instead of helping us understand, they would act like a dictator,” Nett recalls. “I just didn’t want them in my lab.”

A few years ago, however, the university hired its own quality managers, and things changed. The current manager, Bens, acts more like a partner, Nett says. She points out where labs are already using robust practices, and explains the reasoning behind the QA practices that she introduces. To win scientists over, Bens stresses that QA systems produce data that can withstand criticism. “You build a support system around any data point you collect,” she says. When there is a strange result, researchers have documentation to trace its provenance. That can show whether a data point is real, an outlier or a problem: for example, if a blood sample was not kept cold or was stored in the wrong tube.
Scientists need to take the lead on which QA elements they incorporate, says Melissa Eitzen, director of regulatory operations at the University of Texas Medical Branch in Galveston. “You want to give them tips that they can take or not take,” she says. “If they choose it, they’ll do it. If you tell them they have to do it, that’s a struggle.”

Rapport is paramount, says Michael Jamieson at the University of Southern California in Los Angeles, who helps other faculty members to move research towards clinical applications. Instead of talking about quality systems, he prefers to discuss concrete behaviours, such as labelling bottles with expiry dates and storage conditions. QA jargon puts scientists off, he says. “Using the term ‘good research practice’ makes most researchers want to run the other way.” It’s a lesson that many QA specialists have taken to heart. Some say ‘assessment’ or ‘quality improvement’ instead of ‘audit’. Even ‘research integrity’ can be an inflammatory phrase, says Davies. “You have to find a way to communicate that QA is not punitive or guilt-inspiring.”
Not into temptation
How scientists fool themselves and how they can stop [10]
Having data that are traceable, down to who did what experiment on which machine and where the source data are stored, has knock-on benefits for research integrity, says Nett. “You can’t pick out the data that you want.” Researchers who must provide strong explanations about why they chose to leave any information out of their analysis will be less tempted to cherry-pick data [11]. QA can also weed out digital meddling: popular spreadsheet programs such as Microsoft Excel can be vulnerable to errors or manipulation if not properly locked, but QA teams can set up instruments to store read-only files and prevent researchers from tampering with data, accidentally or intentionally (a minimal scripting sketch of this kind of lockdown appears below). “I can’t help but think that QA is going to make fraud harder,” says Davies.

And good quality systems can be contagious. Melanie Graham, who studies diabetes at the University of Minnesota, often collaborates with others to test potential treatments. More than once, she says, collaborators have sent her samples in a polystyrene tube with nothing but a single letter written on it. Graham sends it back and requests a label that specifies the sample’s identity and provenance, and a range of storage temperatures. ‘Keep frozen’ is too vague: she will not risk performing uninformative experiments because reagents stored in a standard freezer were supposed to be kept at −80 °C.
When she first sent documentation requirements to collaborators, she expected them to push back. Instead, reactions were overwhelmingly positive. “It’s a relief for them,” says Graham. “They want us to handle their test article in a trusted way.”
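The ‘read-only files’ safeguard mentioned above needs surprisingly little tooling. As a minimal, hypothetical sketch (not any particular lab’s pipeline, and assuming POSIX-style file permissions), an instrument-export step could fingerprint each new data file and then strip its write permissions, so that changing the raw file later requires a deliberate, documented action:

```python
import hashlib
import os
import stat
from pathlib import Path


def archive_readonly(data_file: str) -> str:
    """Fingerprint a freshly exported data file, then remove its write permissions."""
    path = Path(data_file)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()

    # Store the checksum next to the data so that any later edit is detectable.
    path.with_name(path.name + ".sha256").write_text(f"{digest}  {path.name}\n")

    # Clear all write bits: the file can still be read and analysed, but not silently overwritten.
    mode = os.stat(path).st_mode
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    return digest


# Example (hypothetical path): lock down a plate-reader export straight after acquisition.
# archive_readonly("exports/plate_reader_run_042.csv")
```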
The benefits go beyond providing solid data. In 2013, Davies worked with Torremorell and other Minnesota faculty members on a proposal to monitor and calibrate equipment used by several labs. The plan that they put in place helped them to secure US$1.8 million to build shared lab space to deal with animal pathogens, says Torremorell. “If we want to be competitive to get funding, and if we want people to believe our data, we need to be serious about the data that we generate.”
Robust research: Institutions must do their part for reproducibility [12]
Davies is still trying to spread the word. Her invitations to give talks and review grant applications have mushroomed. She and collaborators at other institutions have been developing online training materials and offering classes to technicians, postdocs, graduate students and principal investigators. After a presentation last year, a member of the audience told her that he had reviewed a grant from one of her clients; the QA plan had made the application stand out in a positive way. Davies was delighted. “I could finally come back to my folks and say, ‘It was noticed.’”
Davies knows it is still an uphill battle, but her ultimate goal is to make QA as much a part of research as peer review. It may not have the flash and dazzle of other efforts to ensure that research is robust and reproducible, but that is not the point. “A QA programme isn’t sexy,” says Michael Conzemius, a veterinary researcher at the University of Minnesota and another client of Quality Central. “It’s just kind of become the nuts and bolts of the scientific process for us.”
References
1. Answers (nature.com)
2. Biomedical researchers lax in checking for imposter cell lines (www.nature.com)
3. Microbiome science threatened by contamination (www.nature.com)
4. Challenges in irreproducible research (www.nature.com)
5. http://doi.org/477 (doi.org)
6. ‘Solutions’ (www.nature.com)
7. Hospital checklists are meant to save lives, so why do they often fail? (www.nature.com)
8. Lab-inventory management: Time to take stock (www.nature.com)
9. Reproducibility crisis: Blame it on the antibodies (www.nature.com)
10. How scientists fool themselves and how they can stop (www.nature.com)
11. Less tempted to cherry-pick data (www.nature.com)
12. Robust research: Institutions must do their part for reproducibility (www.nature.com)