2016-04-13



This article originally appeared in The Intercept.

“I can peel a person’s face apart in 90 seconds,” said the well-dressed woman holding tongs, “but I can’t get a quesadilla out of here.” It had been a long day at the 68th Annual Scientific Meeting of the American Academy of Forensic Sciences. At the private reception in Pavilion 5, the food had gone quickly. All that remained was an unappetizing pile of quesadillas, stubbornly stuck together in their stainless steel buffet tray. As she leaned in to dislodge a clump of tortilla and cheese, the woman’s conference badge revealed that she worked in a medical examiner’s office. Her ID hung from a blue lanyard adorned with the iconic retro sign that greets visitors to town: “Welcome to Fabulous Las Vegas, Nevada.”

We were deep in the bowels of the Rio All-Suite Hotel and Casino. It was late February — just 24 hours after Donald Trump’s victory speech following the GOP caucuses in Nevada. As pundits and political operatives left Sin City, thousands of scientists, lawyers, and academics had arrived for nearly a full week of wall-to-wall panels and PowerPoint presentations by top forensic experts from around the globe. The reception that night was hosted by the group’s forensic dentists, the “odontology” section. In a ballroom down the hall, an audience trickled in for an evening event called “Bring Your Own Slides,” the forensic scientist’s equivalent of an open mic. There, students sought autographs from a famed pathologist in the back of the room, while up front a presenter showed graphic pictures of an exhumed corpse.

The AAFS is the largest professional forensic science organization in the world. Its 6,500 members include doctors, engineers, and scientists of all stripes — practitioners who lend expertise and testimony to lawyers and law enforcement. Founded in 1948, its mission is to “elevate the standards and advance the cause of the forensic sciences.” Membership is governed by a strict code of ethics “to promote the highest quality of professional and personal conduct,” according to the AAFS’s published guidelines, and available “only to those persons of professional competence, integrity, and good moral character.”

Such a buttoned-up image made the Rio a somewhat unlikely venue for the AAFS. The towering compound rises from the desert just west of the Strip, bathed in neon light and surrounded by palm trees. Opened in 1990, the vaguely Brazilian-themed Rio has not aged well, though it remains home to such popular mainstays as the Chippendales, the World Series of Poker, and the magicians Penn & Teller. In February, along with a widely advertised all-you-can-eat deal — 24 hours of unlimited access to five Vegas buffets for $54.99 — the casino was devoting heavy promotion to an upcoming Guy Fieri project called “El Burro Borracho” (the Drunk Donkey).

Although word at the Rio was that the scientists were not a gambling bunch, conference organizers seemed intent on keeping things lighthearted. The thick convention program was decked out with a poker theme; attendees could purchase AAFS shot glasses or a commemorative T-shirt with a Nevada license plate on the back that read “VIVA 4N6.” A silent auction offered such novelty items as coasters covered with fake blood spatter, a human skull belonging to the victim of a fatal slingshot, and a T-shirt with a bone on it reading, “I Found This Humerus.” At their comedic best, forensic scientists blend puns with dark humor. One pathologist’s presentation was titled “Chainsaw-Related Fatalities: What Is All the Buzz About?”


For all the outward playfulness, however, a looming tension hung over the conference — the nagging knowledge that all is not well in the world of forensics. Despite the image peddled by popular TV shows like CSI: Crime Scene Investigation, which portray forensic experts as crime-fighting scientists with unparalleled gifts of observation, the field has become increasingly embattled in recent years. Crime labs have come under fire for mishandling evidence, and high-profile exonerations have exposed how “junk science” has sent innocent people to prison. The bad press has led to heightened skepticism of forensics, forcing practitioners to defend their reputation.

2015 was no exception. Soon after the AAFS convened last February under the banner “Celebrating the Forensic Science Family,” a series of controversies cast further scrutiny on the field. There was the abrupt halting of DNA testing in Washington, D.C.’s first independent crime lab — a three-year-old $220 million project whose director was forced to resign amid damning audits. There was the ongoing fallout in Massachusetts over a crime lab chemist who falsified thousands of drug tests over her nine-year career. And there were the usual headlines exposing miscarriages of justice based on junk science: a Texas man freed after 25 years in prison due to bad “bite-mark” evidence, and three men exonerated in New York after more than 30 years based on a faulty arson investigation (one died of a heart attack in prison). Among the record number of cleared cases in 2015, according to the National Registry of Exonerations, 45 involved “false or misleading forensic science.”

But perhaps most devastating, in April 2015, the Justice Department issued a bombshell announcement, formally admitting to a disastrous mishandling of evidence that lawyers, prisoners, and even its own forensic experts had pointed out for years. For more than two decades, as the Washington Post reported, FBI analysts doing hair-fiber examination “gave flawed testimony in almost all trials in which they offered evidence against criminal defendants.” In a post-conviction review of thousands of cases dating from before 2000, the Innocence Project and the National Association of Criminal Defense Lawyers had so far found exaggerated testimony by FBI analysts in a staggering 95 percent of cases. This included 32 defendants sentenced to death, 14 of whom were executed or died in prison before the problems were publicly acknowledged.

For the forensic community — and for the feds, who have trained countless local and state analysts in hair-fiber analysis — it was a PR disaster. There was no escaping the crisis at hand. The AAFS had no choice but to confront it. This year, the conference theme was “Transformation: Embracing Change.”

Bad Hair Days

At the opening plenary in the Rio’s Brasilia Ballroom, U.S. Deputy Attorney General Sally Yates started with the good news. “I’m happy to say that we’re making real progress in our efforts to strengthen the way forensic science is practiced in our laboratories and presented in our courtrooms,” she announced. For the first time in history, Yates said, the Department of Justice had imposed accreditation standards for its labs, requiring that “whenever practicable, DOJ prosecutors use accredited labs when testing evidence.” What’s more, she said, as an incentive to states and localities seeking federal funds, the DOJ would give “a ‘plus factor’ to grant applicants who will use the money to seek accreditation.”

That the federal government’s own crime labs had gone for so long without basic standards and oversight was a grim reminder of what passes for progress in 2016. The move was one of the first recommendations put forward by the National Commission on Forensic Science, formed in 2013 through a partnership between the Obama administration and the National Institute of Standards and Technology (NIST). (Yates, a veteran prosecutor and former U.S. attorney in Georgia, serves as the commission’s co-chair.)



[Image: The landmark 2009 National Academy of Sciences report on the state of forensics.]

The National Commission on Forensic Science itself was the product of a landmark report released by the National Academy of Sciences (NAS) in 2009, which urged the U.S. government to establish an “independent federal entity” to address deep and widespread problems with the state of forensics. Titled “Strengthening Forensic Science in the United States: The Path Forward,” the 254-page study was a wake-up call to the scientific and legal communities, raising major concerns over the way analysts handle the most common and longstanding forms of forensic evidence. The report concluded that nearly every single area of forensic science is plagued by serious questions of scientific validity and reliability. “With the exception of nuclear DNA analysis,” the NAS report read, “no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.”

In particular, the report criticized branches of forensics known as “pattern-matching” — the analysis of such visual evidence as fingerprints, blood spatter, handwriting, and bite marks — as lacking any actual scientific underpinning. Also called “impression-matching,” these disciplines essentially boil down to a given “expert” eye-balling two or more objects and deciding whether they match — say, a bloody shoe print left at a crime scene and an actual shoe seized from a suspect, or tire marks left on pavement and the tires on a suspect’s car. There are no real standards guiding the interpretation of such visual evidence, so conclusions are based on subjective criteria. In some ways, the process is no more complicated than a child’s picture-matching game.

To say that the NAS report caused great upheaval would be an understatement. Its sharp assessments pulled the rug out from under even the oldest and most venerable disciplines within the forensic science community. Although seven years have passed since its release, in many ways, the field has barely begun to digest it, let alone devise solutions. Today, the NAS report comes up again and again wherever forensics reform is discussed. Vegas was no exception.


Turning to the bad news — the catastrophic revelations about the FBI’s microscopic hair comparison unit — Yates spoke carefully. “It’s clear that, in at least some of the cases reviewed, lab examiners and attorneys either overstated the strength of the forensic evidence or failed to properly qualify the limitations of the forensic analysis,” she said. “This doesn’t necessarily mean that there were problems with the underlying science,” she continued. “It means that the probative value of the scientific evidence wasn’t always properly communicated to juries.”

To guard against such “testimonial overstatement,” Yates said, the FBI would be taking steps to make sure its experts deliver conclusions in court that are “supported by existing science.” Along with a “root cause analysis” of what went wrong with its hair-fiber analysis, the DOJ also plans to expand its ongoing review to other forensic practices — “not because of specific concerns with other disciplines,” Yates emphasized, somewhat defensively, but in order to “ensure the public’s ongoing confidence in the work we do.”

Yates did not identify the forensic practices the DOJ plans to assess — the department is just beginning to plan its review. But echoing the NAS report, she cited the “so-called ‘pattern’ or ‘impression’ disciplines” as presenting “unique challenges.” Despite her assurance that the DOJ harbors no particular concerns about any specific disciplines, it seemed clear that these would be first in line. “We’re thinking of it as a forensics ‘stress test,’” Yates said.

It’s Not Rocket Science

Yates’s announcement was swiftly applauded by the National Association of Criminal Defense Lawyers and the Innocence Project, along with Sen. Patrick Leahy, ranking Democrat on the Senate Judiciary Committee, who issued a press release praising the DOJ for its review, which would ensure that the “public can learn exactly what went wrong and how we can prevent this from ever happening again. Americans need and deserve a criminal justice system worthy of its name.”

Inside the Rio, it was harder to gauge the immediate reaction — but there was good reason to expect that, for some attendees, the review would not be welcome news. While the theme for the annual AAFS meeting has been consistently upbeat in the years since the NAS report first raised red flags — “The Forensic Sciences: Founded on Observation and Experience, Improved by Education and Research” (2013); “Our Path Forward” (2014) — the response from certain practitioners has been decidedly less so. Particularly among the forensic odontologists who practice bite-mark analysis, the reaction has been downright aggressive.

The “science” of bite-mark analysis relies on two conceits — first, that human dentition is unique, and second, that human skin is a sufficient and reliable substrate on which to record that uniqueness. The problem is that neither proposition has ever been proven — and the only empirical research attempting to do so has shown neither assumption to be true. Nonetheless, the subjective conclusions of bite-mark analysts have been allowed into evidence in criminal cases since the 1950s, when a Texas grocery store burglary was solved with the help of a dentist who matched a suspect’s teeth to a bite mark left in a piece of cheese found at the crime scene.


In the past few decades, as bite-mark evidence has been linked to wrongful convictions, there has been growing recognition that there is no real science to support bite-mark analysis — including among members of AAFS. This has not gone over well with forensic odontologists. At the 2014 AAFS conference in Seattle, some sessions erupted into near shouting matches, as members of the American Board of Forensic Odontology (ABFO) — the discipline’s certifying body — reacted with hostility to presenters sharing research challenging the reliability of bite-mark analysis. One researcher was grilled so intensely that he was visibly shaking when he returned to his seat after his presentation (and even after he sat down, the grilling continued, from an odontologist sitting behind him). That same year, at a dinner hosted by the ABFO, a guest named Melissa Mourges — an assistant district attorney in Manhattan and a perfervid defender of bite-mark analysis — peppered her talk with nasty personal attacks on a scientist named Mary Bush, who along with her husband, Peter, has conducted critical (and ultimately unflattering) research into bite-mark evidence.

Following the Seattle gathering, the ABFO sought to show it had standards guiding its work. Members developed an elaborate “decision tree” to illustrate the process of identifying bite marks and matching them to a specific person’s teeth. But the project backfired: When it came to the first, most basic question on the chart — “Is this a bite mark?” — participating dentists were unable to clear even that initial hurdle. Of 100 case studies reviewed by 39 ABFO-certified bite-mark experts, there was agreement on that question just four times. The decision tree’s discomfiting results were presented at the following AAFS conference in Orlando, Florida, in 2015. This time, the odontology sessions were more subdued.

Still, there was drama that year. Rubbing salt in the ABFO’s wounds, AAFS leadership rejected a bid on the part of the board to banish one of its own members, C. Michael Bowers, a California dentist and attorney. Bowers, a vocal critic of bite-mark analysis, had been subjected to a trumped-up ethics complaint brought forward by an unwavering clique of ABFO members who, in part, accused Bowers of changing his expert opinion in a bite-mark case in exchange for remuneration. In reality, it was an open secret that the ABFO wished to expel Bowers for the crime of being too outspoken. The saga came to a head in Orlando, where Bowers celebrated the AAFS’s refusal to oust him by sporting a T-shirt with the image of a California license plate that read “XONR8,” and where one ABFO member, Richard Souviron, angrily confronted AAFS President Victor Weedn about the decision to dismiss the complaint, demanding to know, several times, whether Weedn had “any balls.” Yes, Weedn replied, he does. (Weedn told The Intercept that Souviron subsequently apologized.)

Things have not improved for the bite-mark matchers. Last year saw a storm of withering criticism in the press, including a four-part series in the Washington Post and an investigative report by The Intercept. In October, a Texas man named Steven Mark Chaney was released after spending 25 years in prison for murder on the testimony of an expert who told jurors that there was only a “one in a million chance” that marks found on the victim could have come from anyone else. Ultimately, a Dallas judge and county prosecutors agreed that Chaney should be freed based on the finding that bite-mark analysis is, indeed, junk science. To date, 24 wrongful arrests or convictions have been linked to bite-mark evidence; several additional cases are pending before various courts. And on February 12, 2016, less than two weeks before the AAFS conference, the Texas Forensic Science Commission issued a landmark decision calling for a state moratorium on the use of bite-mark evidence unless and until the practice can be scientifically validated. The commission has also ordered a review of every Texas conviction where bite-mark evidence was allowed.

If there was reason to believe the bite-mark loyalists might arrive in Vegas chastened, or more willing to consider criticism of their field, the odontology sessions at the AAFS conference quickly proved otherwise. Instead of presenting any new research — or even plans for new research that could lead to validation of the practice — bite-mark defenders doubled down, stressing the value of the discipline and warning about how frightening a world it would be without it. Many presentations were more like attaboy affirmations, delivered with a side of subtle (and sometimes not-so-subtle) jabs at critics.

In one presentation titled “Bite Marks — Maybe It Is Rocket Science,” Florida dentist Kenneth Cohrn derided the NAS report as more “opinion paper” than scientific document and slammed critics “posing as experts,” including journalists, calling their critiques “opinionated, sensationalized, and not scientific.” One slide featured a prominent picture of Mary and Peter Bush, presenting them as foes who wish to ban bite-mark evidence from the courtroom — one of three separate references to the couple during Cohrn’s 15-minute talk. In another presentation, “Scorched Earth Forensics: Why the Move to ‘Eradicate’ Disciplines From the Courtroom Is Bad for Science and Bad for the Law,” Melissa Mourges delivered a heavy dose of righteous indignation. After dissing the NAS report — “not everything” can be tested like a “school science project” — Mourges pointed to forensic psychology as a discipline that is even more subjective than bite-mark examination, yet hasn’t been attacked in the same fashion. (Some of her best friends are forensic psychologists, she added, “so I do not want to read in some stupid blog tomorrow that I badmouthed” the field.) Mourges warned that getting rid of bite-mark evidence would almost certainly lead to tragic results — by eliminating potentially exculpatory evidence that could actually help criminal defendants and by allowing child abusers to go unpunished. We shouldn’t “throw the abused baby out with the bathwater.”

For a casual observer unaware of the turmoil within the world of forensic odontology, the sessions in Vegas might have seemed impenetrable or inexplicably tense — definitely a little weird. When it was Bowers’s turn to present on the “rise and fall” of bite-mark analysis, there was some anticipation that he might face heckling or snide questions from the very colleagues who previously colluded to try to oust him. Instead, the crowd was almost exaggeratedly polite. (“That was kind of disappointing,” one dentist joked afterward.) Yet, outside the room, attendees at his session were greeted at the door by a stack of mysteriously placed excerpts from a Washington Post article exposing the lengths to which Mourges will go to defend the evidence she relies on in criminal cases. The printed passages showed how she altered a sentence from the NAS report in a court filing in order to suggest that bite-mark evidence is scientifically accepted — a blatant mischaracterization of the study, which concluded the opposite is true.

At the lectern, Bowers spoke with a casual air — no hint that this was his big comeback after emerging victorious over the ABFO. Acknowledging that some “people want to discount the NAS report,” he called upon them to “admit and accept” its criticisms and “move on.” Without naming names, he chided previous presenters for blaming critics and the media for their problems. “The public wants to know the truth,” Bowers said. Indeed, there are people whose freedom is at stake — he pointed to the case of Bill Richards, who was convicted of killing his wife based largely on the testimony of two bite-mark experts. Those experts have since recanted their testimony and Richards’s case is pending before the California Supreme Court. For Bowers, the Richards case is one of many that raise the question: Don’t defendants have a right to reliable forensic evidence?

Trust Us, We’re Experts

The odontologists’ head-in-the-sand attitude was in sharp contrast to other disciplines represented in Vegas. Consider the fingerprint experts, whose presentations were generally earnest and optimistic — in keeping with the “embracing change” theme of the conference.

No forensic practice is more ubiquitous or familiar than fingerprint analysis. Its origins go back to the 1800s, and like virtually all areas of forensic science, it was further developed primarily by — and according to the needs of — law enforcement. Also known as “friction-ridge” analysis, the practice today involves collecting typically partial — and often distorted or “noisy” — latent prints from a crime scene and then matching them to a whole, clean print taken from a suspect or victim, or pulled from a database. Although the practice is widely seen as foolproof, it has never been subjected to rigorous scientific scrutiny. Nor has there been any kind of standardized training or guidelines for fingerprint examiners — no rules to dictate, for example, how many print details should be considered when contemplating a match. The NAS report noted that “historically,” fingerprints have served as a valuable tool, “both to identify the guilty and to exclude the innocent.” But it also highlighted the “limited information about the accuracy and reliability” of fingerprint analysis, warning that expert claims of “zero error rates are not scientifically plausible.”


Since the release of the NAS report, however, the fingerprint folks have been on their game. Researchers have sought to determine match error rates. Examiners have started to change how they talk about their findings. At the conference, one notably upbeat presenter was Henry Swofford, head of the U.S. Army crime lab’s latent print branch. In two separate sessions, he outlined the issues within the field and shared the solutions his lab had been developing — including reframing the way analysts report and testify on their conclusions (basically, by not claiming that a print can be individualized to a person, which implies 100 percent infallibility). And he described research underway to quantify the degree of correspondence between two impressions and to estimate the likelihood that correspondence pointed to the same source.

Indeed, in acknowledging the previous bad practices among fingerprint analysts, the affable Swofford poked some fun at his own profession — he talked about how he himself had been trained to consider “sufficient” quantities of print detail in determining whether two prints could be matched. “And I thought, yeah, I’m an expert!” But then he realized he was never told what “sufficient” actually meant: “and to date I haven’t been able to find an answer to that question.” Although the lack of specificity and standardization raises troubling questions about how many convictions may have hinged on faulty fingerprint analysis, Swofford said he isn’t certain that it is an issue. In a short post-conference interview, he said that fingerprints have always been considered a “highly discriminating biometric” and nothing arising from current research challenges that. But he was also confident that it would be possible to strengthen the field. Friction-ridge analysis is on the “cusp of real change,” he said, and scientists need to work with the legal community to implement reforms. “And I’m looking forward to it.”

To be fair, the bite-mark dentists weren’t the only ones not exactly rushing to embrace change in Vegas. In a talk titled “Critics Say the Darndest Things!” presenter Jan Kelly, a former president of the American Board of Forensic Document Examiners, focused mainly on how critics of handwriting analysis are often full of baloney and unfairly lump the practice with the junk science of bite marks and hair microscopy. Not a single wrongful conviction has ever been related to handwriting analysis, Kelly argued, at which point someone in the audience piped up: “Dreyfus!” It was a reference to the 1894 court-martial of French army captain Alfred Dreyfus, who was erroneously accused of treason based on a handwritten memo that an expert claimed could be positively matched to him. Kelly acknowledged the exception to her statement. But then she pivoted: Was that an Innocence Project case? “No,” she said, answering her own question. (In fact, the National Registry of Exonerations includes at least one wrongful conviction based in part on questionable handwriting analysis.) Of course, a lack of exonerations does not prove a forensic practice is necessarily sound. The NAS report noted that “there may be some value in handwriting analysis,” while warning that “there has been only limited research to quantify the reliability and replicability” of the practice.

In another session, an enthusiastic podiatrist from Indiana, Dr. Michael Nirenberg, stressed the significance of foot-related evidence in solving crimes. “A lot of people don’t think much about feet,” he said. “In forensic podiatry, we always say, ‘You cannot float through a crime scene!’” Although footprints have long been used as evidence by law enforcement, forensic podiatry is a relatively new specialty — it was not even mentioned in the NAS report. Its professional association, the American Society of Forensic Podiatry, was founded in 2003. In a 2008 article for Evidence Technology Magazine, one practitioner drew a distinction between his work and that of a mere “footwear examiner,” explaining that forensic podiatrists evaluate evidence “for the purpose of connecting an individual to footwear or a footprint.”

Indeed, in Vegas, Nirenberg claimed that a forensic podiatrist can link a suspect to wear patterns — the imprints and indentations inside of a shoe. (Performing a “shoe autopsy” helps with such analysis.) An emerging branch of the field, he said, is studying someone’s gait to link the person to a crime. “It’s very exciting,” Nirenberg said.

Forensic podiatry is a good example of a field that has established itself as a forensic discipline despite a thin scientific basis. Last fall, the Boston Review published an article titled “Forensic Pseudoscience,” which singled out the discipline as an illustration of what law professor and forensics expert Daniel Medwed has described as “rogue scientists” who “flourish” in the absence of oversight. Nirenberg and a colleague took umbrage at the article, writing a letter defending the discipline and pointing out that in a courtroom setting, “Experience suggests that where doubts exist as to expertise, this will inevitably come out during cross-examination.”

In response to follow-up questions posed by email, Nirenberg disagreed that his field would be considered among the pattern-matching disciplines questioned by the NAS report. Practitioners rely on sophisticated and detailed knowledge of the foot — “biomechanics, foot type, pathologies, deformities, and so on” — when considering whether a suspect’s and perpetrator’s footprint can be matched. That, of course, sounds much like the process forensic dentists describe when it comes to analyzing bite marks.

In Vegas, Nirenberg acknowledged that practitioners need to be careful about the opinions they offer the courts. But, seeking to prove that matches between feet and footprints can be scientifically accurate, he also threw out a wild array of disparate statistics as alleged evidence. A study out of India found footprints were distinct to one in 10,000 people, while a California study put that measure at one in 100,000. He even cited research coming out of the Royal Canadian Mounted Police that said the chance of finding a random match for a footprint is one in 1.27 billion. The numbers presented a quandary that was a consistent theme throughout the conference: How can experts express reliability to jurors in the absence of reliable scientific data?

So You Want to Be a Forensic Scientist

The AAFS exhibit hall was housed in the Rio’s Amazon ballroom, a massive space filled with conference sponsors, scientific publishers, and purveyors of forensic gadgetry. On the day it opened, a crush of conferencegoers headed straight for the free sandwiches, while others swarmed around the freebies available at various booths. Along with the usual candy and pens, it was a peculiar grab bag of weird stuff: wound measuring charts, evidence bags in various sizes, a clear plastic tube labeled “CONTAMINATED NEEDLES,” and a sperm-shaped stress toy. The exhibit hall also played host to the AAFS’s annual wine and cheese reception; attendees sipped wine amid human X-rays and lab samples with labels like “urine” and “stained money.”

A booth belonging to AAFS displayed a small handbook titled “So You Want to Be a Forensic Scientist!” A chapter introducing the “general” section of AAFS described how, as “the academy’s gatekeeper,” members of the section are “always willing to consider accepting new disciplines that develop in response to the needs of the justice system.” It quoted a former AAFS president, writing in the Journal of Forensic Sciences in 1983: “There is literally no end to the number of disciplines that become ‘forensic’ by definition,” he wrote. “Nor is there an end in sight to the number of present or future specialties that may become forensic. The examples are many.”

The passages illuminated a central problem on display at the conference. For one, there is the bias built into forensics as a whole, in which scientific objectivity is too easily undermined when deployed in the service of law enforcement. But in addition, even as old forensic techniques are called into question for their lack of scientific basis — and even as the human toll of junk science remains unquantifiable — new areas of scientific “expertise” continue to crop up, eventually making their way into court. For lawyers and judges, figuring out how to use such evidence is a daunting task. Defense attorney Chris Fabricant — director of strategic litigation for the Innocence Project and a major thorn in the side of bite-mark dentists — gives credit to forensic practitioners who have tried to correct their flawed work so that the burden of sorting out junk from legitimate science does not fall to untrained attorneys. “There are many techniques that are moving in the right direction, that are heeding the call of the National Academy of Sciences to rein in their scientifically unsupportable opinions and are rolling up their sleeves and doing basic and applied research,” Fabricant said. “And then there are those who are not, and who refuse to acknowledge the scientific realities.”

For members of AAFS’s jurisprudence section, the practical problem of how to use certain forensic evidence in court — if at all — was a constant theme in Vegas. Every day, in courtrooms across the country, judges act as “gatekeepers” in deciding what kind of evidence to allow. Yet few are equipped to determine whether a given forensic expert is sound in his or her analysis. “Why do we tolerate lawyers that don’t understand the science they’re using?” said one speaker, herself a sitting judge. “Why do we tolerate judges who are willfully science-phobic? I speak of myself, too.” Like many of her colleagues, the judge joked, she had done her best to avoid science throughout her schooling. The same sentiment was echoed in a separate session by a defense attorney from the Twin Cities, who has since found herself navigating a massive crime lab scandal that has cast doubt on scores of convictions.

In his own presentation, Fabricant laid out the absurd reality. When it comes to the assessment of courtroom evidence, it is too often a matter of the blind leading the blind. “We have scientifically illiterate judges, scientifically illiterate lawyers, and scientifically illiterate jurors,” Fabricant said. These are the people determining “whether forensic science is valid and reliable science.”

A number of sessions set out to grapple with this problem. In a presentation titled “Better Ways to Manage Poorly Validated Scientific Evidence,” Michael Saks, a professor of law and psychology at Arizona State University, shared specific suggestions, some of which are already underway — labs should be accredited and examiners certified; evidence should be blind tested, so that an examiner knows only as much about the case as is necessary for testing. (We’re “used to blind tests at county fairs,” Saks noted. Why not also in forensic labs?) Judges must constrain forensic testimony to what is scientifically known in each field — and jurors should be instructed on the limitations of any given field.

Such safeguards, of course, do not solve the deeper problem of poorly validated forensics itself — that project is ultimately up to the broader scientific community. Throughout the conference, a great deal of focus was devoted to the burgeoning process of developing rules and standards for forensic disciplines, which, if done right, will provide desperately needed guidance to lawyers and judges for gauging the reliability of forensic evidence. But it was not always clear that these standards were being designed with the practical needs of the legal community in mind.


Opening the criminalistics presentations of AAFS, Section Chair John Lentini — a chemist, author, and fire investigator who has done heroic work exposing faulty arson convictions — introduced the Organization of Scientific Area Committees (OSAC), a project of the National Institute of Standards and Technology. The launch of OSAC was the “happiest news” Lentini had to share about the past year. “People are actually going to write up some standards,” he exclaimed, somewhat wryly. It would be hard to overstate the scale and scope of this project. Its goal, according to NIST, is “to support the development and promulgation of forensic science consensus documentary standards and guidelines, and to ensure that a sufficient scientific basis exists for each discipline.” Explaining its structure, John Paul Jones II, associate director for OSAC affairs, displayed a color-coded, multi-pronged chart resembling a molecular map, showing a dizzying array of categories and committees, each containing subcategories and subcommittees. More than 540 members make up the committees, from scientists to government workers to private-sector experts. Following a multilayered approval process, each standard and guideline will be placed on a registry — ideally a one-stop-shop for information on forensic best practices.

The OSACs appear to have been met with a mix of anticipation and dread. One slide showed the iconic hands-on-the-TV image from the horror movie Poltergeist, reading: “OSAC Registries: They’re here.” In February, after months of deliberation, NIST published its very first OSAC standard, by the subcommittee on controlled substances. It lays out the minimum criteria for identification of seized drugs — an “essential first step” toward improving chemical analysis of controlled substances. For a limited time, Jones explained, NIST would grant free web access to the standard — along with future ones — to law enforcement, prosecutors, public defenders, and other stakeholders. (On its website, NIST says the standards will be free for up to two years. But currently, the link providing access yields an error message.)

It was hard to understand why, as a government-funded project, the standards should cost money at all — especially since that would discourage anyone with a limited budget from using them. In the ballroom next door to the OSAC presentation, members of the jurisprudence section were struggling with how to better educate themselves on forensics. During a discussion on how to introduce forensics into the law school curriculum, one criminal defense attorney noted that every year, the AAFS conference presents impressive information when it comes to “grants, standards, and accreditation” — work that “produces excellence in forensic science.” But when he gets to court afterward, it is a rude awakening. If such work doesn’t get back to lawyers and judges, he warned, “It is all for naught.”

The Duty to Correct

On March 21, the National Commission on Forensic Science met in Washington, D.C. There, Yates spoke once more about the planned review by the DOJ. “The goal is to start a conversation,” she told members. “And to get your input on the best path forward.” The head of the DOJ Office of Legal Policy, Jonathan Wroblewski, then laid out some of the questions at hand. “It begins with which disciplines — which ones we should be looking at. How do we select the cases? What are the standards by which we should be testing the testimony that was given in those closed cases? Who should be conducting this review?” Although much remains to be seen, Wroblewski said, “We think that initially, we should be considering disciplines that require forensic professionals to compare two items, then to make judgments about the similarities and differences.” He echoed what Yates said in Vegas. “This is about quality assurance. It’s not about the fact that we have any kind of suspicions as to particular disciplines in forensic science.”

The DOJ review is just the beginning of a process that until now, has been almost completely overlooked — and was barely mentioned in Vegas. For all the talk of moving forward and embracing change, the AAFS conference spent precious little time addressing a different imperative — the need to look backward. Or, as lawyers like Fabricant call it, the “duty to correct.” Indeed, as errors and deficiencies in forensics are acknowledged, so too should be the cases in which those deficiencies and errors were allowed into evidence.

Until the FBI’s recent inquiry into hair microscopy cases, such work was done in a scattershot way, mostly at the state level. The Texas Forensic Science Commission, for example, has facilitated a review of dozens of old arson convictions — a process jointly handled by the Innocence Project of Texas in partnership with the state’s fire marshal — and is now embarking on the bite-mark case review. But the review of potential wrongful convictions is largely left to a patchwork of modestly funded innocence projects, law school innocence clinics, and to a small number of conviction integrity units within prosecutors’ offices. (If there is any glaring blind spot in the NAS report, it is likely the failure to address the impact that faulty forensics may have had on a large number of criminal cases. The report acknowledges the troubling way forensics are vetted by judges and lawyers for admission into evidence, but notes only that this demonstrates a “tremendous need for the forensic science community to improve.”)

Generally, the criminal justice system favors finality — a virtue that has been reinforced in recent decades through legislation and the expansion of certain legal doctrines, including the concept of “harmless error,” under which mistakes during a criminal trial are acknowledged upon review but ultimately shrugged off as not having affected the outcome of the case. In short, it is a standard of expediency — and an example of the difference between law and science. Although science is founded on the principle of perpetual inquiry, the legal system prefers the proposition of one-and-done.

The system is simply not designed to facilitate any meaningful wide-ranging review — and more often than not, state actors fight tooth and nail against reopening old cases. Today, there is no way to ensure that every potential wrongful conviction tied to faulty forensics will be identified or remedied. Take, for example, the experience of the defense attorney from the Twin Cities. Even in the face of glaring failures by the state’s crime lab, she said, some prosecutors refused to cooperate with her to reassess the cases impacted by the lab’s incompetence.

Yet, the sheer power of forensic evidence makes such reviews crucial. As the AAFS conference came to a close, Fabricant shared the story of George Perrot, a man released from prison in February. Perrot spent more than two decades incarcerated for the 1985 rape of a 78-year-old Massachusetts woman. Although the woman repeatedly insisted to police that Perrot was not her assailant — he didn’t look at all like her attacker — the then-17-year-old was nonetheless convicted based largely on the testimony of an FBI hair examiner who said a single hair found in the victim’s bed was a match to Perrot. Perrot was sentenced to life in prison. If it weren’t for the FBI’s comprehensive hair microscopy case review, he would still be in prison. The power of forensic evidence in this case was enough to supersede the victim’s insistence that they had the wrong man, Fabricant pointed out. “I find that truly horrifying.”


Interested in books by C. Michael Bowers? Forensic Testimony: Science, Law and Expert Evidence, winner of an Honorable Mention in the 2015 PROSE Awards in Law and Legal Studies from the Association of American Publishers, is a timely reference for interpreting and presenting accurate legal forensic evidence.

Another is Forensic Dental Evidence, 2nd Edition: An Investigator’s Handbook, which addresses unjust convictions caused by inaccurate bite-mark opinions and focuses on cases that use forensic dental techniques, emphasizing modern methods and protocols.

Both of these books can be purchased online via the Elsevier Store at up to 30% off the list price and free global shipping. Enter discount code STC215 at checkout.

