Combat False Narratives with These Investigative Processes
The report issued by the Center on Privacy and Technology at Georgetown Law on October 18, 2016, makes alarmist claims about the use of facial recognition technology by law enforcement. It calls for greater oversight and accountability, and it sensationally claims that queries performed by law enforcement are biased because people of color are disproportionately represented among the mugshots in the system. I addressed these issues, and why I believe they represent a false narrative, in my previous blog post.
The American Civil Liberties Union (ACLU) and other groups are calling for a Department of Justice investigation into law enforcement’s use of facial recognition technology because they believe public safety agencies are violating the rights of Americans, especially communities of color. Practitioners like me know better.
This is a proven technology that provides great and growing value to public safety. But I will also firmly state that law enforcement users of facial recognition technology must live by a clearly defined process and rock-solid policies that are rigorously exercised and audited. If we don’t actively demonstrate the accountability we all live by, we create open space for fearmongers to air their falsehoods.
Goal of Facial Recognition to Generate Investigative Leads
Let’s consider the facts. The goal of using facial recognition technology is to generate a strong investigative lead, not to definitively conclude that a face matches an identity. As a former detective who has analyzed thousands of images for criminal investigations, I can attest that every image introduced into a facial recognition system is unique. Because of this, a standard workflow and process must be established by every agency that uses the technology.
An initial step in this process is image analysis. The agency must establish a vetting process in which a system user analyzes and visually inspects each image for quality and clarity before conducting any type of facial recognition search or comparison. The user must answer two questions:
Does this image meet the criteria for a facial recognition search?
Can this image be enhanced by software, or will this image be rejected?
These purely human expert judgments are a prerequisite to starting the process, since no two probe images, and therefore no two searches, are ever the same. Facial recognition algorithms read each face differently, and search results reflect that. When images are of higher quality, the process is straightforward: import a face, and the software will return a gallery of candidates, usually with the true match appearing near the top with a higher confidence ranking. Many facial recognition systems on the market today do this very well.
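To make the vetting step concrete, here is a minimal sketch of an automated pre-check that could support, never replace, the human examiner’s judgment. The resolution and sharpness thresholds are illustrative assumptions, not standards from any agency or vendor:

```python
import cv2

# Illustrative thresholds only -- each agency must set its own criteria.
MIN_WIDTH, MIN_HEIGHT = 480, 480   # assumed minimum probe dimensions
MIN_SHARPNESS = 100.0              # assumed variance-of-Laplacian floor

def passes_initial_vetting(image_path: str) -> bool:
    """Flag probe images that clearly fail basic quality checks.

    A passing result does not qualify the image for search; the human
    examiner still makes that call. A failing result simply signals
    that enhancement or rejection should be considered.
    """
    image = cv2.imread(image_path)
    if image is None:
        return False  # unreadable file
    height, width = image.shape[:2]
    if width < MIN_WIDTH or height < MIN_HEIGHT:
        return False  # resolution too low for a meaningful comparison
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= MIN_SHARPNESS  # blurry frames need enhancement first
```

An image that fails such a check is exactly the kind the two questions above are meant to catch: a candidate for software enhancement, or for rejection.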
What about facial recognition results from images of lower quality? Assuming these images can be enhanced by facial recognition software, they can still be searched, but confidence rankings in the gallery of returned faces will vary and the correct candidate will not be as obvious. What is important for agencies to remember is that a lower quality image can still be leveraged to find possible matches. You can enhance the image with software and apply data filters. Search results often change when metadata filters are applied, whether the database holds hundreds, thousands, or millions of records. When search parameters are defined and levels of specificity are set by the user, results vary but also become more precise. Users can narrow or expand searches based on filters and gallery sizes. These factors can improve results, but they still require the user to individually examine each candidate image. The algorithms may pick up minute facial features, but the subject will more likely appear deeper in the candidate list.
The point is that with the right gallery size and appropriate metadata selections, lower quality images can still return useful results. It is also critical that the investigator using facial recognition technology has detailed knowledge of the case, such as the unknown suspect’s descriptive features or the location where the crime occurred; this helps improve the quality of search results. Just as important, the user needs a keen eye for detail to locate the “face” within the list of candidates, along with patience and persistence in applying filters.
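As an illustration of how metadata filters narrow a gallery before human review, consider the following hypothetical sketch. The record fields and the filter_candidates helper are invented for this example and are not drawn from any particular facial recognition product:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GalleryRecord:
    face_id: str
    sex: str            # pedigree metadata captured with the photo
    race: str
    age: int
    match_score: float  # algorithm similarity score, higher is better

def filter_candidates(records: list[GalleryRecord],
                      sex: Optional[str] = None,
                      race: Optional[str] = None,
                      age_range: Optional[tuple[int, int]] = None
                      ) -> list[GalleryRecord]:
    """Narrow a ranked candidate list with metadata filters.

    A better-filtered gallery pushes viable candidates up the list;
    the examiner still reviews each returned face by eye.
    """
    out = records
    if sex is not None:
        out = [r for r in out if r.sex == sex]
    if race is not None:
        out = [r for r in out if r.race == race]
    if age_range is not None:
        low, high = age_range
        out = [r for r in out if low <= r.age <= high]
    return sorted(out, key=lambda r: r.match_score, reverse=True)
```

The design point is that filtering happens on pedigree metadata, not on the face comparison itself; the algorithm’s ranking is untouched, only the pool it ranks over shrinks.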
Two-Level Verification Process Critical
Once a subject has been identified as a possible match through physical attributes, first-level verification begins. Because many doppelgangers exist in this world (he looks like him, she looks like her), an immediate background investigation must be performed on faces with similar physical characteristics. This first-level verification validates the candidate as a viable suspect by strengthening the subjective analysis that identified physical similarities between the two faces.
Once the physical characteristics and first-level background checks align, a second-level verification is needed in the form of a peer review. The objective of a peer review is to convince the group that the person in the known image (your candidate) may reasonably be the same person as in the original image sourced in your investigation (your probe image). Findings should be displayed on high-definition screens and presented to a group of three to five peers. The investigator should present the first-level verifications in the form of a “sales pitch” to those peers. This analysis must demonstrate in great detail any physical similarities or differences between the two faces. Annotations should be made and displayed on screen that highlight definitive characteristics such as scars, moles, marks, or tattoos. Talk about the shape of the head, hairline, hair texture, jawline, eyes, nose, mouth region, and the shape of the ears. Discuss any relevant background information obtained about the candidate, provide any arrest history if applicable, and compare investigative background findings to establish associations with the current investigation. The goal of the peer review is to show how you selected your candidate and reached your conclusions.
Ultimately, you want to reasonably place the known candidate, matched and selected from the facial recognition gallery search, at the scene of the crime. To complete the second-level verification, a majority-rules vote should take place. The question everyone should be asking is, “Could this be the person we are looking for? Could this reasonably be the suspect, with all factors considered?”
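A minimal sketch of that majority-rules tally follows. The panel size and the pass rule come from the process described above; the function itself is hypothetical:

```python
def peer_review_passes(votes: list[bool]) -> bool:
    """Second-level verification: majority-rules vote of the peer panel.

    Each vote answers: "Could this reasonably be the suspect,
    with all factors considered?"
    """
    if not 3 <= len(votes) <= 5:
        raise ValueError("peer review panel should be three to five peers")
    return sum(votes) > len(votes) / 2
```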
Critical to note is that once a candidate has been identified through facial recognition and validated through peer review, an arrest still CANNOT be made.
While this is obvious to most facial recognition users in law enforcement, it is still where unintentional mistakes can be made. Many agencies act in good faith to acquire expert testimony from facial recognition analysts, yet they sometimes fall short by making arrests based on that analysis alone rather than on sound implementation of the recommended two-level verification process. Expert analysis can inform the process, but it should not replace it.
Lower Quality Images Require More Vetting and Analysis
An enhanced approach should be taken with images of lower quality, which require more vetting and analysis. When surveillance video is described as “grainy,” the image lacks quality data because it is of lower resolution. This may leave an expert analyst unable to form an opinion either way. If that happens, there is no basis for an algorithmic facial recognition comparison either. As a general rule of thumb, I always teach: “If you can’t see a face, the facial recognition system can’t see a face. If you are presented with a lower quality image that a human being cannot effectively analyze, do not perform any type of comparative facial analysis. It will rarely be credible.”
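The “if you can’t see a face” rule can also be checked mechanically before any comparison is attempted. This sketch uses OpenCV’s bundled Haar cascade purely as a stand-in detector; a production system would rely on its vendor’s own detection stage:

```python
import cv2

def face_is_detectable(image_path: str) -> bool:
    """Reject probe images in which no face can be found at all.

    A negative result here mirrors the rule above: if no face is
    visible, no comparative facial analysis should be performed.
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False  # unreadable file
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```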
I've seen many articles discuss the high probability of facial recognition false positives or the potential for high rates of misidentification. A key to law enforcement success with facial recognition, however, is reducing reliance on the software alone or on expert testimony alone. In police terminology, “Call for backup!” Back up your “faces” with peer reviews and other supporting documents that validate any facial recognition match candidates as viable suspects. Again, the goal of performing facial recognition analysis is to generate a strong lead; we have no time for fishing expeditions. When agencies implement a two-level verification process into their facial recognition workflow, it reinforces every form of subjective analysis, and the false narrative that the technology selects false positives that lead to misidentifications becomes much easier to refute.
Even when images acquired in investigations are of higher resolution, and a person in an image can easily be identified by human eyes, the analysis is still subjective. It still needs other forms of supporting validation to strengthen the already apparent physical similarities between two faces. How can you do this? By adding facial annotations and measurements to the comparative analysis, and by conducting background check verifications. The annotations and supporting documents can be introduced in court as the basis for the facial recognition lead, and this documentation can greatly assist in the prosecution of suspects. Implementing this methodology shows an agency is not simply relying on the facial recognition software, but is using other methods to validate the match.
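A hypothetical structure for that supporting documentation might look like the following. The field names are invented, but the contents mirror what should be annotated, verified, and later introduced in court:

```python
from dataclasses import dataclass, field

@dataclass
class ComparisonAnnotation:
    """One documented similarity or difference between probe and candidate."""
    feature: str   # e.g., "scar above left eyebrow", "jawline shape"
    finding: str   # "similar", "different", or "inconclusive"
    notes: str = ""

@dataclass
class ComparisonReport:
    """Supporting documentation behind a facial recognition lead."""
    probe_image_id: str
    candidate_id: str
    examiner: str
    annotations: list[ComparisonAnnotation] = field(default_factory=list)
    background_checks: list[str] = field(default_factory=list)
    peer_review_passed: bool = False
```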
Agencies Must Independently Establish Probable Cause for Arrest
Upon review, a facial recognition match should be treated no differently than a possible lead called in on a dedicated tip line. The onus still falls on the investigator to independently establish probable cause to effect an arrest. It is important to note: a recommended best practice is that all matches returned by the software, and validated by human analysis, remain possible leads only. Probable cause for arrest must be established by other investigative means.
As to the claims of racial bias in the Georgetown study, when would someone’s race be applied to a facial recognition search? Typically when a user is working with a large database of faces, the suspect has been described as male or female, and the subject’s race is known. This is not a biased search; it is a filtered search. These demographic filters (found within the image metadata) are applied to facial recognition searches to narrow a list of returns to a much smaller scale. This basic metadata, which works in tandem with the facial recognition algorithms, is simply the pedigree information associated with an arrest photo or driver’s license photo: gender, race, age, date of birth, height, weight, or any other descriptive feature.
What needs to be thoroughly evangelized here is that when users apply these filters to a facial recognition search, a list of candidates narrows from millions into thousands and from thousands into hundreds, ultimately allowing an examiner to work from hundreds of “faces” down to one. Filters define searches to levels of specificity and contribute greatly to matching accuracy by surfacing more viable match candidates and driving down the false positives that often plague today’s larger database systems. The bottom line: filters help facial recognition accuracy; they do not hinder performance. They set rules on searches to keep faces categorized. More importantly, facial recognition algorithms are designed to analyze nodal points, or facial landmarks: the eyes, nose, mouth, chin, jawline, and so on. There are approximately 90 nodal points on a human face. Algorithms ignore skin color and base matching probabilities on facial structure. For anyone to claim that facial recognition is racially biased or targets one or more races of color is purely a false narrative. In my experience, these false narratives are fueled and driven by the misinformed, usually for their own self-promoting reasons.
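A small self-contained simulation shows the narrowing effect. The gallery, its size, and the demographic mix are synthetic, invented purely to illustrate how each filter shrinks the candidate pool:

```python
import random

random.seed(1)  # reproducible illustration

# Synthetic million-record gallery -- values are made up for this demo.
gallery = [
    {"sex": random.choice("MF"), "age": random.randint(18, 70)}
    for _ in range(1_000_000)
]

stage1 = [r for r in gallery if r["sex"] == "M"]        # roughly half remain
stage2 = [r for r in stage1 if 25 <= r["age"] <= 35]    # a fraction of those
print(len(gallery), "->", len(stage1), "->", len(stage2))
# Adding race, height, and other pedigree fields keeps shrinking the
# pool toward a shortlist an examiner can actually review by eye.
```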
Documenting and Auditing Key to Facial Recognition Investigations
I strongly recommend that any agency electing to use facial recognition capabilities document each search and maintain full auditing capabilities. Doing so ensures officers are using the technology properly and brings needed integrity to an already unfairly criticized technology. This is simply sound investigative practice.
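As one way to implement that recommendation, here is a minimal audit-logging sketch. The append-only JSON-lines format and the record fields are assumptions for illustration; a production deployment would write to a tamper-evident store under administrator review:

```python
import datetime
import hashlib
import json

def log_search(audit_path: str, user_id: str, case_number: str,
               probe_image_bytes: bytes, filters: dict) -> None:
    """Append one audit record per facial recognition search.

    Hypothetical schema: who searched, when, for which case, with
    which filters, and a hash identifying the exact probe image.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user_id": user_id,
        "case_number": case_number,  # ties the search to an investigation
        "probe_sha256": hashlib.sha256(probe_image_bytes).hexdigest(),
        "filters": filters,          # metadata filters applied to the search
    }
    with open(audit_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Logging the case number alongside every search makes fishing expeditions visible in the audit trail, which is precisely the accountability the critics say is missing.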
Vigilant Solutions’ FaceSearch and LineUp facial recognition offerings are the standard model and platform for facial recognition in law enforcement. Vigilant recognizes the need for security and accountability and has taken a proactive approach to integrating full auditing capabilities at the agency administrator level. We provide a self-regulating inquiry system that meets FBI CJIS compliance standards, giving every agency full dashboard monitoring capabilities so it can gather agency metrics, manage its photo galleries, and assist in the supervision of the personnel who access these law enforcement systems.
Here is our first blog addressing the Georgetown study: Facial Recognition: Racial Bias, Privacy & Misuse. Facts, Metrics, and Accountability Needed to Combat False Narratives about Misuse and Racial Bias