A string of negative stories about automated facial recognition systems has brought the racial and gender biases of these technologies front and center in the minds of many. Most recently, this has resulted in a number of legal actions in the United States, as well as the IRS eliminating facial recognition from the 2022 tax season.
The question of fundamental bias in face matching is not a new problem. Research has clearly demonstrated that humans are notoriously bad at face matching and even worse at cross-racial face matching, with roughly a 76% correct identification rate for same-race determinations and 46% for cross-race determinations. Moreover, a study of trained passport officers demonstrated that these inaccuracies do not improve with experience or training: accuracy never exceeded 84% and was usually in the 60% range.
When compared against human results, automated facial recognition systems outperform on both speed and accuracy. Yet public perception remains largely unmoved by the growing body of data documenting significant, ongoing progress toward eliminating bias and further increasing accuracy. The facial recognition industry now routinely operates above 90% accuracy, with bias variances well below those of human reviewers, in some cases by a factor of 10.
Studying the Data
Many of the issues affecting the perception of the technology were highlighted in a study by the National Institute of Standards and Technology (NIST), which found that many of the algorithms powering modern facial recognition systems displayed an apparent bias in their results. The study itself has been called into question, however, owing to flaws in the evaluation data that NIST is working to correct. Even with those mixed results, commercially available algorithms still regularly performed above 90% accuracy, significantly outperforming previous generations of human manual review and determination in both raw accuracy and relative bias.
Furthermore, a seminal report from MIT confirmed the biases in commercial facial recognition algorithms but found that commercial vendors had significantly improved on these shortcomings within a single year.
No system is perfect, but the continual advances in the field cannot be ignored, and the most salient point is often overlooked in the media: while automated facial recognition technologies have accuracy challenges and biases, they are still far better than the alternatives of human-in-the-loop review or legacy knowledge-based authentication (KBA) techniques. Still, the fact that automated technologies are clearly and demonstrably better than human review does not absolve the commercial industry from acknowledging and addressing the shortcomings and biases that do exist.
Course Correcting Is Fundamental to Progress
Legislative initiatives at the state and federal levels are attempting to address the public debate, most recently with an update to the Algorithmic Accountability Act of 2022 that would empower the Federal Trade Commission to require performance reporting. The scientific data is clear: these automated facial recognition systems must be allowed to keep improving and narrowing the bias gaps that exist within all human interactions, without losing the benefits the technology has already delivered.
These facts are not lost on academics and independent researchers. Recent months have seen significant movement, including robust accounts of the sources, nature, and risks of bias in automated biometric systems, as well as proposals for the ongoing measurement, reduction, and elimination of bias in algorithms, models, and source data.
The Alan Turing Institute Public Policy Program released a comprehensive review, Understanding bias in Facial Recognition Technologies, which surveys facial recognition technologies, the sources of bias, and their potential risks and consequences, and identifies data organization, labeling, and the consistent application of labels as key issues.
Transcending Bias Through Code
Recent work by NIST, DHS, and others in the United States has targeted the sources of bias in these automated algorithms with different approaches and interesting results, demonstrating that, contrary to public discourse, the problem runs deeper than the datasets the algorithms are trained on.
The DHS report contained two very important findings: that race and gender, and that the composition of identification databases, contribute significantly to similarity scores. In other words, the composition and distribution of the database an algorithm searches matter as much as the composition of its training data. NIST, meanwhile, has recently published a draft proposal focused on the stages of development and deployment of artificial intelligence and machine learning technologies, along with the considerations that must be taken to minimize, monitor, and ethically report these performance characteristics.
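The effect of database composition on search results can be illustrated with a toy simulation. The sketch below is not any vendor's algorithm: it simply models non-matching ("impostor") similarity scores as Gaussian draws, a hypothetical distribution, and shows that a larger gallery gives more opportunities for a spuriously high score, raising the 1:N false-match rate at a fixed threshold.

```python
import random

random.seed(7)

def max_impostor_score(gallery_size, mu=0.30, sigma=0.10):
    # Highest similarity score from comparing one probe face against a
    # gallery of non-matching identities (toy Gaussian impostor model).
    return max(random.gauss(mu, sigma) for _ in range(gallery_size))

THRESHOLD = 0.60  # hypothetical match-decision threshold

def false_match_rate(gallery_size, trials=500):
    # Fraction of 1:N searches in which some impostor clears the threshold.
    hits = sum(max_impostor_score(gallery_size) > THRESHOLD
               for _ in range(trials))
    return hits / trials

small_gallery = false_match_rate(100)
large_gallery = false_match_rate(5_000)
# A larger (or more homogeneous) gallery offers more chances for a
# spuriously high impostor score, so false matches become more likely.
```

A demographically skewed gallery has a similar effect in practice: many visually similar entries behave like a larger effective gallery for probes from that group.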
Other work at MIT points to new opportunities for analyzing training models that may contain bias and mitigating it through variational reweighting during training, effectively homing in on the parts of the model that develop biases in order to remove them. Finally, the influential group behind the original MIT report on bias in facial recognition published a 2019 follow-up to its bias study, which found that all commercial vendors featured in the report had reduced their error by 17.7% to 30.4% within seven months of the original publication. This further differentiates automated systems from human reviewers in both the rate of improvement and the incremental, additive gains demonstrated by industry.
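The reweighting idea above can be sketched in miniature. This is a simplified stand-in, not the MIT method itself: it uses a single hypothetical appearance feature and a histogram density estimate, where the real approach learns a multi-dimensional latent space, but the principle is the same, namely sampling under-represented appearances more often during training.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical 1-D "appearance" feature per training image; real systems
# learn a latent representation rather than using a raw scalar feature.
features = [random.gauss(0.2, 0.05) for _ in range(900)]   # over-represented
features += [random.gauss(0.8, 0.05) for _ in range(100)]  # under-represented

def debias_weights(feats, bins=10, alpha=1.0):
    # Weight each sample inversely to the estimated density of its feature
    # bin, so rare appearances are drawn more often when training batches
    # are sampled; alpha tunes how aggressive the reweighting is.
    binned = [min(max(int(f * bins), 0), bins - 1) for f in feats]
    counts = Counter(binned)
    raw = [1.0 / (counts[b] ** alpha) for b in binned]
    total = sum(raw)
    return [w / total for w in raw]  # normalized sampling probabilities

weights = debias_weights(features)
# Sampling mass now assigned to the under-represented group (features > 0.5),
# which makes up only 10% of the raw data.
rare_mass = sum(w for f, w in zip(features, weights) if f > 0.5)
```

With these weights driving batch sampling, the model sees rare appearances far more often than their raw 10% share, which is the mechanism by which reweighting narrows bias without collecting new data.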
The Road Ahead
Clearly, new legislative and regulatory groundwork is being laid to improve the ethical reporting of commercial facial recognition vendors, and industry frameworks and practices are adapting for the better in real time. Together, these changes make it possible to finally correct the shortcomings of legacy human-assisted and automated technologies and positively impact millions of people globally.
While automated technologies will never be perfect, they are improving constantly and already far surpass legacy human review processes in accuracy while exhibiting less bias. Adopting these technologies in our online activities will significantly improve the equity of all our interactions online.