With Proper Use, Facial Recognition AI Benefits All
By Jon Gacek, Government, Legal and Compliance Lead, Veritone
Since last month’s New York Times story on Clearview AI, the start-up that helps law enforcement identify unknown people based on their online images, the benefits of facial recognition (particularly when used by law enforcement) have been the subject of renewed discussion. Lawmakers, civil liberty groups and others have raised concerns about the privacy implications of a tool that could allow law enforcement to identify law-abiding citizens, and learn where they live and work, simply by taking their picture on the street. Because Veritone is a leader in artificial intelligence, multiple media outlets have reached out to us for our perspective on this issue, which I summarize in this piece.
The problem with Clearview AI is not the use of facial recognition technology itself, but rather the way Clearview AI has compiled its database of faces and some of the ways such a database could be used. Most use cases for facial recognition technology today have nothing to do with automated surveillance of the public or identification of individuals not involved in crimes, two of the concerns most often raised by critics. As I told TechRepublic, there are dozens of use cases for facial recognition today that benefit society and make our communities safer. For example, AI can help law enforcement sift through massive amounts of data to find human trafficking victims in online ads, assist in the identification of terrorists and help dismantle organized criminal networks.
Despite the unfortunate firestorm Clearview AI has caused, it would be a mistake to over-regulate or ban facial recognition AI outright, especially in cases where the technology only introduces efficiencies into processes that humans already conduct manually. Ultimately, technology is a tool, and like any tool it can be used for good or bad purposes; the key is to regulate the potentially bad uses while retaining the good ones. I believe facial recognition AI can be used in ways that solve problems and benefit society at large while also protecting our privacy and security.
Data size matters
One of the biggest problems I see with Clearview AI is its use of a very broad data set, compiled by scraping photos from public websites (in violation of their terms of use). Beyond the potential privacy overreach, the sheer size of such a database can also produce more false-positive matches.
As I outlined to Law.com, the bigger the database, the more false positives you get; at some point the error rate crosses a tipping point beyond which the results are no longer useful. Clearview AI has claimed a high level of accuracy in its marketing materials, but those claims have been contested. Using a narrower, more relevant band of data generally produces more accurate results.
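To see why scale works against accuracy, consider a simple back-of-the-envelope model. The sketch below assumes a fixed per-comparison false match rate; both that rate and the gallery sizes are purely illustrative and are not measured figures for Clearview AI or any other system.

```python
# Illustrative sketch: how false positives grow with gallery size.
# Assumes every probe is compared against every gallery entry with a
# fixed per-comparison false match rate (fmr). Numbers are hypothetical.

def expected_false_positives(gallery_size: int, fmr: float = 1e-4) -> float:
    """Expected number of false matches when one probe image is compared
    against every entry in a gallery of the given size."""
    return gallery_size * fmr


def prob_at_least_one_false_match(gallery_size: int, fmr: float = 1e-4) -> float:
    """Probability that a single probe returns at least one false match."""
    return 1.0 - (1.0 - fmr) ** gallery_size


# Illustrative sizes: a department-scale known-offender database versus
# progressively larger, web-scale photo collections.
for size in (10_000, 1_000_000, 1_000_000_000):
    print(f"gallery={size:>13,}  "
          f"expected false matches={expected_false_positives(size):10.1f}  "
          f"P(at least one false match)={prob_at_least_one_false_match(size):.3f}")
```

Under these toy assumptions, a probe searched against a department-scale gallery yields roughly one expected false match, while the same probe searched against a web-scale collection yields tens of thousands, which is the "tipping point in usefulness" described above.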
Doing what’s already done — faster and cheaper
Although facial recognition technology can certainly be misused, when applied appropriately it can be a huge help. In many cases, facial recognition AI simply performs tasks that humans have been doing for decades, only much faster and at a much lower cost.
As outlined in this ZDNet story and TechTarget’s Anaheim PD case study, our product, Veritone IDentify, uses AI to compare images of suspects appearing in video and photographic evidence, such as security camera footage, with images in a department’s known-offender database. These databases contain the booking records and mugshots of previously arrested suspects, as opposed to imagery of the general public mass-collected by scraping social media accounts. By matching video or still-image evidence from a crime scene against a known-offender database, Veritone IDentify provides law enforcement officials with a list of possible matches. It is important to note that in this case AI is simply accelerating an otherwise long, manual process of an investigator leafing through mugshots to find potential matches; in that respect, it is no different from the way technology accelerated fingerprint identification decades ago. And just as with fingerprint identification, officers must continue to use additional investigative techniques to make a positive identification and build a case for probable cause and arrest.
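For readers curious about what a "list of possible matches" looks like in practice, here is a generic sketch of ranking candidates against a known-offender gallery. This is not Veritone IDentify’s implementation; the embeddings, threshold and top-k values are hypothetical, and the random vectors in the usage example merely stand in for the output of a real face-embedding model.

```python
# Generic sketch of ranking candidate matches from a known-offender gallery.
# NOT Veritone IDentify's implementation; it only illustrates the common
# pattern of comparing a probe embedding against gallery embeddings and
# handing a short, ranked list of leads to a human investigator.
import numpy as np


def rank_candidates(probe: np.ndarray,
                    gallery: np.ndarray,
                    booking_ids: list[str],
                    threshold: float = 0.6,
                    top_k: int = 10) -> list[tuple[str, float]]:
    """Return up to top_k (booking_id, similarity) pairs scoring above threshold.

    probe:   unit-normalized embedding of the face from crime-scene footage.
    gallery: (N, d) array of unit-normalized embeddings of booking photos.
    """
    # Cosine similarity reduces to a dot product for unit-normalized vectors.
    scores = gallery @ probe
    order = np.argsort(scores)[::-1][:top_k]
    return [(booking_ids[i], float(scores[i])) for i in order if scores[i] >= threshold]


# Synthetic usage example: random unit vectors stand in for real embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(500, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
probe = gallery[42] + 0.05 * rng.normal(size=128)   # noisy copy of entry 42
probe /= np.linalg.norm(probe)
ids = [f"booking-{i:05d}" for i in range(500)]

print(rank_candidates(probe, gallery, ids))  # entry 42 should be the top candidate
```

The design point worth noting is that the output is a short, ranked list of leads drawn from booking records, not an automated identification: the final determination remains with the investigator.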
The bottom line is that as facial recognition AI continues to develop, the conversation should focus on the data sources it draws from and the ways it is used, rather than on outright bans. To avoid privacy infringement and promote the greater good, technology start-ups and other users must be mindful of where they source their data and how they use it. Even with AI, humans can and do have the final say.
To see our thoughts on this ongoing topic, follow the hashtag #VERIpragmaticAI.
Further Reading
Webinar Recording: How Automating Suspect Identification Saves Law Enforcement Time & Money
Facebook Tackles Terrorism with AI Photo and Video Matching
Intelligent, Rapid Suspect Identification for Criminal Investigation and Law Enforcement