In what is being called “the first known example of a government intentionally using artificial intelligence for racial profiling,” China is using facial recognition technology and its network of surveillance cameras to identify and track Uighurs, a mostly Muslim minority in China’s Xinjiang region.
As the New York Times reported, this marks yet another dramatic shift in who is shaping cutting-edge technologies and for what purpose. While democracies held a near monopoly in this space for decades, over the past year China has set the tone for technologies like CRISPR and AI. The United States may still lead the world in innovation, but China is winning the race to implementation. As Kai-Fu Lee, a leading Chinese venture capitalist, explained in the NYT report, China’s advantage lies in Chinese leaders’ unwillingness to be bogged down by “legal intricacies” or “moral consensus.”
This is a troubling development in how governments use AI and other emerging technologies.
Bringing this even closer to home, the New York Times recently demonstrated how easily people can be tracked by a facial recognition system without their knowledge. Using publicly available images of people who worked near Bryant Park in Midtown Manhattan, the NYT ran a day’s worth of footage through Amazon’s facial recognition service, Rekognition. The system detected 2,750 faces, and the Times identified specific individuals by matching them against its image database. There are far-reaching implications here. As the NYT report explained, “The police and governments may also have access to a vast network of cameras. Combine that with a comprehensive database of faces — like a driver’s license database — and it’s possible to track citizens throughout an entire region in real time.”
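For a sense of how little engineering such a system requires, here is a minimal sketch of that kind of pipeline using Rekognition’s public API via boto3. The collection name, identifiers, and file paths are hypothetical, real footage would first be split into frames, and this is an illustration of the technique, not the Times’ actual code:

```python
# Minimal sketch of a face-tracking pipeline on Amazon Rekognition (boto3).
# Collection name, IDs, and paths are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
COLLECTION_ID = "known-faces"  # created once via rekognition.create_collection()

def enroll_face(image_path, person_id):
    """Index one publicly available photo of a known person."""
    with open(image_path, "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            ExternalImageId=person_id,  # label returned on later matches
        )

def identify(frame_path):
    """Match the largest face in a video frame against the collection.
    (Frames containing several faces would need to be cropped per face first.)"""
    with open(frame_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            FaceMatchThreshold=80,  # the service's default similarity cutoff
        )
    return [m["Face"]["ExternalImageId"] for m in response["FaceMatches"]]
```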
Amazon has come under increased scrutiny for selling its facial recognition software to police departments in Florida and Oregon and pitching its technology to ICE. A recent open letter to Amazon, now signed by 75 experts on AI, is calling on the company to stop selling its technology to law enforcement until there are clear legal safeguards in place. Amazon’s shareholders will also vote next month on a proposal, put forward by the Sisters of St. Joseph of Brentwood, requesting the company stop selling its technology to governments until it is clear it won’t be used to violate civil rights.
While Amazon has said that it is the customer’s responsibility to ensure compliance with the law — and that its Acceptable Use Policy (AUP) makes that clear — history suggests that police departments do not always read terms of service the way the companies that write them do. In Maryland, for example, after Freddie Gray’s death in Baltimore, protesters were tracked via social media posts, and their photos were run through facial recognition systems to identify and arrest those with outstanding warrants. When this practice became public, Instagram, Facebook, and Twitter terminated API access for Geofeedia, the company working with Baltimore police, for violating their terms of service.
While these concerns have grown louder, they aren’t new. In 2014, writing about the government’s collection and use of biometrics, the editors of Scientific American explained, “[T]he potential for misuse is glaringly obvious.” They concluded that the possible privacy violations would make the National Security Agency’s sweeping data collection practices seem superficial by comparison. The ACLU has been covering this issue for years.
Given that the line between commercial and government data has proven to be blurry, it is becoming difficult to separate fears about commercial use from fears about law enforcement use. Moreover, the debate over biometrics has been tied up in broader concerns about how our data is collected and used. From Brussels to Sacramento to Washington, D.C., governments have been trying to regulate data collection and use. Although recent debates over Google’s and Facebook’s user data have grabbed headlines and attention, another battle is quietly brewing over biometric data.
While there are clear, legitimate concerns about the way biometrics may be used, something strange happened as these concerns were translated into public policy. Thus far, most proposals, as well as current legislation, have focused solely on the commercial use of biometrics and do nothing to mitigate the troubling rights violations that could result from governments and law enforcement agencies using biometric data. Moreover, as written, most current policies will all but ensure that biometrics develops primarily as a law enforcement tool, while guaranteeing that the most promising commercial applications never become widely available.
The current state of biometric data regulation
As of 2019, just a handful of states have passed biometric data privacy laws; only Illinois, Texas, and Washington have them currently on the books. Illinois was the first, enacting its Biometric Information Privacy Act in 2008 to target the collection and use of biometrics by any “private entity.” The law explicitly covers how companies like Facebook and Google use biometrics but exempts state and local governments, including law enforcement agencies, from its requirements.
Texas began enforcing a similar biometric law in 2009 but limited its application to commercial purposes only. While this clearly affects how companies use biometrics, it has no bearing on how government agencies and law enforcement go about collecting, storing, and using biometric data.
Washington became the third state to enforce biometric data regulation in 2017, with a law that can be viewed as a hybrid of the Illinois and Texas approaches. It adopts Illinois’ definition of “private entity” as well as Texas’ focus on commercial use. As a result, it covers the same private companies as Texas and Illinois but leaves government agencies free to use biometric data.
In recent years, more states have picked up on this trend. Alaska, Connecticut, Delaware, Florida, Massachusetts, Michigan, Montana, New Hampshire, New York, and Utah have all seen legislative proposals that mirror in part those found in Illinois and Texas.
As we focus more on how our own faces could be used against us, a gap has opened between what we are trying to protect against and what we are actually regulating. To be sure, there are inherent risks in how private companies collect, store, and use our biometric data. The FTC has been at the forefront of that conversation, laying out principles and best practices for common uses of facial recognition technology.
Still, the focus on how private companies use biometrics while explicitly ignoring how governments use the same data seems misplaced. Companies have a huge incentive to protect consumer privacy from misuse. As Clear CEO Caryn Seidman-Becker says of her business, “It’s always about security, it’s always about protecting the privacy.”
This is not to say that the policy debate has been entirely one-sided. Maryland has seen attempts to protect against misuse of facial recognition technology by law enforcement. HB 1148, introduced and withdrawn in 2017, would have established clear procedures for the use of facial recognition and included fines and potential prison time for officers who violated those standards.
This spring, Utah passed a comprehensive data privacy bill that requires law enforcement agencies to obtain a warrant to access electronic data held by a third party, including biometrics. It doesn’t, however, apply to data collected by law enforcement agencies themselves or to other publicly available data (such as the photos used by the New York Times).
What’s at stake?
First, civil rights are clearly implicated by the collection and use of biometrics. Facial recognition technology can be wildly inaccurate. While the software is right more than 99 percent of the time for white men, error rates can run as high as 35 percent for women of color. Amazon’s technology, which is already being used by law enforcement, incorrectly matched 28 members of Congress to criminal mugshots in a test run by the ACLU at Rekognition’s default confidence threshold.
When it comes to decisions about arrests, playing fast and loose with facial recognition technology can go very wrong very quickly.
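A back-of-the-envelope calculation, with admittedly illustrative numbers, shows why even small error rates go wrong quickly at surveillance scale:

```python
# Illustrative arithmetic only: the rates and sizes below are assumptions,
# not measurements of any real system, and real matchers do not make fully
# independent comparisons. The point is the scale, not the exact figure.
FALSE_MATCH_RATE = 0.001   # a per-comparison error rate of just 0.1%
FACES_SCANNED = 2_750      # the number of faces the Times detected in a day
WATCHLIST_SIZE = 10_000    # a hypothetical mugshot database

comparisons = FACES_SCANNED * WATCHLIST_SIZE
print(f"{comparisons * FALSE_MATCH_RATE:,.0f} expected false matches per day")
# -> 27,500 expected false matches per day
```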
The current regulatory approaches taken in Illinois, Texas, and Washington not only ignore this danger; they focus instead on limiting the commercial use of biometrics. That focus encourages the most problematic uses while shutting down the technology’s world-changing potential.
Promising commercial applications
While still in its infancy, biometric technology is proving to be a force for good in the world. This technology has the potential to give sight to the blind, alert distracted and drowsy drivers, and even help those with advanced dementia.
Using biometrics, Microsoft has developed an app called Seeing AI that uses artificial intelligence to describe the world for the visually impaired. It can recognize friends, describe the people around the user, and read others’ emotions, giving users access to nonverbal cues.
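Seeing AI’s internals aren’t public, but the emotion-reading piece can be approximated with Microsoft’s own Cognitive Services Face API. A hedged sketch, assuming an Azure Face resource (the endpoint and key below are placeholders):

```python
# Approximation of per-face emotion reading with Microsoft's Face API;
# this is NOT Seeing AI's actual code. Endpoint and key are placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("<your-key>"),
)

def dominant_emotions(image_path):
    """Return the highest-scoring emotion label for each detected face."""
    with open(image_path, "rb") as image:
        faces = client.face.detect_with_stream(
            image, return_face_attributes=[FaceAttributeType.emotion]
        )
    results = []
    for face in faces:
        scores = face.face_attributes.emotion.as_dict()  # happiness, anger, ...
        results.append(max(scores, key=scores.get))
    return results
```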
Automobile manufacturers are developing biometric sensors that identify drowsy or distracted drivers and send a notification to get their attention. These sensors can also detect a driver’s mood, adjusting the music and ambient lighting and suppressing potentially annoying bells and chimes in hopes of calming an increasingly agitated driver.
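One widely used technique behind such sensors is the eye aspect ratio (EAR) from Soukupová and Čech’s 2016 blink-detection work: eyes that stay nearly closed across many consecutive frames suggest a drowsy driver. This is a generic sketch of the idea, not any manufacturer’s actual system, and the threshold and frame count are tunable assumptions:

```python
# Generic eye-aspect-ratio drowsiness check; thresholds are illustrative.
from math import dist  # Python 3.8+

def eye_aspect_ratio(eye):
    """eye: the standard six (x, y) landmarks around one eye, p1..p6."""
    p1, p2, p3, p4, p5, p6 = eye
    # Vertical eyelid openings divided by the horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

EAR_THRESHOLD = 0.25       # below this, treat the eye as closed (tunable)
CLOSED_FRAMES_ALERT = 48   # ~1.6 s of closed eyes at 30 fps (tunable)
closed_frames = 0

def update(left_eye, right_eye):
    """Feed landmarks each frame; returns True when the driver needs an alert."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALERT
```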
There’s even the potential for robot pets, equipped with biometric sensors to recognize familiar faces and react to emotions, to provide companionship without needing care. This technology is showing promise as a potential complement to current dementia care programs. This could dramatically improve the lives of tens of millions in the future.
There’s no guarantee we will ever see these promising technologies, especially if the battle over biometrics ends with Illinois’ model taking hold in other states. Aibo, Sony’s robot dog, can’t be sold in Illinois because of the state’s biometric data law. But Sony isn’t alone. As Mark Jenkins and I outlined:
Alphabet, the $800 billion parent company of Google, has also decided that reaching consumers in Illinois is not worth the cost. Nest, Alphabet’s smart home company, has turned off its smart camera service there. This product allows homeowners to identify friends or strangers approaching by using facial recognition. Google Art, an app that matched your selfie with great works of art that resembled you, was also unavailable to users in Illinois.
Moreover, companies like Google, Snapchat, and Facebook have been tied up in Illinois courts for years fighting accusations that their products violate the law.
These laws have had a variety of consequences, from stopping Facebook from suggesting tags to preventing robot dogs from recognizing when they’ve made us happy to prohibiting retailers from recognizing customers as soon as they enter the store. They do nothing, however, to stop the kind of mass surveillance happening in China, the very scenario the editors of Scientific American warned us about five years ago.
In fact, they may be encouraging it. Imagine that Illinois one day becomes the model for the 47 other states that haven’t yet passed biometric regulations. The promising commercial applications would be all but banned in the United States, and the only profitable way to build biometric technology would be contracting with government and law enforcement agencies. Companies already working on biometrics would have three choices: (1) pivot into something else, (2) go out of business, or (3) stay in business any way they can. Those choosing option three would have one buyer: law enforcement agencies. In that world, biometric technologies would be developed and shaped almost solely to meet the specific needs of law enforcement, rather than to improve the lives of consumers.
Today, that isn’t the case. Microsoft, for example, has turned down requests to install its technology into a California law enforcement agency’s body and vehicle cameras. The company’s president, Brad Smith, recently explained:
Before we wake up and find that the year 2024 looks like the book “1984,” let’s figure out what kind of world we want to create, and what are the safeguards and what are the limitations of both companies and governments for the use of this technology.
Shifting the focus on biometrics
Much of the debate over biometrics in statehouses across America has focused on limiting the ways companies use biometrics, but that is a high-risk, low-reward approach, especially if the goal is protecting civil rights. After all, these companies aren’t using our biometric data to limit personal freedom or restrict choice. They’re looking to create better products, tailored experiences, and solutions to some of today’s biggest problems.
Every new technology brings new challenges, and it is important to be realistic about both the opportunities and the risks. Taking a myopic approach, or being overly optimistic or pessimistic, rarely serves the technology or those who stand to benefit from it in the long term. It’s time we took a serious look at how we think about biometrics and let facts rather than fears guide our approach to the future.