Facial recognition systems are flawed and biased. Police departments are using them anyway.
Hello and welcome to Eye on AI. In today’s edition…Police are running wild with flawed facial recognition systems and no one is stopping them; Biden signs another AI executive order; Lawsuit documents show how Meta planned to train models on copyrighted material—and cover it up; and OpenAI trades newsroom funding for content.
In last Thursday’s newsletter, I posed the question: If AI tools are misused, when should we blame the companies building them? Today’s story deals with a situation where we have no one to blame but ourselves. Or more specifically, our institutions.
An investigation published this week in the Washington Post details how police across the U.S. are using flawed AI facial recognition systems to wrongfully arrest and jail Americans. According to the report, 15 police departments across 12 states ignored municipal standards and arrested suspects identified through facial recognition without other evidence. In most cases, they also failed to gather basic information like alibis and ignored contradictory evidence, even DNA and fingerprints that pointed to other suspects. The report features the stories of eight people who have been wrongfully arrested based solely on facial recognition, but notes that the total number of people affected is impossible to know.
This is a case where, yes, the technology fails. We know it does. And that’s why the even bigger failure lies with the police who rely on it anyway, and with the lawmakers and government agencies that let them.
The flaws of facial recognition—and lazy policing

A consistent stream of studies has shown that facial recognition systems have higher error rates for Black and dark-skinned people than for white people. One of the most comprehensive—a 2019 NIST study that evaluated 189 facial recognition algorithms from 99 developers—found that Black and Asian people were up to 100 times more likely to be misidentified by the technology than white men. AI technology has improved over the years, but that’s a significant bias, and there’s no evidence it has been adequately addressed.
Aside from the algorithms themselves, the datasets they’re matched against introduce biases, too. A 2020 ACLU study found that because Black people are more likely than white people to be arrested for minor crimes, their faces are more likely to appear in the mugshot databases used in facial recognition searches, and thus more likely to be scanned and selected by facial recognition algorithms. The study also found that police surveillance cameras are disproportionately installed in Black and brown neighborhoods, further increasing the chances residents there will be wrongfully matched. Of the eight people whose wrongful arrests are detailed in the Post investigation, seven are Black.
The current issues with the use of facial recognition in policing go far beyond these biases, however. The Post found that some law enforcement officers using the technology appeared to “abandon traditional policing standards and treat software suggestions as facts.” In some instances, they referred to uncorroborated AI results as a “100% match” or stated that the software “immediately and unquestionably” identified the perpetrator. Across the eight cases of wrongful arrest, police neglected to check alibis in six, failed to collect key pieces of evidence in five, ignored suspects’ physical characteristics in three, and ignored contradictory evidence in two. They also used facial recognition results to lead witnesses into problematic statements and identifications. In one case, they even ignored DNA evidence and fingerprints that pointed to another suspect.
In many cases, the standards governing how law enforcement officials use facial recognition are minimal—in others, they simply aren’t enforced. Most police departments are not required to report when the technology is used, and few keep records of its use. While many departments bar officers from arresting suspects based solely on facial recognition matches, officers often do so anyway, the Post found. That’s where lawmakers and the government agencies that oversee police come in.
A lack of laws and accountability

Many of the earliest laws addressing AI were city and state measures banning the use of facial recognition in policing. Yet some jurisdictions—including Virginia and New Orleans—have already reversed these bans. In other cases, police officers simply find ways to skirt the laws, such as asking police in other jurisdictions or agencies to run searches for them. They also conceal their use of facial recognition. Even when a facial recognition search leads to an arrest, it’s not presented as evidence in court because police consider it an investigative lead, not proof of guilt. Often, defendants have no idea they were identified by AI because prosecutors are not required to tell them.
Many privacy and civil rights advocates argue that facial recognition should not be used in policing at all. At the very least, there should be stringent laws around how it’s used, such as requiring that its use always be disclosed and that there is sufficient evidence for an arrest even without the facial recognition results. The U.S. has not passed any federal laws regulating AI, let alone any addressing facial recognition. The EU AI Act, enacted last year, was set to ban the use of facial recognition in law enforcement, but a last-minute change permits its use after all, banning it only on live surveillance footage.
Even more importantly, facial recognition is far from the only AI technology police are adopting. As I covered in Eye on AI last year, they’re also using voice analysis algorithms that claim to detect fraud risk based on speech, AI systems that connect every camera in a town into one hub, and tools that purportedly aggregate and analyze data to generate insights, drawing on sources including social media, jail systems, video feeds, and court records. Without laws regulating their use and clear transparency and accountability requirements, we should assume law enforcement will misuse these tools, too.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Biden signs an executive order to lease federal sites for the development of AI infrastructure. Aimed at accelerating the large-scale development of AI infrastructure, the order directs federal agencies to make federal sites available for private sector companies to build data centers and clean energy facilities. It also includes directives for agencies to help speed up permitting and connection to the power grid. Developers will be required to pay for construction and to supply enough clean energy to cover their operations. You can read more from The AP.
Meta seemingly used book piracy site LibGen to train its Llama AI models. Court documents from a major copyright lawsuit against Meta were unsealed, revealing internal messages about the company’s plans to use LibGen and its rush to take the lead over competitors such as OpenAI and Mistral. Like other AI companies, Meta has argued that using copyrighted works to train AI models constitutes fair use. The emails also discussed how the company should avoid externally citing its use of data from the site and steer clear of “media coverage suggesting we have used a dataset we know to be pirated.” You can read more from The Verge. At the time this newsletter went to press, Meta had not yet responded to requests for comment on this story.
OpenAI to fund four new Axios newsrooms as part of a content deal. OpenAI got to select which cities publisher Axios will expand to as part of the three-year deal, choosing Pittsburgh, Pennsylvania; Kansas City, Missouri; Boulder, Colorado; and Huntsville, Alabama. While OpenAI has now struck several content deals with publishers, this is the first that involves directly funding newsrooms in exchange for content it can use to answer queries in ChatGPT. You can read more from Axios.
Jan. 16-18: DLD Conference, Munich
Jan. 20-25: World Economic Forum, Davos, Switzerland
Feb. 10-11: AI Action Summit, Paris, France
March 3-6: MWC, Barcelona
March 7-15: SXSW, Austin
March 10-13: Human [X] conference, Las Vegas
March 17-20: Nvidia GTC, San Jose
April 9-11: Google Cloud Next, Las Vegas
8,010

That’s how many ads Meta ran in just the first two weeks of January for an AI service that turns photos of real women into fake nudes. The ads on Facebook and Instagram account for 90% of total traffic to the service, called Crush AI, according to Alexios Mantzarlis’s Faked Up newsletter. Mantzarlis and 404 Media report that the service is a known bad actor that both have flagged to Meta several times. Meta has blocked the ads, yet they keep popping up.
This is a gross and abusive misuse of AI. It’s bad enough that services like this are allowed to operate (some countries, such as the U.K., have rules against creating non-consensual deepfake porn); it’s worse that Meta is amplifying them. And all of this was happening before Meta’s sweeping content moderation rollbacks, which it announced last week.