The idea of law enforcement seeking limits on surveillance technology sounds bizarre. But that's what's happening in one of the world's most-surveilled cities, and it might not be as self-defeating as it first appears.
London's top cop is pushing for artificial intelligence and facial recognition rules, but experts warn rushing those rules could help normalize and expand government surveillance.
"The best way to ensure that the police use new and emerging tech in a way that has the country's support is for the government to bring in an enabling legislative framework that is debated through Parliament, consulted on in public, and which will outline the boundaries for how the police should or should not use tech," said Cressida Dick, commissioner of London's Metropolitan Police Service, Reuters reported.
"Give us the law, and we'll work within it," she added.
If that sounds familiar, it may be because calls for regulation have become a common refrain within the technology industry. Critics of facial recognition technology suggest that Dick, in much the same way, may be trying to get ahead of regulation that now looks increasingly inevitable and could limit use of the technology.
"Many in law enforcement — and in the tech industry — are realizing that for some new technologies, such as face recognition, the alternative to community support and regulation may well be a complete ban," said Farhang Heydari, a New York University law professor and director of the Policing Project. Several communities in the U.S., including San Francisco and Cambridge, have already enacted rules barring law enforcement from using the technology.
So "instead of describing new regulation as an impediment to business and government interests, they appeal to it hoping for two things," Evan Selinger, a philosophy professor at Rochester Institute of Technology, said in an email.
First, police pushing for regulation "gain goodwill for seeming to be in favor of emotionally palatable outcomes," like responsible use. Second, they "steer the regulatory conversation in the direction of conservative change," he said. That could ultimately lead to policies that don't create meaningfully restrictive rules for law enforcement.
Some critics, including Selinger, warn that broader deployment of facial recognition technology by law enforcement could lead to a surveillance state, and argue that its use by police and government should therefore be banned. Regulatory proposals to date, however, largely focus on setting up a framework for responsible use.
The European Union's digital future roadmap last week scrapped a draft section that called for a five-year ban on deploying facial recognition technology in public spaces. Meanwhile, a recent U.S. proposal from Democratic Sens. Cory Booker and Jeff Merkley, described as a moratorium on federal use of facial recognition technology, includes a warrant exception that would essentially create a legal framework for law enforcement use of the systems.
London's Metropolitan Police Service is already using real-time facial recognition, which can identify people from live video feeds. And despite some bans, hundreds of police departments in the United States are also using various forms of the technology.
New York City Police Commissioner James O'Neill defended his agency's use of the technology in a New York Times op-ed last year, describing a variety of internal checks, such as only searching against mugshot data. The department's methods and findings are "subject to examination in court" if a case ever makes it that far, he added.
However, U.S. courts haven't proved to be much of a check on the technology. Just one American court is known to have weighed in on law enforcement use of facial recognition, and it largely sided with police, limiting defendants' right to information about the system used to identify them, The New York Times reported in January.
Although facial recognition technology is widely used in consumer applications, like unlocking phones and tagging people in photos, its use in law enforcement remains controversial. One reason is that studies have shown the technology is generally less accurate when identifying people with darker skin tones — a racial bias that could have huge civil liberties implications when the tech is used for high-risk applications like policing.
Dick rejected those concerns, claiming that the system used by London police, which relies on technology from the Japanese firm NEC, doesn't have ethnic bias. But she admitted it does find it "slightly harder to identify a wanted woman than a wanted man."