In a letter to Congress this week, IBM's CEO Arvind Krishna said the company is no longer offering facial recognition tools and called on Congress to regulate the use of the controversial technology by police, as part of a list of proposals to tackle racial injustice in America.
The news, which came amid widespread protests against police brutality, earned IBM praise and calls for other tech giants, including Amazon, to do the same. But for all of IBM's recommendations — including investing in education and holding police accountable for misconduct — the company stopped short of cutting its many other ties with police departments. That includes the sale of artificial intelligence tools that enable police to predict criminal activity, a practice researchers and advocates say can entrench and exacerbate racial bias in policing.
In his letter, Krishna wrote that the company opposes the use of "any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values."
But IBM spokesperson Adam Pratt declined to comment on whether that will have any impact on the company's predictive policing work, saying the letter was clear on what IBM supports and opposes. Pratt instead suggested that Protocol write about how IBM is "stepping up with some of the most detailed policy recommendations we've seen yet from the business community to advance racial justice for the black community and respond to this critical moment in our nation's history."
On its website, IBM touts its policing work, including case studies from Rochester, New York; Miami-Dade County, Florida; Durham, North Carolina; and Manchester, New Hampshire. In Miami-Dade County, IBM said it developed technology to help the police "identify crime hot spots" and "model what kind of suspect typically commits a particular crime and then generate and filter a suspect list to help solve cases faster." In Rochester, IBM said its technology gives police making traffic stops insight into not just who owns a given car, but who might be riding in it, painting a "comprehensive picture of an individual, their associates and their activity."
Experts say the problem with making crime or threat predictions based on historical policing data is that the data itself reflects the disproportionate presence of police in black and brown people's lives. In a 2019 paper, Rashida Richardson and her colleagues at New York University's AI Now Institute called this "dirty data": "data produced during documented periods of flawed, racially biased, and sometimes unlawful practices and policies." That can include data on calls for service in a given area or even arrest and stop data that omits whether a person was ever convicted of a crime.
"In policing, the data is going to reflect its environment," Richardson said. "If you have a culture of policing that has racial biases evident in its practices and outcomes and a culture that seems to have a complete disregard for entire parts of the population, that's going to be reflected in the data."
Some police departments have taken it upon themselves to terminate predictive policing programs. Last year, the Los Angeles Police Department shut down a controversial program known as LASER, which used technology developed by Palantir to compile a watch list for police. An internal audit found that some 84% of people considered "active chronic offenders" under LASER were black or Latino, and that nearly half of the supposed chronic offenders had never been arrested for a violent crime or had been arrested only once.
Richardson called IBM's letter a "PR move," in part because it doesn't grapple with the other ways the company's AI tools are used in policing, and in part because IBM wasn't the biggest player offering facial recognition technology to law enforcement to begin with. "It's easy to end selling a product that wasn't one of your major profit-drivers," Richardson said, "but it's also harder to take their move seriously when they didn't seem to comment on and don't seem to have an interest in divesting from predictive policing."
In his letter, Krishna wrote that both vendors and users of AI systems in policing need to test their technology for racial bias and have those tests audited. This, coupled with IBM's stance on facial recognition, is an important first step, said Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, which opposes police surveillance. But, he said, it's only a first step.
"I think the same moral calculus that required IBM to ditch biased and broken facial recognition software will lead them to eventually walk away from the other ways that artificial intelligence can exacerbate police violence and civil rights abuses," Cahn said. "I don't think this is the end of this debate, but I do think it's a lead other tech companies need to follow."