Big tech says it wants government to regulate AI. Here’s why.
Tech executives are trying to get ahead of governments in the conversation about regulation of artificial intelligence.
There's been a change in the way technology executives talk about artificial intelligence. The breathy enthusiasm remains, but there's an extra dimension to the conversations: one of rules.
Over the last 12 months, top executives like Google's Sundar Pichai and Microsoft's Brad Smith have been adamant: It's time to regulate AI.
But "any time a company says 'we should be regulated,' we should be justifiably suspicious," says Jack Stilgoe, a senior lecturer at University College London specializing in the governance of emerging technologies. "And we should read into that."
So let's! Here's a look at the suggestions that technology companies are making for the regulation of AI, along with thoughts from people who work in technology policy about what's motivating them — and their take on whether the ideas will help or hinder.
'Technology should not be banned'
"New technology should not be banned or condemned because of its potential misuse. Instead, there should be open, honest and earnest dialogue among all parties involved to ensure that the technology is applied appropriately and is continuously enhanced." — Amazon Web Services' Michael Punke, February 2019
Calls against bans may be an attempt by tech companies to head off knee-jerk reactions from lawmakers, says Jack Clark, policy director at OpenAI, an independent AI research lab.
"The consequence of powerful technologies being developed by the private sector with very little governmental oversight or awareness," says Clark, is an asymmetry in knowledge between big tech and governments. That asymmetry may be fueling nervousness within government about its inability to analyze the technology or understand its implications, and tech companies would rather get ahead of those fears before they harden into a blanket ban.
After all, big tech companies "have a significant market interest" in AI technologies not being banned, says Meredith Whittaker, a co-director of AI Now, a research institute studying the social implications of artificial intelligence. Amazon Web Services, for instance, already makes money from selling its Rekognition facial recognition software to law enforcement and government agencies. (AWS declined to comment on the record for this story.)
'Precision regulation'
"We support targeted policies that would increase the responsibilities for companies to develop and operate trustworthy AI. Given the ubiquity of AI — it touches all of us in our daily lives and work — there will be no one-size-fits-all rules that can properly accommodate the many unique characteristics of every industry making use of this technology and its impact on individuals." — IBM Policy Lab, "Precision Regulation for Artificial Intelligence," January 2020
To an extent this is the logical conclusion of not imposing bans: "Regulate use of technology, not the technology itself," as IBM's outgoing CEO, Ginni Rometty, put it during a panel discussion in Davos last month. Whittaker approves of that sentiment. "Technology is always about how it's used, who's using it," she says. "You cannot regulate technology without taking those into account."
Calls for precision are also a canny way to appeal to lawmakers and regulators: "If you're specific, you're in the scalpel domain," Clark says. "You prefer precision with law."
But this kind of approach could also help technology companies to retain control in regulatory conversations. It's big tech saying, "you can't set the rules unless you understand the technology," Stilgoe says. "It's a way to take regulation out of the democratic domain and put it in the technocratic domain." The alternative: a thornier and more philosophical debate about AI, which technology companies may prefer to avoid.
On a more logistical level, Whittaker wonders whether the technical and regulatory capabilities even exist inside government at the moment for such precise policy rules to be drawn up.
Technology companies, meanwhile, think they can help bring some of that knowledge to the table. "We're not calling for self-regulation. We're calling for co-regulation," says Ryan Hagemann, the co-director of IBM Policy Lab. "We think we have a valuable voice to lend to the conversation."
'A proportionate approach'
"Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities." — Sundar Pichai, January 2020
Pleas for proportionality reinforce the idea that a technology shouldn't be banned before it's used, Stilgoe says, as its creators "will always argue that they don't know what the harms will be." And without knowing what the harms will be, it's effectively impossible to craft a proportionate piece of regulation.
Even if you can craft regulations in advance, there's a perhaps more crucial question, Whittaker says: "Who gets to define benefit and harm?"
"I don't want to live in a world where risks are told to me by originators of the technology," Clark says. "I want to live in a world where risk is reported to people more like weather, by different independent forecasts." Clark supports the idea of governments taking increasing responsibility for monitoring and evaluating AI — something that the National Institute of Standards and Technology has started to do, but so far only for established AI tools like facial recognition.
Google did not respond to a request for comment on this article.
'International alignment'
"The EU and the U.S. are already starting to develop regulatory proposals. International alignment will be critical." — Sundar Pichai, January 2020
"Standards that are globally recognized would help create consistency and certainty for consumers." — IBM Policy Lab, January 2020
"Obviously they want to know the rules they're working to," and where, Stilgoe says. Unified standards make that easier, because companies don't have to develop different products for different markets. It could also make life easier for startups: "If you have the same regulatory template, I'm sure it also enables smaller companies to operate in broader areas," Clark says.
A danger is that by pushing for international agreement, regulation gets based on the lowest common denominator. "It becomes a race to the bottom," Stilgoe says. Whittaker suggests that companies could use this point to argue that other countries, such as China, have less stringent regulation, and that stricter rules in the U.S. could limit domestic innovation. She says that would be a bad idea: "We should be raising the bar, setting a high-water mark."
It's currently unclear whether AI regulation really would play out as a race to the bottom: 42 countries last year adopted a set of principles on artificial intelligence created by the Organisation for Economic Co-operation and Development. They weren't legally binding, but they may still hint at a desire for unity at an international level.
'We should not wait'
"We should not wait for the technology to mature before we start to put principles, and ethics, and even rules in place to govern" AI. — Microsoft's Brad Smith at Davos, January 2020
Smith himself alluded to the motivation behind big tech's need for speed during another Davos event: "The continued health of the technology sector and technology markets actually now turns in part on sustaining public confidence and trust."
"The optimistic version is that there is a reputational benefit that companies can get from being ahead of the curve," Stilgoe says. "The pessimistic account would be that they are in reputational trouble and that they need to take proactive measures." Microsoft did not respond to a request for comment.
Either way, Whittaker agrees that urgency is required. "We need regulation ASAP," she says.
But "there's only a point in running a race if you know where the finish line is," Stilgoe says. And the big question that lawmakers and regulators will need to consider as they mull big tech's thinking on AI regulation is: Do we as a society share the same ideas as tech companies about the direction we're running in?