5 takeaways from the EU's new AI and data regulation plans
Europe wants in on AI — at least the good parts.
The European Union has new ideas about how it could try to keep up with America and China on AI — and it could shape global thinking on how the technology is regulated in the process.
The European Commission published a suite of proposals for Europe's digital future on Wednesday, including a new data strategy and a white paper on artificial intelligence. They describe, among other things, proposed regulations of cutting-edge uses of AI, and the building out of a unified European data market.
The overall theme, as laid out in an op-ed by European Commission President Ursula von der Leyen, is to give Europe the capability "to make its own choices, based on its own values, respecting its own rules" on AI — part of what she describes as "tech sovereignty."
But Europe could in the process make a global impact.
"The general ambition is that the EU takes the lead on regulation of AI — exactly as it did in the field of personal data protection," said Théodore Christakis, a law professor at Université Grenoble-Alpes who focuses on AI. "Regulation of artificial intelligence is seen as a kind of 'the next GDPR,'" he added.
Von der Leyen made the ambitious promise to initiate AI legislation within 100 days of taking office. That's not exactly what we have today. Instead, the AI white paper is a set of proposed approaches that could change substantially before becoming law.
"It's not legally binding," Christakis said. "It is presented as a kind of roadmap for rules that should be adopted in the coming years." The first actual regulation, he said, could be put in place by the end of 2020 at the earliest.
But what do all these proposals actually mean? There's a lot to dig through, so here are five of the most important takeaways.
The EU sees a huge opportunity in AI to benefit people's lives, through improving efficiency in areas like health care, agriculture, and technology, and to drive economic growth. But it knows that tech giants, mostly American, are already ahead in the race, and it needs to find ways to drive investment to catch up.
The white paper suggests a way in: focusing on industrial, business, and public sector data that will end up being stored and processed on devices at the edge of the network, rather than in the cloud.
"This opens up new opportunities for Europe, which has a strong position in digitized industry and business-to-business applications, but a relatively weak position in consumer platforms," the AI white paper notes.
That may not be enough.
"The problem is, not all, but much of data innovation is driven by personal data — and most data on some level is personal data," said Sue Aaronson, a professor at The George Washington University and director of its Data Governance Hub. So it's unclear how the EU will navigate this terrain while complying with its own strict General Data Protection Regulation.
The strongest regulatory idea in the proposal is the creation of a mandatory testing system for AI applications that the EU considers high risk, especially those with significant human rights implications, such as government use of facial recognition or predictive policing algorithms.
"Basically the Commission is inspired here by conformity assessment mechanisms that already exist for a large number of products being placed on the EU's internal market" like cars and chemicals, Christakis said.
The proposal lays out two key criteria for what it considers a "high risk" use of AI. First, is it being deployed in a sector where there could be significant risks, like health, energy or transportation? Second, is it a system that could actually affect safety? Meanwhile, it suggests a lighter touch for uses that aren't high-risk, to avoid hindering innovation.
Early indications suggest that the industry welcomes this idea. "By focusing on precision regulation — applying different rules for different levels of risk — Europe can ensure its businesses and consumers have trust in technology," Christopher Padilla, vice president for government and regulatory affairs at IBM, said in a statement.
A draft of the AI paper that leaked in January suggested a temporary multiyear ban on facial recognition technologies in public places, to give governments time to figure out how to use it safely and ethically.
That section didn't end up in the final documents. Instead, the version released Wednesday says that the "Commission will launch a broad European debate on the specific circumstances, if any, which might justify" the deployment of facial recognition technology for things like identifying people in public places.
That may disappoint privacy and civil liberties advocates who hoped to see a more aggressive stance. But others, including Christakis, see it as a sign that the EU is planning to take regulation in this area seriously, rather than scrambling to roll out policies to meet political deadlines.
Changing gears to consider the data strategy, the Commission isn't coy about the status quo, acknowledging that just "a small number of big tech firms hold a large part of the world's data" right now. And that's a problem for Europe, it says, because it "could reduce the incentives for data-driven businesses to emerge, grow and innovate in the EU today."
But the strategy argues that "the winners of today will not necessarily be the winners of tomorrow" and lays out plans to try to increase local investment and foster the development of local competitors.
That includes incentivizing data sharing between European businesses with an emphasis on the industrial and business data that doesn't run afoul of the EU's strong privacy protections. But chief among those plans is the creation of "a single European data space — a genuine single market for data" that is open to data from anywhere, but governed by "European rules and values" including strong personal and consumer data protections.
Promoting sharing of this data within a structured marketplace may also promote transparency that could help Europe more effectively regulate data. Often, governments and researchers "just don't know what information [tech] firms have — they don't share with us, which makes it really hard to regulate," Aaronson said.
It's worth remembering that this is all still very much up in the air. The AI white paper is open for comment until May 19, and the Commission is also accepting feedback on the data strategy. Stakeholders — including the big tech firms that no doubt have strong opinions about the proposals — will waste little time in weighing in.
"In reality, lobbying has already started," Christakis said.
For tech leaders, especially those at big tech firms who have pushed their vision of the future to get ahead of AI regulation, this could represent a crucial moment. Currently there is a dearth of movement on these issues in the U.S., and Europe has a history of taking a lead in regulating technology — most recently with GDPR.
"The U.S. is not regulating effectively because we don't want to hamper" domestic tech giants, Aaronson said. But Europe seems ready to dive in, she added — and that could mean the rest of the world soon follows.
Andrea Peterson (@kansasalps) is an independent journalist with extensive experience reporting on technology policy. Peterson was a staff writer for The Washington Post from 2013 through the end of 2016. Her work has also appeared in POLITICO, Ars Technica, The Daily Beast, Slate and other outlets.