Andrea Peterson


5 takeaways from the EU's new AI and data regulation plans

Europe wants in on AI — at least the good parts.

European Commission President Ursula von der Leyen wants Europe to have the capability "to make its own choices, based on its own values, respecting its own rules."

Photo: Thierry Monasse/Getty Images

The European Union has new ideas about how it could try to keep up with America and China on AI — and it could shape global thinking on how the technology is regulated in the process.

The European Commission published a suite of proposals for Europe's digital future on Wednesday, including a new data strategy and a white paper on artificial intelligence. They describe, among other things, proposed regulations of cutting-edge uses of AI, and the building out of a unified European data market.

The overall theme, as laid out in an op-ed by European Commission President Ursula von der Leyen, is to give Europe the capability "to make its own choices, based on its own values, respecting its own rules" on AI — part of what she describes as "tech sovereignty."


But Europe could in the process make a global impact.

"The general ambition is that the EU takes the lead on regulation of AI — exactly as it did in the field of personal data protection," said Théodore Christakis, a law professor at Université Grenoble-Alpes who focuses on AI. "Regulation of artificial intelligence is seen as a kind of 'the next GDPR,'" he added.

Von der Leyen made the ambitious promise to initiate AI legislation within 100 days of taking office. That's not exactly what we have today. Instead, the AI white paper is a set of proposed approaches that could change substantially before becoming law.

"It's not legally binding," Christakis said. "It is presented as a kind of roadmap for rules that should be adopted in the coming years." The first actual regulation, he said, could be put in place by the end of 2020 at the earliest.

But what do all these proposals actually mean? There's a lot to dig through, so here are five of the most important takeaways.

Europe knows AI is the future, and wants in. Somehow.

The EU sees a huge opportunity in AI: to improve people's lives, by boosting efficiency in areas like health care, agriculture and technology, and to drive economic growth. But it knows that tech giants, mostly American, are already ahead in the race, and it needs to find ways to drive investment to catch up.

The white paper suggests a way in: focusing on industrial, business, and public sector data that will end up being stored and processed on devices at the edge of the network, rather than in the cloud.

"This opens up new opportunities for Europe, which has a strong position in digitized industry and business-to-business applications, but a relatively weak position in consumer platforms," the AI white paper notes.

That may not be enough.

"The problem is, not all, but much of data innovation, is driven by personal data — and most data on some level is personal data," said Sue Aaronson, a professor at The George Washington University and director of its Data Governance Hub. So it's unclear how the EU will navigate this terrain at the same time as complying with its own strict General Data Protection Regulation.

Europe also wants to protect itself from harmful AI.

The strongest regulatory idea in the proposal is the creation of a mandatory testing system for AI applications that the EU considers high risk, especially those with significant human rights implications, such as government use of facial recognition or predictive policing algorithms.

"Basically the Commission is inspired here by conformity assessment mechanisms that already exist for a large number of products being placed on the EU's internal market" like cars and chemicals, Christakis said.

The proposal lays out two key criteria for what it considers a "high risk" use of AI. First, is it being deployed in a sector where there could be significant risks, like health, energy or transportation? Second, is it a system that could actually affect safety? Meanwhile, it suggests a lighter touch for non-"high-risk" data uses, to avoid hindering innovation.

Early indications suggest that the industry welcomes this idea. "By focusing on precision regulation — applying different rules for different levels of risk — Europe can ensure its businesses and consumers have trust in technology," Christopher Padilla, vice president for government and regulatory affairs at IBM, said in a statement.

That facial recognition ban? It's scrapped.

A draft of the AI paper that leaked in January suggested a temporary multiyear ban on facial recognition technologies in public places, to give governments time to figure out how to use them safely and ethically.

That section didn't end up in the final documents. Instead, the version released Wednesday says that the "Commission will launch a broad European debate on the specific circumstances, if any, which might justify" the deployment of facial recognition technology for things like identifying people in public places.

That may disappoint privacy and civil liberties advocates who hoped to see a more aggressive stance. But others, including Christakis, see it as a sign that the EU is planning to take regulation in this area seriously, rather than scrambling to roll out policies to meet political deadlines.

Europe hopes data can fuel its own big tech competitors.

Changing gears to consider the data strategy, the Commission isn't coy about the status quo, acknowledging that just "a small number of big tech firms hold a large part of the world's data" right now. And that's a problem for Europe, it says, because it "could reduce the incentives for data-driven businesses to emerge, grow and innovate in the EU today."

But the strategy argues that "the winners of today will not necessarily be the winners of tomorrow" and lays out plans to try to increase local investment and foster the development of local competitors.

That includes incentivizing data sharing between European businesses with an emphasis on the industrial and business data that doesn't run afoul of the EU's strong privacy protections. But chief among those plans is the creation of "a single European data space — a genuine single market for data" that is open to data from anywhere, but governed by "European rules and values" including strong personal and consumer data protections.

Promoting sharing of this data within a structured marketplace may also promote transparency that could help Europe more effectively regulate data. Often, governments and researchers "just don't know what information [tech] firms have — they don't share with us, which makes it really hard to regulate," Aaronson said.

None of this is certain — but it could affect global policy all the same.

It's worth remembering that this is all still very much up in the air. The AI white paper is open for comment until May 19, and the Commission is also accepting feedback on the data strategy. Stakeholders — including the big tech firms that no doubt have strong opinions about the proposals — will waste little time in weighing in.

"In reality, lobbying has already started," Christakis said.

For tech leaders, especially those at big tech firms who have pushed their vision of the future to get ahead of AI regulation, this could represent a crucial moment. Currently there is a dearth of movement on these issues in the U.S., and Europe has a history of taking a lead in regulating technology — most recently with GDPR.

"The U.S. is not regulating effectively because we don't want to hamper" domestic tech giants, Aaronson said. But Europe seems ready to dive in, she added — and that could mean the rest of the world soon follows.


Expensify CEO David Barrett: ‘Most CEOs are not bad people, they're just cowards’

"Remember that one time when we almost had civil war? What did you do about it?"

Expensify CEO David Barrett has thoughts on what it means for tech CEOs to claim they act apolitically.

Photo: Expensify

The Trump presidency ends tomorrow. It's a political change in which Expensify founder and CEO David Barrett played a brief, but explosive role.

Barrett became famous last fall — or infamous, depending on whom you ask — for sending an email to the fintech startup's clients, urging them to reject Trump and support President-elect Joe Biden.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Signal at (510) 731-8429.


Silicon Valley is cracking down on Congress

Big Tech's pause on PAC contributions highlights how powerful it's become.

Democrats are particularly frustrated by Facebook, Google and Microsoft's decision to halt PAC contributions altogether, rather than targeting particular Republican lawmakers.

Photo: Tobias Hase/Getty Images

Congress has failed to act on every opportunity it had to seriously rein in the power of Big Tech over the last several years. Negotiations over a federal privacy bill fell apart last year, antitrust reform hit partisan headwinds and every debate over content moderation since 2016 has devolved into a theatrical yelling match that left the parties more divided over solutions than ever.

And now, the bigger-than-ever Silicon Valley is flexing its muscles with impunity as companies cut off violent extremists and wield the power of their political donations, acting more like a government than the U.S. government itself. They're leaving Republicans and Democrats more frustrated and powerless than ever in their wake.

Emily Birnbaum

Emily Birnbaum (@birnbaum_e) is a tech policy reporter with Protocol. Her coverage focuses on the U.S. government's attempts to regulate one of the most powerful industries in the world, with a focus on antitrust, privacy and politics. Previously, she worked as a tech policy reporter with The Hill after spending several months as a breaking news reporter. She is a Bethesda, Maryland native and proud Kenyon College alumna.

Trump wants to spend his final week as president getting back at Twitter and Facebook for suspending him.

Photo: Oliver Contreras/Getty Images

President Trump has been telling anyone who will listen that he wants to do something to strike back at Big Tech in the final days of his presidency, promising a "big announcement" soon after Twitter permanently banned him last week.

In a statement that Twitter has taken down multiple times, Trump hammered usual targets — Section 230, the "Radical Left" controlling the world's largest tech platforms — and pledged he would not be "SILENCED." But at this point, as he faces a second impeachment and a Republican establishment revolting against him in the waning days of his presidency, there's likely very little that Trump can actually do that would inflict long-lasting damage on tech companies.

Emily Birnbaum

We need Section 230 now more than ever

For those who want to see less of the kind of content that led to the storming of the Capitol, Section 230 may be unsatisfying, but it's the most the Constitution will permit.

Even if certain forms of awful speech could be made unlawful, requiring tech sites to clean it up would be even more constitutionally difficult.

Photo: Angel Xavier Viera-Vargas

Many conservatives are outraged that Twitter has banned President Trump, calling it "censorship" and solemnly invoking the First Amendment. In fact, the First Amendment gives Twitter an absolute right to ban Trump — just as it protects Simon & Schuster's right not to publish Sen. Josh Hawley's planned book, "The Tyranny of Big Tech."

The law here is clear. In 1974, the Supreme Court said newspapers can't be forced to carry specific content in the name of "fairness," despite the alleged consolidation of "the power to inform the American people and shape public opinion." The Court had upheld such Fairness Doctrine mandates for broadcasters in 1969 only because the government licenses use of publicly owned airwaves. But since 1997, the Court has held that digital media enjoys the same complete protection of the First Amendment as newspapers. "And whatever the challenges of applying the Constitution to ever-advancing technology," wrote Justice Antonin Scalia in 2011, "'the basic principles of freedom of speech and the press, like the First Amendment's command, do not vary' when a new and different medium for communication appears."

Berin Szóka

Berin Szóka (@BerinSzoka) is president of TechFreedom (@TechFreedom), a technology policy think tank in Washington, DC.


In 2020, Big Tech reckoned with racial injustice. Its work is far from over.

From Facebook's walkouts to Amazon's facial recognition moratorium, did any of it make a difference?

Racial injustice issues engulfed the U.S. this year, and Big Tech wasn't spared.

Photo: Mark Makela/Getty Images

The movement for Black lives marched straight into the heart of some of Silicon Valley's most powerful companies this summer, with Facebook employees staging a virtual walkout over the company's policies, Pinterest employees speaking out about racism and retaliation they experienced and a parade of tech giants making heartfelt commitments to diversity and supporting POC-focused causes.

But as the year comes to a close, one of those giants, Google, is facing an uproar over the firing of Timnit Gebru, one of its top AI ethicists, after she wrote an internal message to fellow Googlers that criticized biases within the company and within its AI technology. It's a scandal Gebru's supporters argue is emblematic of the costs Black people in the tech industry bear for speaking out on issues related to discrimination every day.

Issie Lapowsky
Issie Lapowsky (@issielapowsky) is a senior reporter at Protocol, covering the intersection of technology, politics, and national affairs. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University's Center for Publishing on how tech giants have affected publishing.