
Apple just reignited the encryption fight. Everyone should have seen it coming.

Privacy advocates revived their worries about encryption over Apple's new child safety features, but questions about the most dangerous content in the U.S. and abroad were already shifting the thinking on how to balance security and digital harm.

An iPhone 11 on display during the Apple iPhone 11 launch at the tech giant's flagship store in Regent Street, central London.

Measures to protect kids on iPhones drew concern from privacy and security advocates.

Photo: Jonathan Brady/PA Images via Getty Images

Apple's announcement that it would take new measures to protect children on iMessage and in users' photo accounts has generated worry in the tech policy community that the phone-maker has abandoned its commitment to privacy following years of pressure from governments worldwide.

Virtually everyone wants to protect children. But in recent years, questions about how to handle the spread of controversial content, from dangerous COVID-19 misinformation to political dissent that repressive regimes want to silence, have become broader and more politically fraught both in the U.S. and around the world.

The struggle over how to balance privacy and security is a long-running tension in the U.S. The most recent round began last Thursday, when Apple said it would allow parents to receive warnings when children under 13 send or receive explicit images, and that it would scan photo uploads to iCloud for known child sexual abuse material (CSAM). Apple said its goal was "to help protect children from predators who use communication tools to recruit and exploit them." Privacy, security and free speech advocates countered that, while CSAM is the most horrific of online content, compromising robust encryption means compromising the kind of information it protects, from health and financial data to the identities of LGBTQ+ youth and pro-democracy activists all over the world.
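Conceptually, this kind of scan does not inspect image content directly: it compares a fingerprint of each uploaded photo against a list of fingerprints of already-known CSAM supplied by child safety organizations. A minimal sketch of that matching step is below. It uses SHA-256 purely as an illustrative stand-in; real systems, including Apple's, use perceptual hashes that survive resizing and re-encoding, and the hash value, names and threshold-free logic here are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal images, of the kind
# maintained by clearinghouses such as NCMEC. This entry is simply the
# SHA-256 digest of the bytes b"test", used only for demonstration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(data: bytes) -> str:
    """Return a hex digest fingerprinting the image bytes.

    A production system would use a perceptual hash here, so that trivially
    altered copies of a known image still match.
    """
    return hashlib.sha256(data).hexdigest()

def flag_upload(data: bytes) -> bool:
    """Flag an upload only if its fingerprint matches a known entry.

    Note the key property advocates debate: content that is NOT already in
    the database (ordinary family photos, new material) produces no match
    and reveals nothing.
    """
    return hash_image(data) in KNOWN_HASHES

print(flag_upload(b"test"))                # True: matches the entry above
print(flag_upload(b"benign photo bytes"))  # False: no match, nothing learned
```

The design choice being argued over is what sits around this matching step: who controls the hash list, and whether running the check on the user's own device is compatible with calling the service end-to-end encrypted.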

The tension has so far largely, if uneasily, been resolved in favor of end-to-end encryption. Security and privacy prevail partly because of concerns that governments could use safety measures to chill all kinds of speech, and partly because the First and Fourth Amendments set a high bar for restricting speech and conducting searches in the U.S., even when those liberties mean some wrongdoers go uncaught. Such concerns, for instance, gave Apple political cover when it resisted unlocking a dead terrorist's iPhone after the 2015 San Bernardino attack, and helped slow measures like the bipartisan EARN IT Act, which could have exposed companies to liability for CSAM if they employed encryption.

The tension has only deepened in 2021. Concerns about dangerous posts, pictures and videos have grown as the traditional emphasis on preventing child abuse imagery, terrorism and other crime has expanded to take in election disinformation, lies about COVID-19 vaccines and treatments, violent racism and other harmful content.

Invisible content

"The content questions are already having a moment at the same time that we're seeing encryption come back into the fore," said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology, which was highly critical of Apple's moves.

In the U.S., some of those questions are being driven by the Biden administration, which has been critical of online misinformation about COVID-19, its treatments and the efficacy of masks and vaccines. Two Democratic senators also recently led an effort to limit a liability shield for online platforms whose algorithms boost certain kinds of health misinformation.

The concerns don't come only from the Democratic side. Former President Trump, during his tenure, tried repeatedly to stop online criticism of himself, and members of Congress from both parties have introduced bills and issued statements regarding online terrorism, civil rights violations, political misinformation, drug sales, counterfeits and more. Alongside the bipartisan concerns are perennial allegations by Republicans that online platforms censor conservative speech. On top of all that, social media platforms themselves have extensive community standards that limit or ban many kinds of speech, even when it's legal.

Government officials from both parties and child safety advocates all over the world have also been arguing in recent years that massively popular end-to-end encrypted services such as Apple's iMessage and Facebook-owned WhatsApp allow the most horrific content to spread rapidly with little ability for the public or law enforcement to spot, trace or correct it.

"This puts our citizens and societies at risk by severely eroding a company's ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries' attempts to undermine democratic values," top government officials from the U.S., U.K. and Australia wrote in an October 2019 letter to Mark Zuckerberg, criticizing his plans to bring end-to-end encryption to Facebook Messenger.

The letter said a switch to full encryption on all of Facebook's platforms would imperil many of the 16.8 million reports that the company made to the National Center for Missing & Exploited Children in 2018 — even as the children's group said that tips it received on potential CSAM had grown more than fourfold from 2015 to 2018. Its data also suggested child abuse material circulating online was becoming more graphic and explicit.

At the time, the heads of Messenger and WhatsApp responded to the government officials that "the 'backdoor' access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes."

Last week, WhatsApp chief Will Cathcart again weighed in on Apple's latest changes, assuring users that WhatsApp wouldn't adopt a similar system.

"We've had personal computers for decades and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content," Cathcart said in a tweet thread. "It's not how technology built in free countries works."

The world watches

Apple and Facebook have for several months been locked in a battle over the way iOS privacy changes have affected Facebook's advertising business. But Cathcart's invocation of the global scope of Apple's measures appears to reflect how governments around the world have also expressed worries about what their users are sharing.

The U.K. government is pushing to have more platforms take responsibility for illegal content and other online harms, which many civil liberties advocates fear could force companies to weaken encryption. The European Union wants platforms, especially the largest ones, to do more to rein in illegal content and protect children online.

Some nations appear to have more repressive goals in mind, as in India's recent clashes with social media over content critical of the ruling party; the country's new rules could require the tracing of messages. Digital rights advocates have also raised concerns about laws and proposals in China, Singapore, Russia, Turkey and elsewhere, some of them major markets whose governments appear eager to use that economic importance to technology companies to try to shut down citizen dissent.

Although Apple said it would "refuse" any demands to look for images beyond CSAM, some tech-watchers also suggested that the company may have taken its new measures to get ahead of regulations dealing with illegal content, especially those coming from liberal democracies.

"They're looking at regulatory pressure here [in the U.S.] and in Brussels and the U.K. and Australia, and they're seeing that the gig is up," said Hany Farid, a professor at the University of California, Berkeley, and an expert on digital forensics.

Calling Apple's measures the "bare minimum," Farid added, "Maybe they looked at the landscape and thought, 'OK, this is going to be bad for us. Let's get out ahead.'"

Farid compared Apple's moves to the scans for CSAM or viruses that many cloud, email and messaging providers already perform — albeit not usually on the user's own device or on platforms that companies brand as encrypted end-to-end. But, he said, Apple's changes represent a corporate recognition that many would like to see online safety and privacy weighed differently against one another than they have been.

"We all want privacy, but we also recognize online harms," Farid said. "The trick is, how do you find a balance?"
