Apple's announcement that it would take new measures to protect children on iMessage and in users' photo accounts has generated worry in the tech policy community that the phone-maker has abandoned its commitment to privacy following years of pressure from governments worldwide.
Nearly everyone wants to protect children. But in recent years, questions about how to handle the spread of controversial content, from dangerous COVID-19 misinformation to political dissent that repressive regimes want to silence, have grown broader and more politically fraught, both in the U.S. and around the world.
The struggle to balance privacy and security is a long-running one in the U.S. The most recent round began last Thursday, when Apple said it would let parents receive warnings when kids under 13 send or receive explicit images, and that it would scan photo uploads to iCloud for known child sexual abuse material (CSAM). Apple said its goal was "to help protect children from predators who use communication tools to recruit and exploit them." Privacy, security and free speech advocates countered that, while CSAM is the most abhorrent of online content, compromising robust encryption means compromising the kind of information it protects, from health and financial data to the identities of LGBTQ+ youth and pro-democracy activists all over the world.
So far, the tension has largely, if uneasily, resolved in favor of end-to-end encryption. Security and privacy have prevailed in part because of concerns that governments could use safety measures to chill all kinds of speech, and in part because the First and Fourth Amendments set a high bar for restricting speech and conducting searches in the U.S., even when those liberties mean some wrongdoers go uncaught. Such concerns, for instance, gave Apple political cover when it resisted unlocking a dead terrorist's iPhone following the 2015 San Bernardino attack, and helped slow measures like the bipartisan EARN IT Act, which could have exposed companies that employed encryption to liability for CSAM.
The tension has deepened into 2021. Concerns about dangerous posts, pictures and videos only seem to have grown as the traditional emphasis on preventing child abuse imagery, terrorism and other crime has expanded to concerns about election disinformation, lies about COVID-19 vaccines and treatments, violent racism and other instances of harmful content.
Invisible content
"The content questions are already having a moment at the same time that we're seeing encryption come back into the fore," said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology, which was highly critical of Apple's moves.
In the U.S., some of those questions are being driven by the Biden administration, which has been critical of online misinformation about COVID-19, its treatments and the efficacy of masks and vaccines. Two Democratic senators also recently led an effort to limit a liability shield for online platforms when their algorithms boost certain kinds of health misinformation.
The concerns don't come only from the Democratic side. Former President Trump tried repeatedly during his tenure to stop online criticism of himself, and members of Congress from both parties have introduced bills and issued statements addressing online terrorism, civil rights violations, political misinformation, drug sales, counterfeits and more. Amid the bipartisan concerns are perennial allegations by Republicans that online platforms censor conservative speech. And on top of all that, social media platforms themselves maintain extensive community standards that limit or ban many kinds of speech, even when it's legal.
Government officials from both parties and child safety advocates all over the world have also been arguing in recent years that massively popular end-to-end encrypted services such as Apple's iMessage and Facebook-owned WhatsApp allow the most horrific content to spread rapidly with little ability for the public or law enforcement to spot, trace or correct it.
"This puts our citizens and societies at risk by severely eroding a company's ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries' attempts to undermine democratic values," top government officials from the U.S., U.K. and Australia wrote in a October 2019 letter to Mark Zuckerberg, criticizing his plans to bring end-to-end encryption to Facebook Messenger.
The letter said a switch to full encryption on all of Facebook's platforms would imperil many of the 16.8 million reports that the company made to the National Center for Missing & Exploited Children in 2018 — even as the children's group said that tips it received on potential CSAM had grown more than fourfold from 2015 to 2018. Its data also suggested child abuse material circulating online was becoming more graphic and explicit.
At the time, the heads of Messenger and WhatsApp responded to the government officials that "the 'backdoor' access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes."
Last week, WhatsApp chief Will Cathcart weighed in again on Apple's latest changes, assuring users that WhatsApp wouldn't adopt a similar system.
"We've had personal computers for decades and there has never been a mandate to scan the private content of all desktops, laptops or phones globally for unlawful content," Cathcart said in a tweet thread. "It's not how technology built in free countries works."
The world watches
Apple and Facebook have for several months been locked in a battle over the way iOS privacy changes have affected Facebook's advertising business. But Cathcart's invocation of the global scope of Apple's measures appears to reflect how governments around the world have also expressed worries about what their users are sharing.
The U.K. government is pushing to have more platforms take responsibility for illegal content and other online harms, which many civil liberties advocates fear could force companies to weaken encryption. The European Union wants platforms, especially the largest ones, to do more to rein in illegal content and protect children online.
Some nations appear to have more repressive goals in mind, such as India, where recent clashes with social media companies over content critical of the ruling party led to new rules that could require the tracing of messages. Digital rights advocates have also raised concerns about laws and proposals in China, Singapore, Russia, Turkey and elsewhere, some of them major technology markets with governments eager to use that economic importance to shut down citizen dissent.
Although Apple said it would "refuse" any demands to look for images beyond CSAM, some tech-watchers also suggested that the company may have taken its new measures to get ahead of regulations dealing with illegal content, especially those coming from liberal democracies.
"They're looking at regulatory pressure here [in the U.S.] and in Brussels and the U.K. and Australia, and they're seeing that the gig is up," said Hany Farid, a professor at the University of California, Berkeley, and an expert on digital forensics.
Calling Apple's measures the "bare minimum," Farid added, "Maybe they looked at the landscape and thought, 'OK, this is going to be bad for us. Let's get out ahead.'"
Farid compared Apple's moves to the scans for CSAM or viruses that many cloud, email and messaging providers already perform — albeit not usually on the user's own device or on platforms that companies brand as encrypted end-to-end. But, he said, Apple's changes represent a corporate recognition that many would like to see online safety and privacy weighed differently against one another than they have been.
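Hash-based matching of that kind is conceptually simple. Here is a minimal Python sketch of the basic idea, for illustration only, with a hypothetical hash list and function name; note that real systems such as Microsoft's PhotoDNA or Apple's NeuralHash rely on perceptual hashes, which can match an image even after resizing or re-encoding, rather than the cryptographic hash used below.

    import hashlib

    # Hypothetical list of digests of known illegal images, of the kind a
    # provider or a clearinghouse such as NCMEC might maintain. The entries
    # here are placeholders, not real digests.
    KNOWN_IMAGE_HASHES = {
        "placeholder-digest-1",
        "placeholder-digest-2",
    }

    def matches_known_image(image_bytes: bytes) -> bool:
        """Return True if this image's digest appears on the known-hash list."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_IMAGE_HASHES

What set Apple's design apart, and drew much of the criticism, is where that check runs: on the user's own device before upload, rather than on the provider's servers.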
"We all want privacy, but we also recognize online harms," Farid said. "The trick is, how do you find a balance?"