Policy

In Apple’s privacy vs. child safety debate, there’s no ‘perfect solution’

At a time when child sexual abuse material online is spiking, how do opponents of Apple's child safety features calculate risk, and more importantly, what alternatives do they propose?

Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology

Photo: New America via Flickr

Apple executives had to have known what they were getting themselves into when they announced their new child safety features earlier this month. It wasn't so long ago that Apple itself argued in its battle with the FBI that building a backdoor into its devices would make "users' most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance."

Now, privacy advocates and civil liberties groups are making much the same argument about a series of changes Apple will soon introduce to its operating systems in the U.S.

The first update will scan iMessage for users 17 and under and warn them before they send or receive what Apple determines to be "sexually explicit" images. Apple will also notify the parents of users 13 and under if they're using family accounts and opt to send or receive the images anyway. The second update will scan images on users' devices as they're uploaded to iCloud and check them against a database of known child sexual abuse material. Once Apple detects a certain number of matches in an account — around 30 — the flagged images will undergo human review before Apple notifies the National Center for Missing and Exploited Children.
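To make the mechanics concrete, here is a deliberately simplified sketch of threshold-based hash matching. It is not Apple's actual system, which relies on its NeuralHash algorithm and cryptographic machinery such as private set intersection and threshold secret sharing; the names below (perceptual_hash, KNOWN_HASHES, MATCH_THRESHOLD) are invented for illustration.

```python
# Hypothetical, simplified illustration of threshold-based hash matching.
# Apple's real design uses NeuralHash plus cryptographic protocols that are
# not reproduced here; all names and values below are stand-ins.

import hashlib

MATCH_THRESHOLD = 30              # roughly the figure Apple has cited
KNOWN_HASHES = {"a3f9", "b07c"}   # stand-in for a database of known-image hashes


def perceptual_hash(image_bytes: bytes) -> str:
    """Placeholder: a real system would use a perceptual hash, not SHA-256."""
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(uploaded_images: list[bytes]) -> int:
    """Count how many uploaded images match the known-hash database."""
    return sum(perceptual_hash(img) in KNOWN_HASHES for img in uploaded_images)


def should_escalate_for_human_review(uploaded_images: list[bytes]) -> bool:
    """Only accounts crossing the threshold are escalated for human review."""
    return count_matches(uploaded_images) >= MATCH_THRESHOLD
```

The point of the threshold is that no single match triggers review; an account is flagged only after it accumulates enough matches against the known database.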

The backlash to the announcement was swift, and Apple was forced to trot out executives to defend the changes to the press, vowing that this scanning technology won't be used for other purposes and that Apple has gone about these changes in the most privacy-protective way.

None of Apple's defenses stuck. Last week, a group of more than 90 international organizations wrote to CEO Tim Cook, arguing that the updates could put vulnerable youths, particularly LGBTQ+ youths, at risk and that Apple is walking a slippery slope toward expanded government surveillance.

But while the weeks since Apple's announcement have been full of well-founded criticism, they've been short on solutions. The status quo, both sides agree, is intolerable. So, at a time when child sexual abuse material online is spiking, it's worth asking: How do Apple's opponents conduct their risk-benefit analysis, and more importantly, what alternatives do they propose?

To find out, Protocol spoke with Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology, which co-authored the letter to Cook.

This interview has been lightly edited and condensed for clarity.

How much of the opposition to what Apple's doing has to do with what Apple is currently doing, and how much of it has to do with the slippery slope and what else Apple might decide to or be compelled to scan?

It really is very much both. The risks that we describe with regard to, for example, LGBT youth and other vulnerable youth who can be subject to harmful actions taken by a parent who may not be sympathetic to their situation, those are very present. Those are going to happen as a result of the changes that Apple has announced immediately.

There are also a lot of concerns about how the surveillance capabilities that Apple is building into our phones can be expanded and repurposed. It doesn't take a lot of imagination to see how governments around the world will seek to exploit the new surveillance capabilities that Apple is planning to build.

Even if people use the term slippery slope, we've been focused on steps that don't require a lot of sequencing to get to them.

How much of your concern has to do with the fact that it's Apple making these changes, a company that has said it stands for privacy in all contexts, whether or not the U.S. government is asking it to break its encryption? Is it the fact that it's Apple, specifically, doing this that sets off alarm bells?

We would be concerned with any company that has been offering end-to-end encryption that then announces plans to build a back door into previously end-to-end encrypted services. It's not Apple-specific, but yes, we are focused on the fact that Apple is announcing changes that undermine a prior industry standard for providing encryption and security to users of Apple products.

Do you have the same opposition to the kind of hash scanning that Facebook and other social media platforms do with regard to CSAM? And if not, why not?

The problem of addressing child sexual abuse material online is important. We agree that the intentions are good ones. The concerns are different with what Apple is doing here because they are building surveillance capabilities right into our phones. Your phone is what is doing the spying right there. And once it is built into an operating system, which is how this will happen, there are more potential vectors for repurposing [the tools]: not only governments pressuring Apple to expand the hash database to all sorts of other content they find objectionable, but also other apps seeking to use the capability.

But why is that an important distinction, that it's on the operating system rather than a platform like Facebook that could also be compelled to scan for other types of content?

With the surveillance capabilities being built into the operating system and on the device, that opens up the potential for other apps that exist in this walled garden of the Apple device to seek to use that, and so to expand the scanning beyond the specific instance in which Apple has announced it.

So you're saying with the Facebook analogy, maybe Facebook does get pressured to scan for certain content, but at least it's only on Facebook. Is that part of the argument?

The concerns are different. The fear of other governments pressuring a tech company to expand the disfavored content that it's scanning for is not limited to what Apple has just announced.

Right. I just haven't seen the sort of opposition to a company like Facebook, which files the most reports to NCMEC right now of any company. I'm just trying to get at why.

When Facebook announced that it was going to move to end-to-end encrypted messaging for all its messaging services, not only WhatsApp, which Facebook owns, but also messages on the main Facebook platform and the messaging service within Instagram, it drew a letter from then-Attorney General William Barr as well as his counterparts in the U.K. and Australia, urging Mark Zuckerberg to abandon his plans.

The Center for Democracy and Technology at that time organized an open letter to Facebook saying no, actually, the move to end-to-end encryption is a good thing, and you should move forward with that. It's a little bit of a flip of what your question asked. But it is a situation where AG Barr said all these reports from Facebook to NCMEC are going to dry up. And civil society groups that promote strong encryption for all of the value and benefits it can provide — protecting people in so many contexts — wrote this open letter saying this is actually a good thing, and while we support the goal of protecting children, overall on balance, encryption does so much more good.

So there are tradeoffs in any kind of content moderation decision like this. Obviously, child safety advocates would say, doing nothing risks having more CSAM floating around in iCloud and more kids being lured or tricked into sharing photos in iMessage. And for a lot of people, that's an intolerable risk. From your perspective, the risk of this backdoor being misused is too high. So walk me through the risk calculation you're making. Why is the risk of misuse more dangerous than the risk of maintaining the status quo?

First of all, I think it's important to recognize, as we discussed at the beginning of our conversation, that Apple's changes will also create new risks for children. Even with the goal of protecting children held up as your metric, Apple's changes are going to create new risks for vulnerable youth, such as LGBT youth, young people who are transitioning and young people who are not necessarily in a healthy, traditional relationship with the person designated as the parent account on their phone. There are those new risks to young people as well.

Encryption protects privacy and security in so many ways to so many communities. To journalists, to human rights activists, to ordinary people and to governments. The scale of protection that encryption provides is vast, and once you create a backdoor, even where you do it, as Apple seems to have done, in a careful way, and you try to establish policy choices to limit its scope, it's very hard to keep that backdoor to a narrow one once that capability's there.

Based on the first part of what you were saying, it sounds like that is really a critique of the iMessage update. So what if Apple scraps that iMessage update? Does an organization like CDT become any more comfortable with the iCloud update?

You're correct. The change to messages is where they're retreating from the commitment to end-to-end encryption. With uploading photos to iCloud, that's not the same thing because there wasn't a prior end-to-end encrypted service that's now being broken.

But there are also very clear risks to free expression. It's not a huge leap to see how governments will pressure Apple, demand of Apple, or enact legal requirements that apply to Apple and other tech companies: Once you have this capability to scan against the CSAM hash database, let's scan against the [Global Internet Forum to Counter Terrorism] database of terrorist content, and that's a database that has very little transparency. With terrorist content, context really matters. It may be human rights activists or journalists trying to raise awareness of atrocities. What counts as a terrorist is often very politically dependent. Or a government could pressure Apple to scan for any other type of disfavored content.

Once you have the surveillance capability built into everyone's phone, it's much harder for Apple to argue, as it did five years ago in the San Bernardino case, that it's [not] able to meet those government demands. The further technical changes that are required are much smaller.

What would your proposed alternative be for Apple?

CDT just released a new report on August 12. It looks at end-to-end encryption and the best ways to moderate content in order to actually address harmful content. This was in the works well before we had any idea Apple was making this plan. Our timing turned out to be fortuitous. We released the report a week after Apple made its announcement.

It defines end-to-end encryption and looks at several different types of techniques to try to address various types of harmful content. It didn't focus particularly on CSAM. It's all sorts of harmful content, hate speech and so forth.

It points to two particular techniques that can be helpful in detecting unwanted content while still allowing the protection of end-to-end encryption. The first is user reporting: There's a place to click to report harmful content. The other is metadata analysis, which other companies have pointed to as potentially helpful when you're trying to address the problem of child exploitation. It's a technique that, with trained classifiers, can observe behavior that helps detect all sorts of predatory and harmful content. That's one possibility.
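As a rough illustration of what metadata analysis could look like in practice, here is a minimal, hypothetical sketch of scoring an account on behavioral signals alone, without reading any message content. The feature names, weights and threshold are all invented; a real system would use trained classifiers over far richer signals.

```python
# Hypothetical sketch of metadata-only analysis: an account is scored on
# behavioral signals rather than message content, so end-to-end encryption
# is untouched. Feature names and weights are invented for illustration.

from dataclasses import dataclass


@dataclass
class AccountMetadata:
    account_age_days: int
    new_contacts_per_day: float      # previously unknown users messaged per day
    share_of_minor_contacts: float   # fraction of contacts flagged as minors
    blocked_or_reported_count: int   # times other users blocked or reported this account


def risk_score(m: AccountMetadata) -> float:
    """Toy linear score standing in for a trained classifier; returns roughly 0..1."""
    score = 0.0
    score += 0.5 if m.account_age_days < 30 else 0.0
    score += min(m.new_contacts_per_day / 20.0, 1.0)
    score += m.share_of_minor_contacts
    score += min(m.blocked_or_reported_count / 5.0, 1.0)
    return score / 3.5


def flag_for_review(m: AccountMetadata, threshold: float = 0.7) -> bool:
    """Flag only the accounts whose behavioral score crosses the threshold."""
    return risk_score(m) >= threshold
```

The privacy argument for this approach is that the signals describe behavior around messages rather than the messages themselves, so end-to-end encryption stays intact.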

We also talk about some types of techniques that are not compatible with end-to-end encryption. One being client-side scanning, which is what's going on with the Apple announcement.

We also point to a technique that could use more research: machine learning classifiers on the user's device that are fully voluntary on the part of the user, something they choose to turn on, that is fully something they want to have and that isn't adversarial to them. An example would be if I wanted to turn on a classifier that would keep me from receiving sexting images.
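A minimal sketch of what such a voluntary, user-controlled filter might look like follows. It is hypothetical, and the names (user_opted_in, looks_explicit) are invented; the key properties are that nothing runs unless the user opts in, and the result only hides content from the user who asked for the filter, with nothing reported off the device.

```python
# Hypothetical sketch of a fully voluntary, on-device image filter: it runs
# only if the user has opted in, and its only effect is to hide content from
# the user who enabled it. Nothing is reported off the device.

from typing import Optional


def looks_explicit(image_bytes: bytes) -> bool:
    """Stand-in for an on-device ML classifier; always returns False here."""
    return False


def filter_incoming_image(image_bytes: bytes, user_opted_in: bool) -> Optional[bytes]:
    if not user_opted_in:
        return image_bytes   # the feature does nothing unless the user chose it
    if looks_explicit(image_bytes):
        return None          # hide the image for this user; no report is sent anywhere
    return image_bytes
```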

Now, you can understand how those approaches break down when it's a problem like CSAM on a device. User reporting doesn't work; I'm probably not going to report myself. I also wonder about metadata analysis and how that's more privacy-protective. If you suddenly have a machine learning system trying to guess whether my behavior resembles that of a child predator, that seems extremely problematic. Do you think we have a good answer to what Apple's trying to do? Do any of these techniques actually satisfy?

I don't know that we have a perfect solution, but the problem you just pointed to, machine learning classifiers being error-prone, is one of the problems we've pointed to with the surveillance capability that Apple is installing in messaging. That is their own machine learning classifier, designed to detect what they call "explicit content," and that can also be error-prone.

Apple has said that researchers will be able to vet this system to make sure it doesn't expand beyond what Apple has said it's going to do. What do you make of that promise?

It's hard to know in advance how much vetting will be possible. But the other thing I would say is: Transparency can be an important tool for promoting accountability, and we call for transparency to promote accountability in so many contexts, but it doesn't give us a lot of comfort when Apple's key response to our concerns about expansion of capability is: Well, you'll know about it if it happens.

We have voiced a lot of concerns about how governments are going to pressure or demand that Apple expand the reach of these surveillance capabilities to other contexts, to other types of content, and so forth. And when their response is, we don't plan to do that, and if we do implement those changes, researchers will be able to tell, that doesn't give us a lot of comfort that we'll know about it when this feared expansion happens. That's not the same as saying it is not technically possible, the way they said in Apple v. FBI. In Apple v. FBI they said, we'd have to build an entirely new system in order to do what's being demanded of us. Once they've made these changes to build their surveillance capabilities into their operating system and implant them in our devices, it's a much smaller technical change that will be required for the type of expansion and repurposing we're concerned about.
