Protocol | Policy

In Apple’s privacy vs. child safety debate, there’s no ‘perfect solution’

At a time when child sexual abuse material online is spiking, how do opponents of Apple's child safety features calculate risk, and more importantly, what alternatives do they propose?

Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology


Photo: New America via Flickr

Apple executives had to have known what they were getting themselves into when they announced their new child safety features earlier this month. It wasn't so long ago that Apple itself argued in its battle with the FBI that building a backdoor into its devices would make "users' most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance."

Now, privacy advocates and civil liberties groups are making much the same argument about a series of changes Apple will soon introduce to its operating systems in the U.S.

The first update will scan iMessage for users 17 and under and warn them before they send or receive what Apple determines to be "sexually explicit" images. Apple will also notify the parents of users 13 and under if they're using family accounts and opt to send or receive the images anyway. The second update will scan users' devices for images that have been uploaded to iCloud and are also contained in a database of known child sexual abuse material. Once Apple detects a certain number of matches in an account — around 30 — the flagged images will undergo human review before Apple notifies the National Center for Missing and Exploited Children.
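For illustration only, the match-and-threshold logic described above can be sketched roughly as follows. This is a hypothetical Python sketch, not Apple's implementation: Apple's actual system uses a perceptual hash ("NeuralHash") and a private set intersection protocol, whereas this sketch substitutes a plain cryptographic hash and a local set, and all names below are invented for the example.

```python
# Hypothetical sketch of hash-database matching with a review threshold.
# Apple's real system matches perceptual hashes cryptographically so the
# device never learns the result; this simplified version does neither.
import hashlib

# Stand-in for the database of hashes derived from known CSAM.
KNOWN_CSAM_HASHES: set[str] = set()

# Approximate threshold Apple described before human review is triggered.
MATCH_THRESHOLD = 30

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. A real system must match visually
    # similar images, not only byte-identical ones as SHA-256 does here.
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(uploaded_images: list[bytes]) -> int:
    # Count how many uploaded images match the known-hash database.
    return sum(1 for img in uploaded_images
               if image_hash(img) in KNOWN_CSAM_HASHES)

def flag_for_review(uploaded_images: list[bytes]) -> bool:
    # Human review happens only once the match count crosses the threshold.
    return count_matches(uploaded_images) >= MATCH_THRESHOLD
```

The threshold exists so that a single false-positive hash collision does not, by itself, trigger review or a report.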

The backlash to the announcement was swift, and Apple was forced to trot out executives to defend the changes to the press, vowing that this scanning technology won't be used for other purposes and that Apple has gone about these changes in the most privacy-protective way possible.

None of Apple's defenses stuck. Last week, a group of more than 90 international organizations wrote to CEO Tim Cook, arguing that the updates could put vulnerable youths, particularly LGBTQ+ youths, at risk and that Apple is walking a slippery slope toward expanded government surveillance.

But while the weeks since Apple's announcement have been full of well-founded criticism, they've been short on solutions. The status quo, both sides agree, is intolerable. So, at a time when child sexual abuse material online is spiking, it's worth asking: How do Apple's opponents conduct their risk-benefit analysis, and more importantly, what alternatives do they propose?

To find out, Protocol spoke with Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology, which co-authored the letter to Cook.

This interview has been lightly edited and condensed for clarity.

How much of the opposition to what Apple's doing has to do with what Apple is currently doing, and how much of it has to do with the slippery slope and what else Apple might decide to or be compelled to scan?

It really is very much both. The risks that we describe with regard to, for example, LGBT youth and other vulnerable youth who can be subject to harmful actions taken by a parent who may not be sympathetic to their situation, those are very present. Those are going to happen as a result of the changes that Apple has announced immediately.

There are also a lot of concerns about how the surveillance capabilities that Apple is building into our phones can be expanded and repurposed. It doesn't take a lot of imagination to see how governments around the world will seek to exploit the new surveillance capabilities that Apple is planning to build.

Even if people use the term slippery slope, we've been focused on steps that don't require a lot of sequencing to get to them.

How much of your concern has to do with the fact that it's Apple making these changes? Apple has said it stands for privacy in all contexts, whether or not the U.S. government is asking it to break its encryption. Is it the specific fact that Apple is doing this that sets off alarm bells?

We would be concerned with any company that has been offering end-to-end encryption that then announces plans to build a back door into previously end-to-end encrypted services. It's not Apple-specific, but yes, we are focused on the fact that Apple is announcing changes that undermine a prior industry standard for providing encryption and security to users of Apple products.

Do you have the same sort of opposition to the sort of hash scanning that Facebook and other social media platforms do with regard to CSAM? And if not, why not?

The problem of addressing child sexual abuse material online is important. We agree that the intentions are good ones. The concerns are different with what Apple is doing here because they are building surveillance capabilities right into our phones. Your phone is what is doing the spying right there. And once it is built into an operating system, which is how this will happen, there are more potential vectors for repurposing [the tools]. Not only governments pressuring Apple to expand the hash database to all sorts of other content they find objectionable, but also with other apps.

But why is that an important distinction, that it's on the operating system rather than a platform like Facebook that could also be compelled to scan for other types of content?

With the surveillance capabilities being built into the operating system and on the device, that opens up the potential for other apps that exist in this walled garden of the Apple device to seek to use that, and so to expand the scanning beyond the specific instance in which Apple has announced it.

So you're saying with the Facebook analogy, maybe Facebook does get pressured to scan for certain content, but at least it's only on Facebook. Is that part of the argument?

The concerns are different. The fear of other governments pressuring a tech company to expand the disfavored content that it's scanning for is not limited to what Apple has just announced.

Right. I just haven't seen the sort of opposition to a company like Facebook, which files the most reports to NCMEC right now of any company. I'm just trying to get at why.

When Facebook announced that it was going to move to end-to-end encrypted messaging for all its messaging services — not only WhatsApp, which Facebook owns, but also messages on the main Facebook platform and the messaging service within Instagram — it drew a letter from then-Attorney General William Barr, as well as his counterparts in the U.K. and Australia, urging Mark Zuckerberg to abandon his plans.

The Center for Democracy and Technology at that time organized an open letter to Facebook saying no, actually, the move to end-to-end encryption is a good thing, and you should move forward with that. It's a little bit of a flip of what your question asked. But it is a situation where AG Barr said all these reports from Facebook to NCMEC are going to dry up. And civil society groups that promote strong encryption for all of the value and benefits it can provide — protecting people in so many contexts — wrote this open letter saying this is actually a good thing, and while we support the goal of protecting children, overall on balance, encryption does so much more good.

So there are tradeoffs in any kind of content moderation decision like this. Obviously, child safety advocates would say, doing nothing risks having more CSAM floating around in iCloud and more kids being lured or tricked into sharing photos in iMessage. And for a lot of people, that's an intolerable risk. From your perspective, the risk of this backdoor being misused is too high. So walk me through the risk calculation you're making. Why is the risk of misuse more dangerous than the risk of maintaining the status quo?

First of all, I think it's important to recognize, as we started out discussing at the beginning of our conversation, that Apple's changes will also create new risks for children. Even with the goal of protecting children held up as your metric, Apple's changes are going to create new risks for vulnerable youth, such as LGBT youth, young people who are transitioning and young people who are not necessarily in a healthy, traditional relationship with someone who may be designated as the parent account on their phone. There are those new risks as well to young people.

Encryption protects privacy and security in so many ways to so many communities. To journalists, to human rights activists, to ordinary people and to governments. The scale of protection that encryption provides is vast, and once you create a backdoor, even where you do it, as Apple seems to have done, in a careful way, and you try to establish policy choices to limit its scope, it's very hard to keep that backdoor to a narrow one once that capability's there.

With the first part of what you were saying it sounds like that is a critique really of the iMessage update. So what if Apple scraps that iMessage update? Does an organization like CDT become any more comfortable with the iCloud update?

You're correct. The change to messages is where they're retreating from the commitment to end-to-end encryption. With uploading photos to iCloud, that's not the same thing because there wasn't a prior end-to-end encrypted service that's now being broken.

But there are very clear risks also to free expression. It's not a huge leap to see how governments will pressure Apple, demand of Apple, or enact legal requirements that apply to Apple and other tech companies: Once you have this capability to scan for the CSAM hash database, let's scan for the [Global Internet Forum to Counter Terrorism] database of terrorist content, and that's a database that has very little transparency. With terrorist content, context really matters. It may be human rights activists or journalists trying to raise awareness of atrocities. What counts as a terrorist is often very politically dependent. Or a government could pressure Apple to scan for any other type of disfavored content.

Once you have the surveillance capability built into everyone's phone, it's much harder for Apple to argue, as it did five years ago in the San Bernardino case, that it's [not] able to meet those government demands. The further technical changes that are required are much smaller.

What would your proposed alternative be for Apple?

CDT just released a new report on August 12. It looks at end-to-end encryption, and the best ways to moderate content to actually address the situation with harmful content. This was in the works well before we had any idea Apple was making this plan. Our timing turned out to be fortuitous. We released the report a week after Apple made its announcement.

It defines end-to-end encryption and looks at several different types of techniques to try to address various types of harmful content. It didn't focus particularly on CSAM. It's all sorts of harmful content, hate speech and so forth.

It points to two particular techniques that can be helpful in the detection of unwanted content that still allows the protection of end-to-end encryption. The first is user reporting: There's a place to click to report harmful content. The other is metadata analysis. Metadata analysis is one that other companies have pointed to as being potentially helpful when you're trying to address the problem of child exploitation. This is a technique that can, when they train classifiers, actually observe behavior that can help detect all sorts of predatory and harmful content. That's one possibility.

We also talk about some types of techniques that are not compatible with end-to-end encryption. One being client-side scanning, which is what's going on with the Apple announcement.

We also point to a technique that could use more research: machine learning classifiers on the user's device that are fully voluntary, that the user chooses to turn on, that are fully something they want to have and that aren't adversarial to them. An example would be if I wanted to turn on a classifier that would prevent me from receiving sexting images.

Now, you can understand how those approaches break down when it's a problem like CSAM on a device. User reporting doesn't work. I'm probably not going to report myself. I also wonder about metadata analysis and how that's more privacy protective. If you suddenly have a machine learning system trying to guess whether my behavior likens me to a child predator, that seems extremely problematic. Do you think we have a good answer to what Apple's trying to do? Do any of these techniques actually satisfy?

I don't know that we have a perfect solution, but the problem you just pointed to with machine learning classifiers being error-prone is one of the problems we've pointed to with the surveillance capability that Apple is installing in messaging. That is their own machine learning classifier that's designed to detect what they call "explicit content." That can also be error-prone.

Apple has said that researchers will be able to vet this system to make sure it doesn't expand beyond what Apple has said it's going to do. What do you make of that promise?

It's hard to know in advance how much vetting will be possible. But the other thing I would say is: Transparency can be an important tool for promoting accountability, and we call for transparency to promote accountability in so many contexts, but it doesn't give us a lot of comfort when Apple's key response to our concerns about expansion of capability is: Well, you'll know about it if it happens.

We have voiced a lot of concerns about how governments are going to pressure or demand that Apple expand the reach of these surveillance capabilities to other contexts, to other types of content, and so forth. And when their response is, we don't plan to do that, and if we do implement those changes, researchers will be able to tell, that doesn't give us a lot of comfort that we'll know about it when this feared expansion happens. That's not the same as saying it is not technically possible, the way they said in Apple v. FBI. In Apple v. FBI they said, we'd have to build an entirely new system in order to do what's being demanded of us. Once they've made these changes to build their surveillance capabilities into their operating system and implant them in our devices, it's a much smaller technical change that will be required for the type of expansion and repurposing we're concerned about.

Protocol | Enterprise

How Cloudflare thinks it can become ‘the fourth major public cloud’

With its new low-cost R2 cloud storage service, Cloudflare is jumping into direct competition with the AWS service that launched the cloud computing revolution.

Cloudflare will not charge data-egress fees for customers using R2, taking direct aim at the fees AWS charges developers to move data out of its widely popular S3 storage service.

Photo: Martina Albertazzi/Bloomberg via Getty Images

Cloudflare is ready to launch a new cloud object storage service that promises to be cheaper than the established alternatives, a step the company believes will catapult it into direct competition with AWS and other cloud providers.

The service will be called R2 — "one less than S3," quipped Cloudflare CEO Matthew Prince in an interview with Protocol ahead of Cloudflare's announcement Tuesday morning. Cloudflare will not charge data-egress fees for customers using R2, taking direct aim at the fees AWS charges developers to move data out of its widely popular S3 storage service.

Tom Krazit

Tom Krazit (@tomkrazit) is Protocol's enterprise editor, covering cloud computing and enterprise technology out of the Pacific Northwest. He has written and edited stories about the technology industry for almost two decades for publications such as IDG, CNET, paidContent, and GeekWire, and served as executive editor of Gigaom and Structure.

The pandemic won't be over until the economy recovers. While cities, states and regions across the U.S. are grappling with new variants, shifting mask policies and other factors that directly impact businesses large and small, it is nevertheless time for brands and enterprises to jumpstart COVID-19 recovery strategies.

Data will undoubtedly be critical to such strategies, but there is one type of data in particular that is poised to yield greater impact than ever in the COVID-19 Recovery Era: location data.

Michele Morelli, Foursquare
As SVP of Marketing, Michele is responsible for overseeing the brand strategy, communications, and product and performance marketing of Foursquare’s apps and enterprise products. Prior to joining Foursquare, Michele held several senior leadership positions with wide-ranging responsibilities at AOL, Toluna, Citibank and Yahoo!.

VR pioneer The Void is plotting a comeback

Assets of the location-based VR startup have been acquired by a former investor, who plans a relaunch with key former team members.

The Void's New York outpost closed during the pandemic. Now, the company is planning a comeback under new ownership.

Photo: The Void

Location-based VR pioneer The Void may rise from the ashes next year: A former investor has acquired key assets of the defunct startup and is now looking to relaunch it with key team members, Protocol has learned. The company is said to be actively fundraising, and is getting ready to start hiring additional talent soon.

The Void's patents and trademarks were recently acquired by Hyper Reality Partners, a company headed by former OneWeb CEO Adrian Steckel, who also used to be an investor in and board member of The Void. Hyper Reality Partners is actively fundraising for a relaunch of the VR startup, and is said to have raised as much as $20 million already, according to an industry insider.

Janko Roettgers

Janko Roettgers (@jank0) is a senior reporter at Protocol, reporting on the shifting power dynamics between tech, media, and entertainment, including the impact of new technologies. Previously, Janko was Variety's first-ever technology writer in San Francisco, where he covered big tech and emerging technologies. He has reported for Gigaom, Frankfurter Rundschau, Berliner Zeitung, and ORF, among others. He has written three books on consumer cord-cutting and online music and co-edited an anthology on internet subcultures. He lives with his family in Oakland.

Protocol | Workplace

A new McKinsey study shows that women do more emotional labor at work

The 2021 Women in the Workplace report from McKinsey found that women are far more likely than men to help their teams manage time and work-life balance and provide emotional support.

Senior leaders who identify as women were 60% more likely to provide emotional support to their teams and 26% more likely to help team members navigate work/life challenges, according to the report.

Photo: Luis Alvarez via Getty Images

Over the last year, emotional support, time management skills and work-life balance have become drastically more important and difficult in the workplace — and women leaders were far more likely than men to step in and do that work for their teams, according to the latest iteration of McKinsey's annual Women in the Workplace report.

Senior leaders who identify as women were 60% more likely to provide emotional support to their teams, 24% more likely to ensure their teams' workload is manageable and 26% more likely to help team members navigate work/life challenges, according to the report. In addition, about one in five women senior leaders spend a substantial amount of time on DEI work that is not central to their job, compared to less than one in 10 male senior leaders.

Anna Kramer

Anna Kramer is a reporter at Protocol (Twitter: @anna_c_kramer), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.

Amazon needs New World’s launch to be a success

New World arrives Tuesday. Whether it flops could determine the future of Amazon Games.

New World launches on Tuesday, after four delays. It could be Amazon's first big hit.

Image: Amazon

Amazon's New World launches on Tuesday, marking the end of a long and bumpy road to release day for the company's most pivotal video game release to date. There's a lot riding on New World, a massively multiplayer online game in the vein of iconic successes like Blizzard's long-running World of Warcraft and Square Enix's immensely popular Final Fantasy XIV.

If the game succeeds, New World will mark a rare success for a technology company in the gaming space. With the exception of Microsoft, which entered the console game industry nearly two decades ago, tech firms have tried time and again to use their engineering talent and resources to crack the code behind making successful video games. Almost every attempt has failed, but Amazon is the closest to having a hit on its hands. If it flops, we could see Amazon's gaming ambitions go the way of Google's.

Nick Statt
Nick Statt is Protocol's video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at