In Apple’s privacy vs. child safety debate, there’s no ‘perfect solution’

At a time when child sexual abuse material online is spiking, how do opponents of Apple's child safety features calculate risk, and more importantly, what alternatives do they propose?

Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology

Photo: New America via Flickr

Apple executives had to have known what they were getting themselves into when they announced their new child safety features earlier this month. It wasn't so long ago that Apple itself argued in its battle with the FBI that building a backdoor into its devices would make "users' most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance."

Now, privacy advocates and civil liberties groups are making much the same argument about a series of changes Apple will soon introduce to its operating systems in the U.S.

The first update will scan iMessage for users 17 and under and warn them before they send or receive what Apple determines to be "sexually explicit" images. Apple will also notify the parents of users 12 and under if they're using family accounts and opt to send or receive the images anyway. The second update will scan photos on users' devices as they're uploaded to iCloud, checking them against a database of known child sexual abuse material. Once Apple detects a certain number of matches in an account (around 30), the flagged images will undergo human review before Apple notifies the National Center for Missing and Exploited Children.
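To make the iCloud mechanism concrete, here is a deliberately simplified Python sketch of threshold-based hash matching. It is not Apple's actual protocol, which relies on a perceptual hash (NeuralHash) plus cryptographic techniques such as private set intersection and threshold secret sharing so that sub-threshold matches are never revealed; the function names and the bare-set comparison below are illustrative assumptions only.

```python
# Simplified illustration of threshold-based matching against a database of
# known image hashes. Apple's real design adds a perceptual hash (NeuralHash)
# and cryptography (private set intersection, threshold secret sharing) so
# that matches below the threshold stay hidden; none of that is modeled here.

MATCH_THRESHOLD = 30  # roughly the figure Apple cited before human review

def count_matches(account_hashes: set, known_csam_hashes: set) -> int:
    """Count how many of an account's uploaded-image hashes appear in the known database."""
    return len(account_hashes & known_csam_hashes)

def needs_human_review(account_hashes: set, known_csam_hashes: set) -> bool:
    """Only accounts at or above the match threshold are surfaced for review."""
    return count_matches(account_hashes, known_csam_hashes) >= MATCH_THRESHOLD
```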

The backlash to the announcement was swift, and Apple was forced to trot out executives to defend the changes to the press, vowing that this scanning technology won't be used for other purposes and that Apple has gone about these changes in the most privacy-protective way.

None of Apple's defenses stuck. Last week, a group of more than 90 international organizations wrote to CEO Tim Cook, arguing that the updates could put vulnerable youths, particularly LGBTQ+ youths, at risk and that Apple is starting down a slippery slope toward expanded government surveillance.

But while the weeks since Apple's announcement have been full of well-founded criticism, they've been short on solutions. The status quo, both sides agree, is intolerable. So, at a time when child sexual abuse material online is spiking, it's worth asking: How do Apple's opponents conduct their risk-benefit analysis, and more importantly, what alternatives do they propose?

To find out, Protocol spoke with Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology, which co-authored the letter to Cook.

This interview has been lightly edited and condensed for clarity.

How much of the opposition to what Apple's doing has to do with what Apple is currently doing, and how much of it has to do with the slippery slope and what else Apple might decide to or be compelled to scan?

It really is very much both. The risks that we describe with regard to, for example, LGBT youth and other vulnerable youth who can be subject to harmful actions taken by a parent who may not be sympathetic to their situation, those are very present. Those are going to happen as a result of the changes that Apple has announced immediately.

There are also a lot of concerns about how the surveillance capabilities that Apple is building into our phones can be expanded and repurposed. It doesn't take a lot of imagination to see how governments around the world will seek to exploit the new surveillance capabilities that Apple is planning to build.

Even if people use the term slippery slope, we've been focused on steps that don't require a lot of sequencing to get to them.

How much of your concern has to do with the fact that it's Apple making these changes, a company that has said it stands for privacy in all contexts, whether or not the U.S. government is asking it to break its encryption? Is it the specific fact that Apple is doing this that sets off alarm bells?

We would be concerned with any company that has been offering end-to-end encryption that then announces plans to build a back door into previously end-to-end encrypted services. It's not Apple-specific, but yes, we are focused on the fact that Apple is announcing changes that undermine a prior industry standard for providing encryption and security to users of Apple products.

Do you have the same sort of opposition to the sort of hash scanning that Facebook and other social media platforms do with regard to CSAM? And if not, why not?

The problem of addressing child sexual abuse material online is important. We agree that the intentions are good ones. The concerns are different with what Apple is doing here because they are building surveillance capabilities right into our phones. Your phone is what is doing the spying right there. And once it is built into an operating system, which is how this will happen, there are more potential vectors for repurposing [the tools]: not only governments pressuring Apple to expand the hash database to all sorts of other content they find objectionable, but also other apps seeking to use it.

But why is that an important distinction, that it's on the operating system rather than a platform like Facebook that could also be compelled to scan for other types of content?

With the surveillance capabilities being built into the operating system and on the device, that opens up the potential for other apps that exist in this walled garden of the Apple device to seek to use that, and so to expand the scanning beyond the specific instance in which Apple has announced it.

So you're saying with the Facebook analogy, maybe Facebook does get pressured to scan for certain content, but at least it's only on Facebook. Is that part of the argument?

The concerns are different. The fear of other governments pressuring a tech company to expand the disfavored content that it's scanning for is not limited to what Apple has just announced.

Right. I just haven't seen the sort of opposition to a company like Facebook, which files the most reports to NCMEC right now of any company. I'm just trying to get at why.

When Facebook announced that it was going to move to end-to-end encrypted messaging for all of its messaging services, not only WhatsApp, which Facebook owns, but also messages on the main Facebook platform and the messaging service within Instagram, it drew a letter from then-Attorney General William Barr as well as his counterparts in the U.K. and Australia, urging Mark Zuckerberg to abandon his plans.

The Center for Democracy and Technology at that time organized an open letter to Facebook saying no, actually, the move to end-to-end encryption is a good thing, and you should move forward with that. It's a little bit of a flip of what your question asked. But it is a situation where AG Barr said all these reports from Facebook to NCMEC are going to dry up. And civil society groups that promote strong encryption for all of the value and benefits it can provide — protecting people in so many contexts — wrote this open letter saying this is actually a good thing, and while we support the goal of protecting children, overall on balance, encryption does so much more good.

So there are tradeoffs in any kind of content moderation decision like this. Obviously, child safety advocates would say, doing nothing risks having more CSAM floating around in iCloud and more kids being lured or tricked into sharing photos in iMessage. And for a lot of people, that's an intolerable risk. From your perspective, the risk of this backdoor being misused is too high. So walk me through the risk calculation you're making. Why is the risk of misuse more dangerous than the risk of maintaining the status quo?

First of all, I think it's important to recognize, as we discussed at the beginning of our conversation, that Apple's changes will also create new risks for children. Even with the goal of protecting children held up as your metric, Apple's changes are going to create new risks for vulnerable youth, such as LGBT youth, young people who are transitioning and young people who are not necessarily in a healthy traditional relationship with whoever is designated as the parent account on their phone. Those are new risks to young people as well.

Encryption protects privacy and security in so many ways to so many communities. To journalists, to human rights activists, to ordinary people and to governments. The scale of protection that encryption provides is vast, and once you create a backdoor, even where you do it, as Apple seems to have done, in a careful way, and you try to establish policy choices to limit its scope, it's very hard to keep that backdoor to a narrow one once that capability's there.

With the first part of what you were saying, it sounds like that is really a critique of the iMessage update. So what if Apple scraps that iMessage update? Does an organization like CDT become any more comfortable with the iCloud update?

You're correct. The change to messages is where they're retreating from the commitment to end-to-end encryption. With uploading photos to iCloud, that's not the same thing because there wasn't a prior end-to-end encrypted service that's now being broken.

But there are very clear risks also to free expression. It's not a huge leap to see how governments will pressure Apple, demand of Apple, or enact legal requirements that apply to Apple and other tech companies: Once you have this capability to scan against the CSAM hash database, let's scan against the [Global Internet Forum to Counter Terrorism] database of terrorist content, and that's a database that has very little transparency. With terrorist content, context really matters. It may be human rights activists or journalists trying to raise awareness of atrocities. What counts as a terrorist is often very politically dependent. Or a government could pressure Apple to scan for any other type of disfavored content.

Once you have the surveillance capability built into everyone's phone, it's much harder for Apple to argue, as it did five years ago in the San Bernardino case, that it's [not] able to meet those government demands. The further technical changes that are required are much smaller.

What would your proposed alternative be for Apple?

CDT just released a new report on August 12. It looks at end-to-end encryption and the best ways to moderate content in order to actually address harmful content. It was in the works well before we had any idea Apple was making this plan. Our timing turned out to be fortuitous: We released the report a week after Apple made its announcement.

It defines end-to-end encryption and looks at several different types of techniques to try to address various types of harmful content. It doesn't focus particularly on CSAM; it covers all sorts of harmful content, hate speech and so forth.

It points to two particular techniques that can be helpful in detecting unwanted content while still allowing the protection of end-to-end encryption. The first is user reporting: There's a place to click to report harmful content. The other is metadata analysis, which other companies have pointed to as potentially helpful when you're trying to address the problem of child exploitation. It's a technique where trained classifiers observe behavior, and that can help detect all sorts of predatory and harmful content. That's one possibility.
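As a rough illustration of what metadata analysis might look like in practice, here is a minimal Python sketch that scores an account using only behavioral metadata, never message content. The feature names, thresholds and weights are hypothetical, invented for this example; a real system would train a classifier on labeled data rather than rely on hand-set rules like these.

```python
# Hypothetical sketch of metadata analysis: flagging accounts from behavioral
# signals alone, without reading any (encrypted) message content.
# Feature names, thresholds and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class AccountMetadata:
    account_age_days: int
    messages_sent_per_day: float
    share_of_new_contacts_who_are_minors: float
    times_blocked_or_reported: int

def risk_score(m: AccountMetadata) -> float:
    """Combine metadata signals into a rough score between 0.0 and 1.0."""
    score = 0.0
    if m.account_age_days < 30:
        score += 0.25  # very new account
    if m.messages_sent_per_day > 200:
        score += 0.25  # unusually high outbound volume
    if m.share_of_new_contacts_who_are_minors > 0.8:
        score += 0.25  # contacts skew heavily toward minors
    if m.times_blocked_or_reported >= 3:
        score += 0.25  # repeated negative signals from other users
    return score

def should_flag_for_review(m: AccountMetadata, threshold: float = 0.75) -> bool:
    """Surface the account for human review only above the threshold."""
    return risk_score(m) >= threshold
```

This is also where the interviewer's later objection lands: behavior-based scoring of this kind can easily misfire, which is part of the error-prone-classifier concern raised below.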

We also talk about some types of techniques that are not compatible with end-to-end encryption. One being client-side scanning, which is what's going on with the Apple announcement.

We also point to a technique that could use more research, which is machine learning classifiers on the user's device that are fully voluntary on the part of the user: something they choose to turn on, that is fully something they want to have and that isn't adversarial to them. An example would be if I wanted to turn on a classifier that would keep me from receiving sexting images.
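A minimal sketch of that opt-in, user-controlled idea might look like the following; the classifier stub and function names are assumptions for illustration only. The key property is that the user turns the filter on, the model runs locally, and the only consequence is that the image is hidden from that same user, with nothing reported off the device.

```python
# Minimal sketch of a fully voluntary, on-device filter. The user opts in,
# classification happens locally, and the only action is hiding the image
# from the user who asked for the filter; nothing leaves the device.

def looks_sexually_explicit(image_bytes: bytes) -> bool:
    """Stand-in for an on-device ML classifier (hypothetical placeholder)."""
    return False  # a real implementation would run a local model here

def should_display(image_bytes: bytes, user_opted_in: bool) -> bool:
    """Show the image unless the user enabled the filter and it gets flagged."""
    if not user_opted_in:
        return True  # filter off: show everything
    return not looks_sexually_explicit(image_bytes)
```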

Now, you can understand how those approaches break down when it's a problem like CSAM on a device. User reporting doesn't work; I'm probably not going to report myself. I also wonder about metadata analysis and how that's more privacy-protective. If you suddenly have a machine learning system trying to guess whether my behavior resembles that of a child predator, that seems extremely problematic. Do you think we have a good answer to what Apple's trying to do? Do any of these techniques actually satisfy?

I don't know that we have a perfect solution, but the problem you just pointed to, machine learning classifiers being error-prone, is one of the problems we've pointed to with the surveillance capability that Apple is installing in messaging. That is their own machine learning classifier designed to detect what they call "explicit content," and it can also be error-prone.

Apple has said that researchers will be able to vet this system to make sure it doesn't expand beyond what Apple has said it's going to do. What do you make of that promise?

It's hard to know in advance how much vetting will be possible. But the other thing I would say is: Transparency can be an important tool for promoting accountability, and we call for transparency to promote accountability in so many contexts, but it doesn't give us a lot of comfort when Apple's key response to our concerns about expansion of capability is: Well, you'll know about it if it happens.

We have voiced a lot of concerns about how governments are going to pressure or demand that Apple expand the reach of these surveillance capabilities to other contexts, to other types of content, and so forth. And when their response is, we don't plan to do that, and if we do implement those changes, researchers will be able to tell, that doesn't give us a lot of comfort that we'll know about it when this feared expansion happens. That's not the same as saying it is not technically possible, the way they said in Apple v. FBI. In Apple v. FBI they said, we'd have to build an entirely new system in order to do what's being demanded of us. Once they've made these changes to build their surveillance capabilities into their operating system and implant them in our devices, it's a much smaller technical change that will be required for the type of expansion and repurposing we're concerned about.
