Policy

In Apple’s privacy vs. child safety debate, there’s no ‘perfect solution’

At a time when child sexual abuse material online is spiking, how do opponents of Apple's child safety features calculate risk, and more importantly, what alternatives do they propose?

Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology

Photo: New America via Flickr

Apple executives had to have known what they were getting themselves into when they announced their new child safety features earlier this month. It wasn't so long ago that Apple itself argued in its battle with the FBI that building a backdoor into its devices would make "users' most confidential and personal information vulnerable to hackers, identity thieves, hostile foreign agents, and unwarranted government surveillance."

Now, privacy advocates and civil liberties groups are making much the same argument about a series of changes Apple will soon introduce to its operating systems in the U.S.

The first update will scan iMessage for users 17 and under and warn them before they send or receive what Apple determines to be "sexually explicit" images. Apple will also notify the parents of users 13 and under if they're on family accounts and opt to send or receive the images anyway. The second update will scan images on users' devices as they're uploaded to iCloud, checking them against a database of known child sexual abuse material. Once Apple detects a certain number of matches in an account — around 30 — the flagged images will undergo human review before Apple notifies the National Center for Missing and Exploited Children.
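For a rough sense of the mechanism being described, the sketch below shows the basic idea of checking image hashes against a list of known material and escalating an account only after a threshold of matches. It is a deliberately simplified, hypothetical illustration: Apple's actual design pairs a perceptual hash with cryptographic techniques such as private set intersection and threshold secret sharing, none of which are modeled here, and every name and value in the snippet is an assumption rather than published Apple code.

```python
import hashlib

# Hypothetical stand-ins; a real system would use a vetted hash database
# and a perceptual hash that tolerates resizing and re-encoding.
KNOWN_HASHES = {"a3f1...", "9b07..."}  # placeholder digests
MATCH_THRESHOLD = 30  # roughly the figure Apple has cited publicly


def hash_image(image_bytes: bytes) -> str:
    # Cryptographic digest used purely for illustration, not a perceptual hash.
    return hashlib.sha256(image_bytes).hexdigest()


def account_needs_review(uploaded_images: list[bytes]) -> bool:
    # Count how many uploads match the known-material list and flag the
    # account for human review only once the threshold is crossed.
    matches = sum(1 for img in uploaded_images if hash_image(img) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```

The threshold is the load-bearing design choice: below it, Apple has said, the system is meant to reveal nothing about an account's photos, which is why the roughly 30-match figure features so prominently in the company's public defense.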

The backlash to the announcement was swift, and Apple was forced to trot out executives to defend the changes to the press, vowing that the scanning technology wouldn't be used for other purposes and that Apple had gone about the changes in the most privacy-protective way.

None of Apple's defenses stuck. Last week, a group of more than 90 international organizations wrote to CEO Tim Cook, arguing that the updates could put vulnerable youths, particularly LGBTQ+ youths, at risk and that Apple is walking a slippery slope toward expanded government surveillance.

But while the weeks since Apple's announcement have been full of well-founded criticism, they've been short on solutions. The status quo, both sides agree, is intolerable. So, at a time when child sexual abuse material online is spiking, it's worth asking: How do Apple's opponents conduct their risk-benefit analysis, and more importantly, what alternatives do they propose?

To find out, Protocol spoke with Sharon Bradford Franklin, co-director of the Security and Surveillance Project at the Center for Democracy and Technology, which co-authored the letter to Cook.

This interview has been lightly edited and condensed for clarity.

How much of the opposition to what Apple's doing has to do with what Apple is currently doing, and how much of it has to do with the slippery slope and what else Apple might decide to or be compelled to scan?

It really is very much both. The risks that we describe with regard to, for example, LGBT youth and other vulnerable youth who can be subject to harmful actions taken by a parent who may not be sympathetic to their situation, those are very present. Those are going to happen immediately as a result of the changes that Apple has announced.

There are also a lot of concerns about how the surveillance capabilities that Apple is building into our phones can be expanded and repurposed. It doesn't take a lot of imagination to see how governments around the world will seek to exploit the new surveillance capabilities that Apple is planning to build.

Even if people use the term slippery slope, we've been focused on steps that don't require a lot of sequencing to get to them.

How much of your concern has to do with the fact that it's Apple making these changes? Apple has said it stands for privacy in all contexts, whether or not the U.S. government is asking it to break its encryption. Is it the fact that Apple specifically is doing this that sets off alarm bells?

We would be concerned with any company that has been offering end-to-end encryption that then announces plans to build a back door into previously end-to-end encrypted services. It's not Apple-specific, but yes, we are focused on the fact that Apple is announcing changes that undermine a prior industry standard for providing encryption and security to users of Apple products.

Do you have the same sort of opposition to the sort of hash scanning that Facebook and other social media platforms do with regard to CSAM? And if not, why not?

The problem of addressing child sexual abuse material online is important. We agree that the intentions are good ones. The concerns are different with what Apple is doing here because they are building surveillance capabilities right into our phones. Your phone is what is doing the spying right there. And once it is built into an operating system, which is how this will happen, there are more potential vectors for repurposing [the tools]. Not only governments pressuring Apple to expand the hash database to all sorts of other content they find objectionable, but also with other apps.

But why is that an important distinction, that it's on the operating system rather than a platform like Facebook that could also be compelled to scan for other types of content?

With the surveillance capabilities being built into the operating system and on the device, that opens up the potential for other apps that exist in this walled garden of the Apple device to seek to use that, and so to expand the scanning beyond the specific instance in which Apple has announced it.

So you're saying with the Facebook analogy, maybe Facebook does get pressured to scan for certain content, but at least it's only on Facebook. Is that part of the argument?

The concerns are different. The fear of other governments pressuring a tech company to expand the disfavored content that it's scanning for is not limited to what Apple has just announced.

Right. I just haven't seen the same sort of opposition to a company like Facebook, which currently files more reports to NCMEC than any other company. I'm just trying to get at why.

When Facebook announced that it was going to move to end-to-end encrypted messaging for all its messaging services, not only WhatsApp, which Facebook owns, but also messages on the main Facebook platform and the messaging service within Instagram, it drew a letter from then-Attorney General William Barr as well as his counterparts in the U.K. and Australia, urging Mark Zuckerberg to abandon his plans.

The Center for Democracy and Technology at that time organized an open letter to Facebook saying no, actually, the move to end-to-end encryption is a good thing, and you should move forward with that. It's a little bit of a flip of what your question asked. But it is a situation where AG Barr said all these reports from Facebook to NCMEC are going to dry up. And civil society groups that promote strong encryption for all of the value and benefits it can provide — protecting people in so many contexts — wrote this open letter saying this is actually a good thing, and while we support the goal of protecting children, overall on balance, encryption does so much more good.

So there are tradeoffs in any kind of content moderation decision like this. Obviously, child safety advocates would say, doing nothing risks having more CSAM floating around in iCloud and more kids being lured or tricked into sharing photos in iMessage. And for a lot of people, that's an intolerable risk. From your perspective, the risk of this backdoor being misused is too high. So walk me through the risk calculation you're making. Why is the risk of misuse more dangerous than the risk of maintaining the status quo?

First of all, I think it's important to recognize, as we discussed at the beginning of our conversation, that Apple's changes will also create new risks for children. Even with the goal of protecting children held up as your metric, Apple's changes are going to create new risks for vulnerable youth, such as LGBT youth, young people who are transitioning and young people who are not necessarily in a healthy, traditional relationship with someone who may be designated as the parent account on their phone. There are those new risks as well to young people.

Encryption protects privacy and security in so many ways to so many communities. To journalists, to human rights activists, to ordinary people and to governments. The scale of protection that encryption provides is vast, and once you create a backdoor, even where you do it, as Apple seems to have done, in a careful way, and you try to establish policy choices to limit its scope, it's very hard to keep that backdoor to a narrow one once that capability's there.

With the first part of what you were saying, it sounds like that is really a critique of the iMessage update. So what if Apple scraps that iMessage update? Does an organization like CDT become any more comfortable with the iCloud update?

You're correct. The change to messages is where they're retreating from the commitment to end-to-end encryption. With uploading photos to iCloud, that's not the same thing because there wasn't a prior end-to-end encrypted service that's now being broken.

But there are very clear risks also to free expression. It's not a huge leap to see how governments will pressure Apple, demand of Apple, or enact legal requirements that apply to Apple and other tech companies: Once you have this capability to scan for the CSAM hash database, let's scan for the [Global Internet Forum to Counter Terrorism] database of terrorist content, and that's a database that has very little transparency. With terrorist content, context really matters. It may be human rights activists or journalists trying to raise awareness of atrocities. What counts as a terrorist is often very politically dependent. Or a government could pressure Apple to scan for any other type of disfavored content.

Once you have the surveillance capability built into everyone's phone, it's much harder for Apple to argue, as it did five years ago in the San Bernardino case, that it's [not] able to meet those government demands. The further technical changes that are required are much smaller.

What would your proposed alternative be for Apple?

CDT just released a new report on August 12. It looks at end-to-end encryption and the best ways to actually moderate harmful content. This was in the works well before we had any idea Apple was making this plan. Our timing turned out to be fortuitous. We released the report a week after Apple made its announcement.

It defines end-to-end encryption and looks at several different types of techniques to try to address various types of harmful content. It doesn't focus particularly on CSAM; it covers all sorts of harmful content, hate speech and so forth.

It points to two particular techniques that can be helpful in detecting unwanted content while still allowing the protection of end-to-end encryption. The first is user reporting: There's a place to click to report harmful content. The other is metadata analysis. Metadata analysis is one that other companies have pointed to as being potentially helpful when you're trying to address the problem of child exploitation. It's a technique in which trained classifiers observe behavioral patterns, which can help detect all sorts of predatory and harmful content. That's one possibility.

We also talk about some types of techniques that are not compatible with end-to-end encryption. One being client-side scanning, which is what's going on with the Apple announcement.

We also point to a technique that could use more research: machine learning classifiers on the user's device that are fully voluntary, that the user chooses to turn on, that are fully something they want to have and that aren't adversarial to them. An example would be if I wanted to turn on a classifier that would keep me from receiving sexting images.
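As a concrete illustration of what such a voluntary, user-controlled filter might look like, here is a minimal sketch: the user opts in, a classifier of their choosing runs locally, and nothing is reported off the device. The function and setting names are assumptions made up for this example, not any real Apple or CDT interface.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FilterSettings:
    # Off by default; the user has to turn it on themselves.
    block_unwanted_images: bool = False


def should_hide(image_bytes: bytes,
                settings: FilterSettings,
                classify: Callable[[bytes], bool]) -> bool:
    # Hide an incoming image only if the user enabled the filter and their
    # chosen local classifier flags it. No report leaves the device.
    if not settings.block_unwanted_images:
        return False
    return classify(image_bytes)
```

The design point Franklin is describing is captured in the first branch: the check does nothing unless the user has asked for it, the opposite of a scanner that runs on everyone's device by default.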

Now, you can understand how those approaches break down when it's a problem like CSAM on a device. User reporting doesn't work. I'm probably not going to report myself. I also wonder about metadata analysis and how that's more privacy-protective. If you suddenly have a machine learning system trying to guess whether my behavior resembles that of a child predator, that seems extremely problematic. Do you think we have a good answer to what Apple's trying to do? Do any of these techniques actually satisfy?

I don't know that we have a perfect solution, but the problem you just pointed to, with machine learning classifiers being error-prone, is one of the problems we've pointed to with the surveillance capability that Apple is installing in messaging. That is their own machine learning classifier that's designed to detect what they call "explicit content." That can also be error-prone.

Apple has said that researchers will be able to vet this system to make sure it doesn't expand beyond what Apple has said it's going to do. What do you make of that promise?

It's hard to know in advance how much vetting will be possible. But the other thing I would say is: Transparency can be an important tool for promoting accountability, and we call for transparency to promote accountability in so many contexts, but it doesn't give us a lot of comfort when Apple's key response to our concerns about expansion of capability is: Well, you'll know about it if it happens.

We have voiced a lot of concerns about how governments are going to pressure or demand that Apple expand the reach of these surveillance capabilities to other contexts, to other types of content, and so forth. And when their response is, we don't plan to do that, and if we do implement those changes, researchers will be able to tell, that doesn't give us a lot of comfort that we'll know about it when this feared expansion happens. That's not the same as saying it is not technically possible, the way they said in Apple v. FBI. In Apple v. FBI they said, we'd have to build an entirely new system in order to do what's being demanded of us. Once they've made these changes to build their surveillance capabilities into their operating system and implant them in our devices, it's a much smaller technical change that will be required for the type of expansion and repurposing we're concerned about.
