Child sexual abuse is exploding online. Tech’s best defenses are no match.

A new report argues there's more tech companies can do to stop child sexual abuse material from spreading online without sacrificing privacy.


In many cases, company safeguards are failing to keep pace with the evolving threat of child sexual abuse material.

Photo: Catherine Falls Commercial/Getty Images

Online child sexual abuse material has grown exponentially during the pandemic, and tech's best defenses are no match for it, according to a new report on the threat facing countries around the world.

The report, published last month, was developed by the WeProtect Global Alliance, an NGO that represents nearly 100 governments as well as dozens of companies including giants like Apple, Google, Facebook and Microsoft in their efforts to stop the spread of child sexual exploitation. The report, which also includes a survey of 32 member-companies, found that not only is the sheer volume of child sexual abuse material, or CSAM, increasing, but it's growing more complex and capitalizing on tech's blind spots.

"The scale and rate of change is unprecedented," the report reads.

During the pandemic, global reports of suspected child sexual exploitation to the National Center for Missing and Exploited Children increased 106%, according to the report. That increase has coincided with the growth of troubling trends like online grooming and paid livestreaming of child sexual abuse. At the same time, companies and law enforcement officials are grappling with a rapid increase in "self-generated" content from kids, who may be sending images of themselves consensually to peers only to have those images circulated without their consent later on.

The report finds that while tech companies have come a long way toward addressing this problem over the last decade, in many cases their safeguards are failing to keep pace with the evolving threat.

"The scale of child sexual exploitation and abuse online is increasing. This sustained growth is outstripping our global capacity to respond," the report reads.

A 'perfect storm'

The COVID-19 crisis created a "perfect storm" for CSAM to proliferate online, the report argues. Children spent more time online than ever and offenders had reduced opportunities to commit offline abuses, which increased online demand for imagery.

Increases in reporting don't necessarily equate to an increase in volume, the report's authors caution, but they point to other recent trends that have also exacerbated the problem.

WeProtect's report included research that analyzed conversations in offender forums on the dark web and found that offenders use these forums to exchange best practices. More than two-thirds of the discussions were about technical tools for messaging, exchanging funds or storing content in the cloud. "They're talking about which social media platforms to use, they're talking about communication tools, they're talking about storage facilities," said Iain Drennan, executive director of the WeProtect Global Alliance. "There's that kind of community — in inverted commas — but that kind of exchange of information," he said.

The report also emphasizes the challenges in policing this content on a global scale. The internet makes it easy for offenders to exploit vulnerabilities in whichever country has the weakest technical and regulatory defenses, because it's just as easy to access a site hosted in the US as it is in Europe, Asia or anywhere else in the world.

In developing countries, the dramatic uptick in online adoption has outpaced those countries' ability to protect against these kinds of abuses, Drennan said. "The focus has previously been to get people online, and there's been amazing progress in that," he said. "But you're getting a situation where you don't necessarily have those specialized capabilities."

This is further complicated by the inherently global nature of CSAM. "This is a crime fundamentally where the victim could be in the Philippines, the offender could be in the United States, and they're using a platform that's headquartered in Finland. It's that kind of international dimension," said Drennan. Collecting evidence and prosecuting offenders who are overseas requires careful coordination with international entities like Interpol and Europol, or bilateral collaboration with other countries.

The report also points to an increase in "self-generated" sexual material over the last year. That includes imagery and videos that young people capture themselves, either because they were coerced, or because they voluntarily shared it with someone their own age, who then shared it more broadly without their consent. According to the Internet Watch Foundation, a UK-based non-profit that also works to stop child abuse online, there was a 77% increase between 2019 and 2020 in reports of self-generated material, a category of content that can be particularly difficult to police. "You can have a perfectly healthy consenting relationship between two teens, the relationship breaks up, one of them starts sharing it, then that gets into the internet," said Drennan. "And you've gone from something that was not harmful and that we don't want to criminalize in any way into something that's more problematic. And that's a real challenge for policymakers to try [to] address."

Safety by design

In addition to outlining the scope of the problem, the report also takes stock of what the tech industry has done so far to address it. In February and March of this year, WeProtect and the Technology Coalition, a group of tech companies working to fight child sex abuse on their platforms, conducted a survey of 32 companies and found that 87% of them are already using image-specific hash-based detection tools to automatically block known child sexual abuse material from their platforms.
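
Hash-based detection works by comparing a fingerprint of each uploaded image against a database of fingerprints of already-identified material. Below is a minimal sketch of that matching step, assuming a pre-populated set of known hashes; the function names and the `KNOWN_HASHES` set are illustrative only, and real deployments use perceptual hashes (such as Microsoft's PhotoDNA) rather than exact cryptographic hashes so that resized or re-encoded copies still match.

```python
import hashlib

# Stand-in for an industry hash database of previously identified material.
# Production systems use perceptual hashes (e.g., PhotoDNA), not SHA-256,
# so that visually identical but re-encoded copies still match.
KNOWN_HASHES: set[str] = set()


def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint, used here as a stand-in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def handle_upload(image_bytes: bytes) -> str:
    """Block uploads whose fingerprint matches known material."""
    if fingerprint(image_bytes) in KNOWN_HASHES:
        # A real system would also queue a report (e.g., to NCMEC) here.
        return "blocked"
    return "accepted"
```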

And yet, far fewer companies actually contribute new material to existing hash databases. Only 26% of tech companies make their content classifiers available to other companies, the survey found, indicating a major lack of collaboration.

Sean Litton, the executive director of the Technology Coalition, said tech companies have a responsibility to share "hard lessons learned, to share technology, to share best practices, to share insights." Even if one platform is impenetrable to abuse — and none are — "that leaves all the other platforms for bad actors to exploit," he said.

While companies may be making progress on removing static images, the survey found that only 30% of companies surveyed are using classifiers to detect CSAM in videos. Just 22% of them are detecting this behavior in livestreaming, even as that medium has become a popular new frontier for CSAM.

That needs to change, the report argues. The report also suggests tech companies use techniques such as deterrence messaging, age-estimation tools and digital literacy training. These interventions can include showing users a message when they attempt to search for CSAM, or using AI to scan a user's face and estimate their age.
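
As an illustration of deterrence messaging, the sketch below checks a search query against a blocklist of terms and returns a warning with help resources instead of results. The term list, message text and `run_search` stub are placeholders, not taken from any real deployment; production systems rely on curated term lists maintained with child-safety organizations plus classifiers that catch obfuscated spellings.

```python
# Placeholder blocklist and message; illustrative only.
BLOCKED_TERMS = {"example_blocked_term"}

DETERRENCE_MESSAGE = (
    "Searching for child sexual abuse imagery is illegal. "
    "Confidential support and counselling services are available."
)


def run_search(query: str) -> list[str]:
    """Stub for the normal search backend."""
    return []


def handle_search(query: str) -> dict:
    """Show a deterrence message instead of results for flagged queries."""
    if set(query.lower().split()) & BLOCKED_TERMS:
        return {"results": [], "warning": DETERRENCE_MESSAGE}
    return {"results": run_search(query)}
```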

Some regions, including Australia, are also pursuing an approach known as "safety by design," creating toolkits that tech companies can use to ensure their platforms are considering safety from their inception. "It's creating the technology in such a way that it's child-friendly, and children can't be harmed using it," said Litton. "And it's designing it specifically for children."

Safety or surveillance

The authors of the report advocate for more regulation to protect against online harms to children, as well as new approaches to encryption that would protect users' privacy without making CSAM virtually invisible. But many of the techniques for detecting CSAM come with serious privacy concerns and have raised objections from some of WeProtect's own member-companies.

Privacy experts, like the ACLU's Daniel Kahn Gillmor, worry that features like Apple's proposed child safety features — which WeProtect publicly supports, but which the company has put on hold — can open gateways to infringements on privacy and security. Apple proposed scanning users' iCloud accounts for known CSAM and submitting a report to NCMEC if the volume of material passes a certain threshold. "You want your information being read by the folks that's intended to be read by," Gillmor said. "You don't want whoever is hanging on to your backup to be able to dig into that and find information that you don't like."
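
The threshold idea can be illustrated with a simple per-account counter: matches against the known-hash list accumulate, and nothing is reported until the count passes a preset limit, so an isolated false match reveals nothing. This is only a hedged sketch of the reporting logic, not Apple's actual design, which additionally used cryptographic techniques (private set intersection and threshold secret sharing) so the server learns nothing below the threshold; the threshold value here reflects the roughly 30 matches Apple cited in its announcement.

```python
from collections import defaultdict

# Illustrative threshold; Apple's announcement cited roughly 30 matches
# before any human review or report could occur.
REPORT_THRESHOLD = 30

match_counts: defaultdict[str, int] = defaultdict(int)


def record_match(account_id: str) -> bool:
    """Count one hash match for an account; flag for review only past the threshold."""
    match_counts[account_id] += 1
    if match_counts[account_id] >= REPORT_THRESHOLD:
        # Only at this point, in the design Apple described, would matched
        # content become reviewable and a report to NCMEC be generated.
        return True
    return False
```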

Another proposed feature would scan iMessages on devices of children under 13 and alert their parents if they send or receive sexually explicit imagery. But privacy advocates say that could put vulnerable kids, including most prominently LGBTQ+ youth, at unnecessary risk.

Another concern is miscategorization. In a world where tech platforms use metadata to detect adults who may be grooming young people, what would that mean for, say, a teacher who's regularly in contact with students, Gillmor asked.

There's also the risk of mass surveillance in the name of protection, he argued. "If what we say to kids is, 'Here's your safety tool, and it's safe because someone is surveilling you,' we shouldn't be surprised if those kids become adults who equate surveillance with safety," Gillmor said.

Gillmor is careful to frame the conversation as surveillance versus security rather than privacy versus child safety, because he doesn't see the latter as mutually exclusive. "I view working for people to have effective privacy as also being a child-safety concern," he said.

Tech companies have a long way to go in making sure prevention and detection methods are up to speed and that their platforms provide protections without sacrificing security. But there's no neat and easy solution to such a complicated, multidimensional threat.

Drennan likens it to counterterrorism: "You put the big concrete blocks in front of the stadium — you make it hard," he said. While some perpetrators may slip through, "you immediately lose all of those lower-threat actors, and you can focus law enforcement resources on the really dangerous and high-priority threats."