Policy

TikTok is testing PG-13 filters to protect younger users

TikTok is trying to keep content with “strong profanity or adult innuendo” out of young users’ feeds.

TikTok is adjusting its algorithm in an effort to promote more age-appropriate content to younger users.

Photo: Solen Feyissa/Unsplash

TikTok is experimenting with a rating system that vets the age-appropriateness of videos in order to keep more mature content from reaching younger users.

The system, which is currently being tested on a small scale, borrows from the familiar rating system already used for film — G, PG, PG-13 and so on. But instead of showing those ratings to users, TikTok will use the ratings internally to categorize videos on the back end and automatically decide which users to show them to.

“When the system is fully launched, content that we've identified as containing overtly mature themes could be restricted from teens,” Tracy Elizabeth, TikTok’s global issue policy lead, said on a call with reporters. “And for content that has less overtly mature themes, or even just less mature themes, our community members are going to be able to choose from the comfort zones or content maturity that they would prefer to skip.”

The company is also working on ways to allow creators themselves to target mature audiences and prevent their videos from being shown to young users. “We are drawing upon the types of content rating standards and maturity ratings expectations that regulators across the globe already have in place,” said Elizabeth. “So we're learning from them in order to make that type of approach applicable to our unique platform.”

Lawmakers and regulators around the world have been stepping up pressure on social media platforms, particularly TikTok, to protect kids online. The U.K.’s Age Appropriate Design Code, which went into effect in September, prompted a broad range of tech companies to change the way they deal with data associated with teenage users.

TikTok’s experiment with rating videos tries to tackle a different problem: Once kids turn 13, they largely have the same experience on social media that adult users have. Adjusting the algorithm to promote more age-appropriate content to younger users could address some of those concerns.

Of course, in order for that to work, TikTok still needs to know how old its users are — and if the numbers are any indication, that’s a particularly difficult problem for TikTok. The company’s most recent content enforcement report, which also went live Tuesday, shows TikTok removed more than 12 million suspected underage accounts in the third quarter of 2021. That’s up from more than 11 million accounts in the prior quarter. Facebook, by contrast, has previously reported removing about 600,000 underage Instagram accounts in a quarter, a significantly lower number for a substantially larger company.

Explaining the contrast, Elizabeth said the company is aggressive in policing underage accounts and that “when in doubt, we're going to err on the side of being conservative and cautious.”

But determining users’ ages isn’t the only hurdle. The new rating system also opens TikTok up to misjudging the appropriateness of the content itself, which could end up limiting the exposure that videos labeled as mature get on the platform.

YouTube, for one, faced widespread backlash when users alleged that its Restricted Mode feature was blocking certain videos with LGBTQ+ themes. “Our system sometimes makes mistakes in understanding context and nuances when it assesses which videos to make available in Restricted Mode,” the company said at the time, before manually changing its restriction on a handful of videos. The company has continued to face allegations of censorship by YouTubers, who say the company unfairly demonetizes videos by queer creators. As TikTok expands the ratings test, it will no doubt face similar questions.

TikTok’s latest report also showed the company removed more than 91 million videos from the platform in the third quarter of 2021, the vast majority of which were removed before users reported them and before they had received any views.

In addition to removal stats, the company announced changes to its content policies, including banning content that includes dead-naming, misgendering or misogyny, or that promotes conversion therapy. The company will also no longer promote videos related to disordered eating.

On the call with reporters, TikTok’s head of U.S. Safety, Eric Han, talked about using this year “to mature as a safety team and as a platform.” While he didn’t mention it outright, that includes emulating systems that more established U.S. tech companies, most notably Meta, already have in place. One such system Han mentioned is coming up with a “prevalence” measurement, which is a number that shows not how many videos have been removed from the platform, but how much violative content is actually being seen by users.
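Han didn’t spell out how the metric would be calculated, but prevalence, as platforms like Meta report it, is typically the share of sampled content views that land on violating content, weighting by views rather than by number of videos. A minimal sketch of that idea (the function name, sample data, and labeling are illustrative assumptions, not TikTok’s actual methodology):

```python
def prevalence(view_log, is_violative):
    """Estimate prevalence: the fraction of sampled content views that
    landed on violative videos (views-weighted, not video-weighted)."""
    if not view_log:
        return 0.0
    violative_views = sum(1 for video_id in view_log if is_violative(video_id))
    return violative_views / len(view_log)

# Hypothetical sample: five logged views, one of which hit a flagged video.
views = ["v1", "v2", "v3", "v1", "v4"]
flagged = {"v3"}
print(f"{prevalence(views, lambda v: v in flagged):.1%}")  # 20.0%
```

The design choice matters: a removal count measures enforcement effort, while a views-weighted prevalence rate measures what users actually encountered, which is why it is considered the more meaningful safety metric.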

“Our approach has to become more sophisticated, especially as we grow and mature as a platform,” Han said.

Update: The headline of this article was updated to better reflect the scope of the filters that TikTok is testing. Updated Feb. 8, 2022.
