The 2022 midterms will be a major test for TikTok

TikTok was barely a blip in 2018 and still growing in 2020. How will it handle misinformation around the 2022 midterms, especially with high turnover in its trust and safety teams?



Photo: Tomohiro Ohsumi via Getty Images

As the midterm election nears, TikTok has faced unrelenting scrutiny about the role it plays in spreading misinformation and the way influencers and political operatives skirt its advertising rules. But according to seven former employees from TikTok's trust and safety team, the company may have an even more basic problem inhibiting its efforts to secure the midterm election: high turnover among the employees who are supposed to carry out that work.

TikTok is still the new kid on the social media block. In 2018, the up-and-comer was barely a blip in the conversation about U.S. elections. By the 2020 election, it had built out its trust and safety team. But since that time, former employees told Protocol, members of that team have scattered, leaving TikTok with limited muscle memory just when it needs it most. “Since so many people are new, they don’t necessarily have the history or institutional knowledge,” one former employee said.

These former employees attributed the trust and safety team’s high attrition to TikTok’s grueling work culture and a lack of transparency from leadership about access to and policies around user data. “If they don’t stem the tide of all the people they’re losing, it’s going to be hard for them to be effective,” another former employee told Protocol.

TikTok refused to say exactly how many employees have left its trust and safety teams, or how many are employed now. The company also declined to specify exactly how the U.S. trust and safety team is structured, and if that structure has changed since 2020. In September, chief operating officer Vanessa Pappas told Congress that trust and safety is “our largest labor expense for TikTok’s U.S. operations,” with “thousands of people working across safety, privacy, and security on a daily basis.”

Protocol identified 239 people worldwide on LinkedIn who left TikTok’s trust and safety operations since 2021, with 94 of those leaving just this year and 67 based in the U.S. (LinkedIn member information may not fully represent staffing or attrition, due to the potential for fake accounts.) The company posted listings seeking to fill election misinformation roles as recently as October.

“We encourage a culture of transparency and feedback, and are committed to building an equitable platform and business that allows both our community and our employees to thrive,” a TikTok spokesperson told Protocol.

Why misinformation is particularly thorny on TikTok

Civil rights groups have been sounding the alarm about election misinformation for weeks. The Leadership Conference on Civil and Human Rights recently addressed a letter to social media companies urging them to tamp down the “Big Lie” — false claims that President Joe Biden lost the 2020 election to former President Donald Trump — in addition to new misinformation. Conspiracy theories about general fraud within the U.S. election system abound not only on social media but also among a majority of Republican candidates on the ballot this fall.

Every platform is working on ways to tackle mis- and disinformation related to elections, but there are a few factors that make it even harder on TikTok than elsewhere. For one, video can be a challenging medium to analyze: It’s harder to extract information and search for keywords in images and audio than in text. YouTube faces similar challenges, but it’s been around for far longer than TikTok.

And then, of course, there are the challenges that have nothing to do with technology and everything to do with humans. Karan Lala, a fellow at the Integrity Institute and former software engineer at Meta, noted that mass enforcement is difficult because people might tell the same lie in completely different ways.

“Let’s say you review one video and say, ‘oh the content of this video was a lie,’” Lala said. “How do you effectively link that decision to all the videos that might be coming from different creators that phrased the lie in a different way?”

TikTok’s algorithm also largely surfaces content from strangers, which means information spreads far beyond a person’s social circle. You don’t need a following to go viral, so any video can reach an enormous audience. Because of this, and because researchers don’t have API access to TikTok (though the company promises to release an API soon), it’s especially hard for outside researchers and experts to re-create what an average “For You” page might look like.

“How do you keep TikTok’s For You page from picking a relatively obscure video that is harmful and blasting it to millions of people?” Odanga Madung, a researcher with the Mozilla Foundation, asked. “Because that's essentially what I was seeing on a consistent basis.”

Empowering trust and safety teams is critical to halting misinformation, but TikTok is not very transparent internally: it doesn’t even provide an organizational chart to employees. Several former employees told Protocol they felt disconnected from the teams working on TikTok’s algorithm in China, often having to wait for engineers there to respond during crises, for example to unflag critical keywords.

“You need that team to have as much power and be as closely situated to the team that is building the algorithm in and of itself,” Lala said. “You need to be able to disrupt or slightly tweak the outcome of an algorithm based on integrity signals.”

TikTok’s ties to China have also led to heightened scrutiny of its content moderation decisions, even above and beyond accusations of “censorship” that routinely get leveled at other platforms. “TikTok has received a huge amount of criticism for both moderating too much and moderating too little,” said Casey Fiesler, an online communities professor at the University of Colorado Boulder.

Trust and safety needs institutional memory

All of this makes for a complex trust and safety landscape inside TikTok, which was just beginning to take shape during the 2020 election. Like their counterparts at other platforms at the time, TikTok’s employees had to decide how to handle videos discussing Hunter Biden’s laptop. “Do we just take it down and assume it’s all misinformation because it’s unverified?” a former employee told Protocol. “What do we do so that we’re not ‘big bad China’ and we’re not censoring everyone?” The team eventually decided not to take those videos down entirely unless they seemed egregious.

Another challenge during the 2020 election: handling political party-based hype houses. The former employee told Protocol that the Republican Hype House was especially difficult to deal with. The house members kept referencing unsubstantiated QAnon-related claims, but TikTok didn’t want to be accused of suppressing partisan speech. TikTok employees warned the house several times, but never took the account down.

Several former employees told Protocol they were worried about the company’s ability to handle these types of issues amid high turnover. One former TikTok employee said only one of the U.S.-based employees currently working as threat analysts was at TikTok during the 2020 election. TikTok declined to comment on this claim.

David Polgar, founder of All Tech Is Human, said one of the reasons for high trust and safety attrition more broadly is the explosion of the field. Every tech company worth its salt is looking for quality trust and safety employees, he said: “If you’re doing trust and safety for a major platform that is on the up-and-up like TikTok, you are also a really hot commodity for any startup.”

Burnout is common among trust and safety professionals and may also be a factor in the high churn. Even employees who aren’t direct content moderators work amid a constant flow of disturbing content.

That type of turnover isn’t always a bad thing, said Katie Harbath, founder of Anchor Change and former public policy director at Facebook. “You are seeing people that were incubated in some of these other platforms, particularly your Metas and Googles, and also your Twitters and TikToks,” Harbath said. “They’re able to take that experience to other companies that may actually have nothing.”

TikTok, for its part, says it’s learned a lot from 2020 and is putting that knowledge to use this year. The company has already publicly released some of the lessons it learned after the 2020 election, including the need to improve its disinformation detection systems and educate creators on TikTok’s zero political ads policy. TikTok released its in-app election center six weeks earlier this year than in 2020 and is labeling content related to the 2022 midterms with a button leading to the information center. Hashtags like #elections2022 will also lead to the center and TikTok’s community guidelines. While content is being fact-checked, it will be excluded from For You feeds.

Around the world, TikTok has hired more trust and safety employees, opening a Europe, Middle East, and Africa hub in Dublin and expanding its hub in San Francisco. It also launched content advisory councils in the U.S., Asia, and Europe. But former employees fear none of that will be enough without a battle-tested team in place.

Two years after the 2020 election, misinformation about its process and outcome still abounds. Mozilla’s Madung said TikTok will need to remain vigilant in the weeks following the midterms. The overarching goal is to avoid violence on or around Election Day, but Madung said TikTok needs to think about the deeper, pervasive damage caused by misinformation. Lies are persistent.

“Platforms often tend to start the interventions too late, and then exit too early, and don’t recognize that election misinformation is an incredibly durable piece of information,” Madung said.

