Protocol | China

I helped build ByteDance's vast censorship machine

I wasn't proud of it, and neither were my coworkers. But that's life in today's China.

A view from outside ByteDance's headquarters in Beijing.

Photo: Emmanuel Wong / Contributor via Getty Images

This is the story of Li An, a pseudonymous former employee at ByteDance, as told to Protocol's Shen Lu.

It was the night Dr. Li Wenliang struggled for his last breath in the emergency room of Wuhan Central Hospital. I, like many Chinese web users, had stayed awake to refresh my Weibo feed constantly for updates on his condition. Dr. Li was an ophthalmologist who sounded the alarm early in the COVID-19 outbreak. He soon faced government intimidation and then contracted the virus. When he passed away in the early hours of Friday, Feb. 7, 2020, I was among many Chinese netizens who expressed grief and outrage at the events on Weibo, only to have my account deleted.

I felt guilt more than anger. At the time, I was a tech worker at ByteDance, where I helped develop tools and platforms for content moderation. In other words, I had helped build the system that censored accounts like mine. I was helping to bury myself in China's ever-expanding cyber grave.

I hadn't received explicit directives about Li Wenliang, but Weibo was certainly not the only Chinese tech company relentlessly deleting posts and accounts that night. I knew ByteDance's army of content moderators was using the tools and algorithms that I helped develop to delete content, change the narrative and alter memories of the suffering and trauma inflicted on Chinese people during the COVID-19 outbreak. I couldn't help but feel every day like I was a tiny cog in a vast, evil machine.

ByteDance is one of China's largest unicorns and creator of short video-sharing app TikTok, its original Chinese version Douyin and news aggregator Toutiao. Last year, when ByteDance was at the center of U.S. controversy over data-sharing with Beijing, it cut its domestic engineers' access to products overseas, including TikTok. TikTok has plans to launch two physical Transparency Centers in Los Angeles and Washington, D.C., to showcase content moderation practices. But in China, content moderation is mostly kept in the shadows.

I was on a central technology team that supports the Trust and Safety team, which sits within ByteDance's core data department. The data department is mainly devoted to developing technologies for short-video platforms. As of early 2020, the technologies we created supported the entire company's content moderation in and outside China, including Douyin at home and its international equivalent, TikTok. About 50 staff worked on the product team, and between 100 and 150 software engineers worked on the technical team. Additionally, ByteDance employed about 20,000 content moderators to monitor content in China. They worked at what are known internally as "bases" (基地) in Tianjin, Chengdu (in Sichuan), Jinan (in Shandong) and other cities. Some were ByteDance employees, others contractors.

My job was to use technology to make the low-level content moderators' work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content.
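A similarity-search tool like the one described above is commonly built on perceptual hashing: each clip's frames are reduced to compact fingerprints, and near-duplicate content is found by comparing fingerprints. The sketch below illustrates that general technique only; the function names, the tiny frames and the distance threshold are all my illustrative assumptions, not ByteDance's actual implementation.

```python
# Minimal sketch of near-duplicate clip lookup via perceptual hashing.
# All names and thresholds are illustrative, not ByteDance's system.

def average_hash(frame):
    """Reduce a grayscale frame (2D list of 0-255 ints) to a bit fingerprint.

    Real systems first downscale each frame to a fixed small size; here we
    assume the frame is already tiny for brevity.
    """
    pixels = [p for row in frame for p in row]
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each bit records whether a pixel is brighter than the frame average.
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def find_similar(query_hash, database, max_distance=1):
    """Return IDs of stored clips whose fingerprints are close to the query."""
    return [clip_id for clip_id, h in database.items()
            if hamming(query_hash, h) <= max_distance]
```

In practice the fingerprints would be indexed for fast lookup rather than scanned linearly, but the core idea is the same: "throw a clip into the database" means hash it and search for nearby hashes.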

When I was at ByteDance, we received multiple requests from the bases to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. The moderators had asked for this because they didn't understand the language. Streamers speaking ethnic languages and dialects that Mandarin speakers don't understand would receive a warning to switch to Mandarin. If they didn't comply, moderators would manually cut off the livestreams, regardless of the actual content. But with an algorithm that did this automatically for Uyghur, the moderators would no longer have to be responsible for missing content that authorities could deem to have instigated "separatism" or "terrorism." We eventually decided not to do it: We didn't have enough Uyghur-language data points in our system, and the most popular livestream rooms were already closely monitored.

The truth is, political speech comprised a tiny fraction of deleted content. Chinese netizens are fluent in self-censorship and know what not to say. ByteDance's platforms — Douyin, Toutiao, Xigua and Huoshan — are mostly entertainment apps. We mostly censored content the Chinese government considers morally hazardous — pornography, lewd conversations, nudity, graphic images and curse words — as well as unauthorized livestreaming sales and content that violated copyright.

But political speech still looms large. What Chinese user-generated content platforms most fear is failing to delete politically sensitive content that later puts the company under heavy government scrutiny. It's a life-and-death matter. Occasionally, ByteDance's content moderation system would go down for a few minutes. It was nerve-wracking because we didn't know what kind of political disaster could occur in that window. As a young unicorn, ByteDance does not have strong government relationships like other tech giants do, so it's walking a tightrope every second.

The team I was part of, content moderation policymakers, plus the army of about 20,000 content moderators, have helped shield ByteDance from major political repercussions and achieve commercial success. ByteDance's powerful algorithms not only can make precise predictions and recommend content to users — one of the things it's best known for in the rest of the world — but can also assist content moderators with swift censorship. Not many tech companies in China have so many resources dedicated to moderating content. Other user-generated content platforms in China have nothing on ByteDance.

Many of my colleagues felt uneasy about what we were doing. Some of them had studied journalism in college. Some were graduates of top universities. They were well-educated and liberal-leaning. We would openly talk from time to time about how our work aided censorship. But we all felt that there was nothing we could do.

A dim light of idealism still burned, of course. Perhaps it was naive of me — I had thought if I tried a bit harder, maybe I could "raise the muzzle of the gun an inch," as they say in Chinese: to let a bit more speech sneak through. Eventually, I learned how limited my influence really was.

When it comes to day-to-day censorship, the Cyberspace Administration of China would frequently issue directives to ByteDance's Content Quality Center (内容质量中心), which oversees the company's domestic moderation operation: sometimes over 100 directives a day. They would then task different teams with applying the specific instructions to both ongoing speech and to past content, which needed to be searched to determine whether it was allowed to stand.

During livestreaming shows, every audio clip would be automatically transcribed into text, allowing algorithms to compare the transcript against a long, constantly updated list of sensitive words, dates and names, and to run it through natural language processing models. Algorithms would then analyze whether the content was risky enough to require individual monitoring.
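The matching step described above amounts to a screening pass over each transcript: scan for listed terms, then flag sessions that look risky for closer monitoring. The sketch below shows only the literal term-matching part; the placeholder terms, the scoring and the threshold are hypothetical assumptions of mine, and a production system would layer NLP models on top.

```python
# Illustrative sketch of screening a livestream transcript against a
# sensitive-term list. Terms and thresholds are hypothetical placeholders.

SENSITIVE_TERMS = {"example-term-a", "example-term-b"}  # placeholder list

def screen_transcript(transcript):
    """Return (term, position) pairs for listed terms found in a transcript."""
    hits = []
    lowered = transcript.lower()
    for term in SENSITIVE_TERMS:
        pos = lowered.find(term)
        if pos != -1:
            # The position lets a reviewer jump to where the term appears.
            hits.append((term, pos))
    return hits

def needs_human_review(transcript, threshold=1):
    """Flag a session for a moderator once enough listed terms appear."""
    return len(screen_transcript(transcript)) >= threshold
```

The recorded positions mirror the workflow described next: a moderator receives not just the flag but the transcript showing exactly where the term appeared.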

If a user mentioned a sensitive term, a content moderator would receive the original video clip and the transcript showing where the term appeared. If the moderator deemed the speech sensitive or inappropriate, they would shut down the ongoing livestreaming session and could even suspend or delete the account. Around politically sensitive holidays, such as Oct. 1 (China's National Day), July 1 (the birthday of the Chinese Communist Party) or major political anniversaries like the anniversary of the 1989 protests and crackdown in Tiananmen Square, the Content Quality Center would generate special lists of sensitive terms for content moderators to use. Influencers enjoyed some special treatment — content moderators were assigned specifically to monitor certain influencers' channels so that their content or accounts would not be mistakenly deleted. Some extremely popular influencers, state media and government agencies were on a ByteDance-generated white list, free from any censorship — their compliance was assumed.
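The tiered treatment described above — white-listed accounts exempt, certain influencers watched by dedicated moderators, everyone else routed to the review queue — is essentially a routing policy. This is a hypothetical sketch of that logic; the account names, tiers and action labels are my assumptions, not ByteDance's internals.

```python
# Hypothetical routing policy for a livestream in which listed terms were
# found, illustrating the tiered treatment described above.

WHITE_LIST = {"state_media_account"}       # assumed: compliance taken for granted
MONITORED_INFLUENCERS = {"big_streamer"}   # assumed: has dedicated moderators

def route_flagged_session(account, hits):
    """Decide what happens to a session given the terms detected in it."""
    if account in WHITE_LIST:
        return "no_action"            # never reviewed, censorship-exempt
    if account in MONITORED_INFLUENCERS:
        return "dedicated_moderator"  # sent to the account's own reviewer
    if hits:
        return "moderator_queue"      # clip + transcript sent for review
    return "no_action"
```

The ordering of the checks is the point: exemption and special handling are decided by who is speaking before anyone looks at what was said.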

Colleagues on my team were not in direct contact with content moderators or internet regulators. The Content Quality Center came up with moderation guidelines and worked directly with base managers on implementation. After major events or sensitive anniversaries, colleagues from the operational side would debrief everyone on what worked and what needed improvement. We were in those meetings to see what we could do to better support the censorship operation.

Our role was to make sure that low-level content moderators could find "harmful and dangerous content" as soon as possible, like fishing needles out of an ocean. We were also tasked with improving censorship efficiency: that is, using as few people as possible to detect as much content as possible that violated ByteDance's community guidelines. I do not recall any major political blowback from the Chinese government during my time at ByteDance, which means we did our jobs.

It was certainly not a job I'd tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, "Now I know who gutted my account." The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.

Dr. Li warned his colleagues and friends about an unknown virus that was encroaching on hospitals in Wuhan. He was punished for that. And for weeks, we had no idea what was really happening because of authorities' cover-up of the severity of the crisis. Around this time last year, many Chinese tech companies were actively deleting posts, videos, diaries and pictures that were not part of the "correct collective memory" that China's governments would later approve. Just imagine: Had any social media platform been able to reject the government's censorship directives and retain Dr. Li and other whistleblowers' warnings, perhaps millions of lives could have been saved.
