China

Censored word lists are 'proprietary assets' for Chinese big tech

A Q&A with a former Weibo censor on how many censors ByteDance hires, where the new 'red lines' lie and what's different under Xi Jinping.

A view from outside ByteDance's headquarters in Beijing.
Emmanuel Wong / Contributor via Getty Images

Each year, around the anniversary of the 1989 Tiananmen pro-democracy protests and ensuing crackdown, censorship tightens in China. This year, web users found the clampdown to be business as usual. Social media users and even video gamers discovered they could not change their profile images. Internet companies prevented users from sharing content "due to maintenance." The candle emoji disappeared from WeChat's default emoji collection. Weibo users reported account deletions or suspensions after they shared images of candles, even when they didn't mention a specific event or cause.

The moves felt familiar, but censorship has nevertheless been tightening. As Protocol | China has reported, several major hate campaigns led by ultra-nationalist influencers, or "Red Vs," have triggered widespread censorship in recent months. In the weeks leading up to this year's June 4 anniversary, analysts who track censorship in China observed an uptick in seemingly arbitrary censorship, as well as harsher punishment for speech infractions. Last weekend, Chinese web users reported that "lie flat," a popular term for retreating from the rat race in protest against cutthroat competition, had been censored.

Protocol spoke about these evolving censorship measures with Liu Lipeng, who worked as an internet censor and a content quality manager at several Chinese tech companies, including Weibo, for nearly 10 years. Liu is currently an editor at China Digital Times, a U.S.-based publication tracking censorship in China. Liu has recently made public over 1,000 internal memos that he saved while working on Weibo's content moderation team, as well as government directives and lists of censored content he received while working at Le.com, a Beijing-based video streaming company.

The transcript below has been lightly edited for clarity and sequencing.

Protocol: Can social media platforms achieve censorship largely through technology, or is it still a labor-intensive undertaking?

Liu: Human censors are still a critical part of it. One reason ByteDance has become so successful is that it hires the most censors of any social media company. Though they boast about their AI capabilities, they spend the most on manpower to moderate content; they have at least 10,000 content moderators in Tianjin alone. I think Weibo's content moderation apparatus is just 10% the size of ByteDance's. When I worked for Weibo, they only hired about 200 censors.

Why aren't other tech companies hiring more censors, then?

It's costly. If the profitability of your product isn't there yet, you won't be able to expand your censorship staff drastically. To some degree, a Chinese tech company's censorship mechanism determines the kind of social media product it can offer. ByteDance is able to offer a host of features in its products because of the size of its content moderation team.

Are companies outsourcing censorship to third-party companies?

Startups do that, but big tech companies have in-house censorship teams because they won't risk leaking their own data. The bottom line is that every user-generated content platform needs to be closely monitored; if you don't do that, you'll be forced to close shop. The censorship mechanisms themselves are now subject to inspection: since 2018, social media companies have had to carry out "security evaluations," which must be reviewed and approved by authorities.

How valuable is the censored list to each company?

It's their proprietary asset. Why? No one will hand it to you. You can't communicate openly about what needs to be censored. Authorities definitely won't give you a specific list. So you have to come up with your own list. And if you do it well, that will give you a leg up on the competition.

Are you saying the companies often have to guess what's sensitive?

It's a mix of explicit directives from censorship authorities and companies' own initiative. For example, Douban recently suspended groups related to "lie flat," and the term is censored on other platforms as well. But in this case, the platforms were probably deleting content preemptively rather than acting on an explicit directive, because the degree to which the term is scrubbed differs across platforms.

Who are the censorship authorities, exactly?

There are many at the central and local levels. It's a shifting list, and infighting often occurs among them over jurisdictions. The main one is the Cyberspace Administration of China, which has various local branches. Then the public security organs have their internet police and security apparatus. Each local government has set up an Internet Culture Management Office. The propaganda organs can also direct companies to censor content.

These agencies can distribute so-called "harmful samples" that contain text, images, videos or links that need to be censored. Many of the banned words are distilled from those "harmful samples."
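For readers unfamiliar with how such lists translate into day-to-day moderation, here is a minimal, hypothetical sketch in Python of the kind of keyword filter Liu describes: posts that match a banned term are blocked outright, and everything else is routed to human reviewers. The terms, function names and routing logic are illustrative assumptions, not any company's actual system.

```python
# Hypothetical sketch of a banned-word filter in a moderation pipeline.
# The blocklist terms below are placeholders, not real directives.
BLOCKLIST = {"example banned term", "another banned phrase"}

def screen_post(text: str) -> str:
    """Return a routing decision for a user post.

    'block'  -- contains a banned term, reject outright.
    'review' -- no exact match, send to human moderators (the
                labor-intensive step Liu describes).
    """
    lowered = text.lower()
    for term in BLOCKLIST:
        if term in lowered:
            return "block"
    return "review"

if __name__ == "__main__":
    print(screen_post("An ordinary post"))                 # -> "review"
    print(screen_post("Contains another banned phrase"))   # -> "block"
```

In practice, as Liu notes, exact-match lists like this are only the first layer; ambiguous or novel content still falls to human censors, which is why headcount matters so much.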

Why did you decide to keep the internal memos from Weibo? And what made you come forward and release them?

Before I joined Weibo, I had hoped to work for a social media company and provide value to users through my work. But I couldn't pursue my professional goal, so I left. I felt it was my responsibility to preserve history. Every day, things disappeared from the internet, but I got to keep the photo negatives, which I believe have historical value. It's personally gratifying to publish those records and now work against censorship. In some way, I am making up for my past engagement in censorship.

Many Chinese web users now feel anything they post online can trigger censorship. Is there still a red line?

The censors' strategy is to make you feel that the red line no longer exists, scaring you into complete self-censorship. It's always a cat-and-mouse game. Once censors realize users have tested a red line, they move it. The red line has become a moving target.

How has censorship evolved over the years?

It's tightened over time, of course. From [censorship of] Hong Kong and Xinjiang to COVID-19, the space for discussion is shrinking by the day. Political discussions and social issues have always been sensitive, but the scale of censorship is far bigger these days. Before Xi Jinping came to power, the focus of censorship was on collective action: whether any news or discussion could stir public outrage and lead to potential protests. These are still closely monitored, but a bigger target today is anything that doesn't align with mainstream political ideology. Anyone who is deemed unpatriotic or disrespectful of state leaders or state-designated heroes, or whose politics are considered to deviate from so-called "socialist core values," is damned. And then, of course, you can't joke about Xi. Before, corporate censors also monitored mockery of leaders, but the intensity of censorship then had nothing on how they treat Xi-related comments today.
