Bulletins

China releases new rules for algorithms

Xi Jinping
Image: Chairman of the Joint Chiefs of Staff / Protocol

China's cyberspace watchdog released a set of draft regulations on Friday aimed at restricting tech companies' usage of algorithmic recommendations.


With the new rules, the Cyberspace Administration of China is clearly responding to mounting concerns among Chinese consumers about privacy and algorithmically mediated gig work. But the draft also spells out the central internet censor's mandate to control public opinion.

"This policy marks the moment that China's tech regulation is not simply keeping pace with data regulations in the EU, but has gone beyond them," Kendra Schaefer, a data and tech expert at Trivium China, wrote on Twitter.

The draft rules require internet companies to allow users to block algorithmically curated information or personalized ads. And the CAC, following last Friday's passage of China's Personal Information Protection Law, explicitly bans the much-hated "big data ripoffs" (大数据杀熟): the practice of charging different prices or offering different treatment to different users based on the troves of consumer data companies collect. Such discriminatory automated decision-making can also trigger antitrust investigations.

The CAC also asks internet companies to protect delivery workers and drivers on car-hailing platforms by setting up and fine-tuning algorithms related to order dispatch, compensation structure, payments, work time and rewards. This is in direct response to the nationwide outcry over the plight of food delivery workers that started last summer.

One specific provision aims to control algorithmic recommendations that target minors. Companies are not allowed to send any content to underage users that may harm their mental and physical health, nor may they make recommendations that "induce internet addiction among minors."

But the draft rules also evince the CAC's strong desire to control public opinion and moderate social media content. Service providers are asked to "uphold mainstream value orientations" and optimize their recommendation algorithms to "disseminate positive energy." Tech companies may not enter "illegal or undesirable keywords as user interests or as user tags" and then "push information content accordingly," nor may they "set discriminatory or biased user labels." They are also required to give users autonomous choices and to build mechanisms that let humans manually curate hot topics, popular search terms, trending topic charts and pop-up windows to "vigorously present information content that conforms to mainstream value orientations."

The CAC drafted the new rules based on existing laws and regulations, old and new, that emphasize cybersecurity and the protection of personal information, including the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law. The internet regulator is seeking public comment on the draft until Sept. 26.

Protocol | Enterprise

Startups are pouncing as SaaS giants struggle in the intelligence race

Companies like Salesforce and Workday spent the last two decades building walled gardens around their systems. Now, it's a mad dash to make those ecosystems more open.

Companies want to predict the future, and "systems of intelligence" might be their best bet.

Image: Yuichiro Chino / Getty Images

Take a look at any software vendor's marketing materials and you're sure to see some variation of the word "intelligence" splattered everywhere.

It's part of a tectonic shift happening within enterprise technology. Companies spent the last several years moving their systems to the internet and, along the way, rapidly adopting new applications.

Joe Williams



Protocol | Workplace

The hottest new perk in tech: A week off for burnout recovery

In an industry where long hours are a "badge of honor," a week of rest may be the best way to retain talent.

Tech companies are giving their employees a week to rest and recover from burnout.

Photo: Kinga Cichewicz/Unsplash

In early May, the founder of Lessonly, a company that makes training software, sent a companywide email issuing a mandate. But it wasn't the sort of mandate employees around the world have been receiving about vaccines and masks: This one required every worker to take an entire week off in July.

The announcement took Lessonly's staff by surprise. "We had employees reach out and share that they were emotional, just thankful that they had the opportunity to do this," said Megan Jarvis, who leads the company's talent team and worked on planning the week off.

Aisha Counts
Power

Chip costs are rising. How will that affect gadget prices?

The global chip shortage is causing component costs to go up, so hardware makers are finding new ways to keep their prices low.

Chips are getting more expensive, but most consumer electronics companies have so far resisted price increases.

Photo: Chris Hondros/Getty Images

How do you get people to pay more for your products while avoiding sticker shock? That's a question consumer electronics companies are grappling with as worldwide chip shortages and component cost increases are squeezing their bottom lines.

One way to do it: Make more expensive and higher-margin products seem like a good deal to customers.

Janko Roettgers


Protocol | Policy

Laws want humans to check biased AI. Research shows they can’t.

Policymakers want people to oversee — and override — biased AI. But research suggests there's no evidence to prove humans are up to the task.

The recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

Photo: Jackal Pan/Getty Images

There was a time, not long ago, when a certain brand of technocrat could argue with a straight face that algorithms are less biased decision-makers than human beings — and not be laughed out of the room. That time has come and gone, as the perils of AI bias have entered mainstream awareness.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own. The problem is: These laws almost never stop to ask whether human beings are actually up to the job.

Issie Lapowsky

