China

China could soon have stronger privacy laws than the U.S.

Really — at least, when it comes to corporations collecting data. A major draft law could end the free-for-all in the world's largest market.

Back to work and back to surveillance in China.

Photo: Getty Images

International observers often think of China as a place where privacy protections are thin or nonexistent. But a forthcoming law could arm Chinese consumers with offensive and defensive tools that web users in places like the United States could only dream of.

In late April, China unveiled the second draft of the country's privacy law, the Personal Information Protection Law, for public comment. The law is expected to pass by the end of the year, and would shield Chinese internet users from excessive data collection and misuse of personal data by tech companies — and even, to some extent, by the government.

The new law, similar to the European Union's General Data Protection Regulation, will give individuals the power to know how their personal data is being used and to consent to it.

"It's a good law," Jeremy Daum, a senior fellow of the Yale Law School Paul Tsai China Center, told Protocol. "We tend to think of China as not being overly concerned with privacy, and that's just wrong … There's a growing expectation of privacy in the Chinese public, and the government is responding to it by passing high-level authority to try and ensure some protections."

Several of the provisions in the draft could change the experience of Chinese web users nationwide.

Privacy committees come to China

One new provision requires big internet platforms — including WeChat, Douyin and Taobao — to each form a committee, staffed predominantly by non-company employees, that supervises the handling of personal information. Tech companies will have to stop providing services that misuse personal information. The big platforms are also required to publish periodic reports about information protection.

This model is not new outside China. Facebook, for example, was required by the Federal Trade Commission to form an independent privacy committee in 2019. But Chinese tech companies will for the first time be required to establish an outside board to review their handling of consumer data, especially sensitive biometric data.

"It gives consumers an added level of protection," Daum said. "You're going to have somebody besides the government and besides the company itself looking at how the company is using personal information, making sure they're doing it as they said in their user agreements for the purposes that they state."

Personal information as inheritance

The law also adds novel protections for "post-mortem privacy." When a person dies, their family inherits the rights to handle the decedent's personal information, which could include the power to delete their account.

Alexa Lee, senior manager for global cyber and privacy policy with the Information Technology Industry Council and an associate editor of DigiChina, told Protocol that privacy laws globally rarely address the rights to personal information of the deceased. "This is a very unique addition because if you look at all the global privacy laws, only France has one on this," Lee said. "No other privacy laws have something like this, not even GDPR."

Bye-bye, personalized recommendations

Under the new Personal Information Protection Law, users can elect to block algorithmically curated information or personalized ads. This means that if Taobao or Douyin recommends merchants or video clips to users, by default the platforms must provide options that aren't tailored to the user based on past and present engagement. If companies want to serve ads based on personal information "with a major influence on the rights and interests of the individual," the app or site must get affirmative user consent first.

Consent. Consent. Consent.

Beyond these new provisions, one big theme throughout the draft law is ensuring consumers' right to consent and their right to know how their information is being used. Users also have the right to withdraw that consent at any time, and service providers are required by law to offer a streamlined withdrawal process.

The draft law spells out concrete applications of the "minimum necessary" standard, meaning tech companies can collect personal information only when it's necessary for the services being provided.

But getting consent is tricky. Users don't always understand the terms and privacy notices to which they're agreeing. "It will be a really long process for consumers to get used to that, and it will also be work for the company themselves to figure out how they can streamline and make this notice easier for the consumer to read," Lee said. Otherwise, Lee said, the process will generate consent fatigue.
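To make the compliance logic described above concrete, here is a minimal, purely illustrative sketch of consent-gated personalization with one-call withdrawal. This is not any real platform's implementation; all class and function names here are hypothetical, and the "purpose" string is an assumed label for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical consent store: one opt-in flag per (user, purpose)."""
    _grants: dict = field(default_factory=dict)

    def grant(self, user_id: str, purpose: str) -> None:
        """Record an affirmative, purpose-specific opt-in."""
        self._grants[(user_id, purpose)] = True

    def withdraw(self, user_id: str, purpose: str) -> None:
        """Withdrawal must be as streamlined as granting: a single call."""
        self._grants.pop((user_id, purpose), None)

    def allows(self, user_id: str, purpose: str) -> bool:
        """Default is no consent; only an explicit grant returns True."""
        return self._grants.get((user_id, purpose), False)

def recommend(registry: ConsentRegistry, user_id: str,
              personalized: list, generic: list) -> list:
    """Serve the tailored feed only with affirmative consent;
    otherwise fall back to a non-personalized default."""
    if registry.allows(user_id, "personalized_recommendations"):
        return personalized
    return generic
```

The key design point, mirroring the draft law's requirements: the non-personalized path is the default, consent is an explicit per-purpose opt-in rather than a blanket agreement, and withdrawing is no harder than granting.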

Limits on the state

The new law contains a whole chapter regulating the use of personal information by state organs, though it has fewer teeth than the protections against corporate abuse.

The draft law stipulates that when handling personal information, the state "may not exceed the scope or extent necessary to fulfill their statutory duties and responsibilities." And state organs, like corporations and individuals, have to inform individuals and get consent from them when handling their personal information. But there are broad exceptions. The government doesn't have to inform individuals about data collection when doing so would undermine the purpose of the collection — for example, in a criminal investigation.

"Of course, that can get abused," Daum said, "because how do you know to question their use if you don't even know that they're using it? But the fact that there is even a section on state organ use of personal information shows they're really considering who should be allowed to use this and when."

Climate

A pro-China disinformation campaign is targeting rare earth miners

It’s uncommon for disinformation campaigns to target private industry. But a new operation has sought to discredit miners looking to gain a foothold in the West, in an apparent attempt to protect China’s upper hand in a market that has become increasingly vital.

It is very uncommon for coordinated disinformation operations to target private industry, rather than governments or civil society, a cybersecurity expert says.

Photo: Goh Seng Chong/Bloomberg via Getty Images

Just when we thought the renewable energy supply chains couldn’t get more fraught, a sophisticated disinformation campaign has taken to social media to further complicate things.

Known as Dragonbridge, the campaign has existed for at least three years, but in the last few months it has shifted its focus to target several mining companies “with negative messaging in response to potential or planned rare earths production activities.” It was initially uncovered by cybersecurity firm Mandiant and peddles narratives in the Chinese interest via its network of thousands of fake social media accounts.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Some of the most astounding tech-enabled advances of the next decade, from cutting-edge medical research to urban traffic control and factory floor optimization, will be enabled by a device often smaller than a thumbnail: the memory chip.

While vast amounts of data are created, stored and processed every moment — by some estimates, 2.5 quintillion bytes daily — the insights in that code are unlocked by the memory chips that hold it and transfer it. “Memory will propel the next 10 years into the most transformative years in human history,” said Sanjay Mehrotra, president and CEO of Micron Technology.

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Fintech

Ripple’s CEO threatens to leave the US if it loses SEC case

CEO Brad Garlinghouse said a few countries have reached out to Ripple about relocating.

"There's no doubt that if the SEC doesn't win their case against us that that is good for crypto in the United States,” Brad Garlinghouse told Protocol.

Photo: Stephen McCarthy/Sportsfile for Collision via Getty Images

Ripple CEO Brad Garlinghouse said the crypto company will move to another country if it loses in its legal battle with the SEC.

Garlinghouse said he’s confident that Ripple will prevail against the federal regulator, which accused the company of failing to register roughly $1.4 billion in XRP tokens as securities.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

Policy

The Supreme Court’s EPA ruling is bad news for tech regulation, too

The justices just gave themselves a lot of discretion to smack down agency rules.

The ruling could also endanger work on competition issues by the FTC and net neutrality by the FCC.

Photo: Geoff Livingston/Getty Images

The Supreme Court’s decision last week gutting the Environmental Protection Agency’s ability to regulate greenhouse gas emissions didn’t just signal the conservative justices’ dislike of the Clean Air Act at a moment of climate crisis. It also served as a warning for anyone who would like to see more regulation of Big Tech.

At the heart of Chief Justice John Roberts’ decision in West Virginia v. EPA was a codification of the “major questions doctrine,” which, he wrote, requires “clear congressional authorization” when agencies want to regulate on areas of great “economic and political significance.”

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.

Enterprise

Microsoft and Google are still using emotion AI, but with limits

Microsoft said accessibility goals outweighed the problems with emotion recognition, and Google still offers off-the-shelf emotion recognition technology, amid growing concern over the controversial AI.

Emotion recognition is a well-established field of computer vision research; however, AI-based technologies used in an attempt to assess people’s emotional states have moved beyond the research phase.

Photo: Microsoft

Microsoft said last month it would no longer provide general use of an AI-based cloud software feature used to infer people’s emotions. However, despite its own admission that emotion recognition technology creates “risks,” it turns out the company will retain its emotion recognition capability in an app used by people with vision loss.

In fact, amid growing concerns over development and use of controversial emotion recognition in everyday software, both Microsoft and Google continue to incorporate the AI-based features in their products.

“The Seeing AI person channel enables you to recognize people and to get a description of them, including an estimate of their age and also their emotion,” said Saqib Shaikh, a software engineering manager and project lead for Seeing AI at Microsoft who helped build the app, in a tutorial about the product in a 2017 Microsoft video.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
