Policy

Big Tech is still fighting to curb California’s privacy law

Google, Pinterest and more are pushing California’s new privacy agency to narrow the California Privacy Rights Act before it goes into effect in January.


Tech companies are seizing on the chance to shape how the California Privacy Protection Agency defines automated decision-making.

Image: Tobias Tullius/Unsplash

California’s revamped privacy law, the California Privacy Rights Act, goes into effect in January 2023. The law, which passed as a ballot proposition in 2020, is the product of years of backroom battles between lawmakers, regulators, businesses and privacy advocates. But even now, Big Tech companies and their lobbyists are still working to narrow the law before it takes effect.

Everyone seemed to want their say in public comments released this week by California’s new privacy regulator, the California Privacy Protection Agency. Tech giants including Google and Pinterest, as well as top industry groups including TechNet and the Internet Association, urged the agency to issue regulations that would narrow the scope of CPRA. One of their top concerns is how the agency plans to define “automated decision-making,” which consumers can opt out of under the law. They also asked the agency to limit which companies must conduct annual cybersecurity audits.

CPRA gave the CPPA broad authority to implement and enforce the law and issue new regulations to go along with it. The agency is now weighing these and other comments as it decides how to handle what it called “new and undecided” issues contained in CPRA.

It’s no surprise that tech companies are seizing on the chance to shape how the agency defines automated decision-making. It’s a broad term that isn’t clearly defined in the law, but could implicate just about every tech company in the world — which is precisely what tech companies are arguing.

“Automated decisionmaking technology is not a universally defined term and could encompass a wide range of technology that has been broadly used for many decades, including spreadsheets and nearly all forms of software,” wrote Cameron Demetre, the California and Southwest executive director for TechNet, which represents Meta, Google, Apple and more.

Google in particular argued that the agency should focus its rules on “fully automated decisionmaking that produces legal effects or effects of a similar import, such as a consumer's eligibility for credit, employment, insurance, rental housing, or license or other government benefit.” Such a standard, the company argued, would bring California into alignment with Europe’s General Data Protection Regulation as well as Colorado and Virginia’s recently passed privacy laws, which both take effect in 2023. “These laws' focus on decisionmaking that has the potential to produce substantial harm is well-considered,” Google director of State Policy Cynthia Pantazis wrote.

Pinterest went so far as to argue that “any effort” to regulate automated decision-making, beyond decisions that have legal consequences, would be “overly broad.”

Privacy advocates are pushing the agency to take a wider view. In their joint comments, the Electronic Frontier Foundation, Common Sense Media, the ACLU of California and the National Fair Housing Alliance suggested that the agency should adopt a definition of automated decision-making put forward by Rashida Richardson, the White House’s current senior policy adviser for data and democracy.

Richardson’s definition is broader than what tech companies might want, but narrow enough so as not to encompass all technology. It focuses instead on systems that “aid or replace government decisions, judgments, and/or policy implementation that impact opportunities, access, liberties, rights, and/or safety.”

Beyond the definition of automated decision-making, tech companies also have concerns about how the agency will handle the part of CPRA that requires companies to undergo regular risk assessments and annual cybersecurity audits if they process consumer data in a way that “presents significant risk to consumers’ privacy or security.”

Right now, it’s unclear what constitutes “significant risk” or what types of companies will be required to submit to audits and assessments. In the comments, tech companies once again urged the agency to take a conservative approach. TechNet, for one, argued that companies should be able to do self-audits because third-party audits are “burdensome and expensive.” Google encouraged the agency to use California’s existing data-breach law as a guide when determining what data could pose a “significant risk.”

“[S]tate data breach reporting laws require businesses to report security breaches with respect to certain categories of information precisely because such information, in the wrong hands, may pose a significant risk to consumers' privacy and security,” Google’s Pantazis wrote.

The Internet Association, meanwhile, argued that data processing should only present a significant risk under the law if it could have a "legal or similarly significant effect" on people.

Tech companies have been fighting to shape California privacy law for years now, beginning with negotiations over the California Consumer Privacy Act in 2018. That work continued when Alastair Mactaggart, the driving force behind CCPA, decided to take another stab at the law and put CPRA forward as a ballot initiative in 2020 following a frenzied consultation process with large tech companies, privacy advocates and other business and consumer groups.

The passage of CPRA all but guaranteed a new round of jockeying among businesses and watchdogs, given the amount of discretion it gives to the new privacy agency. The new head of that agency, Ashkan Soltani, is no stranger to these debates: Soltani is a former chief technologist for the FTC and worked closely with Mactaggart during the development of both CCPA and CPRA. "California is leading the way when it comes to privacy rights and I'm honored to be able to serve its residents," Soltani said when he took the job. "I am eager to get to work to help build the agency's team and begin doing the work required by CCPA and the CPRA."

In addition to soliciting feedback, the agency will also hold informational hearings on these topics and others before beginning its formal rule-making process.


Climate

A pro-China disinformation campaign is targeting rare earth miners

It’s uncommon for coordinated disinformation operations to target private industry. But a new campaign has cast doubt on miners looking to gain a foothold in the West, in an apparent attempt to protect China’s upper hand in a market that has become increasingly vital.

It is very uncommon for coordinated disinformation operations to target private industry, rather than governments or civil society, a cybersecurity expert says.

Photo: Goh Seng Chong/Bloomberg via Getty Images

Just when we thought the renewable energy supply chains couldn’t get more fraught, a sophisticated disinformation campaign has taken to social media to further complicate things.

Known as Dragonbridge, the campaign has existed for at least three years, but in the last few months it has shifted its focus to target several mining companies “with negative messaging in response to potential or planned rare earths production activities.” Cybersecurity firm Mandiant, which first uncovered the operation, says it peddles narratives in China’s interest via a network of thousands of fake social media accounts.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Some of the most astounding tech-enabled advances of the next decade, from cutting-edge medical research to urban traffic control and factory floor optimization, will be enabled by a device often smaller than a thumbnail: the memory chip.

While vast amounts of data are created, stored and processed every moment — by some estimates, 2.5 quintillion bytes daily — the insights in that data are unlocked by the memory chips that hold and transfer it. “Memory will propel the next 10 years into the most transformative years in human history,” said Sanjay Mehrotra, president and CEO of Micron Technology.

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Fintech

Ripple’s CEO threatens to leave the US if it loses SEC case

CEO Brad Garlinghouse said a few countries have reached out to Ripple about relocating.

"There's no doubt that if the SEC doesn't win their case against us that that is good for crypto in the United States,” Brad Garlinghouse told Protocol.

Photo: Stephen McCarthy/Sportsfile for Collision via Getty Images

Ripple CEO Brad Garlinghouse said the crypto company will move to another country if it loses in its legal battle with the SEC.

Garlinghouse said he’s confident that Ripple will prevail against the federal regulator, which accused the company of failing to register roughly $1.4 billion in XRP tokens as securities.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

Policy

The Supreme Court’s EPA ruling is bad news for tech regulation, too

The justices just gave themselves a lot of discretion to smack down agency rules.

The ruling could also endanger work on competition issues by the FTC and net neutrality by the FCC.

Photo: Geoff Livingston/Getty Images

The Supreme Court’s decision last week gutting the Environmental Protection Agency’s ability to regulate greenhouse gas emissions didn’t just signal the conservative justices’ dislike of the Clean Air Act at a moment of climate crisis. It also served as a warning for anyone who would like to see more regulation of Big Tech.

At the heart of Chief Justice John Roberts’ decision in West Virginia v. EPA was a codification of the “major questions doctrine,” which, he wrote, requires “clear congressional authorization” when agencies want to regulate in areas of great “economic and political significance.”

Ben Brody

Ben Brody (@BenBrodyDC) is a senior reporter at Protocol focusing on how Congress, courts and agencies affect the online world we live in. He formerly covered tech policy and lobbying (including antitrust, Section 230 and privacy) at Bloomberg News, where he previously reported on the influence industry, government ethics and the 2016 presidential election. Before that, Ben covered business news at CNNMoney and AdAge, and all manner of stories in and around New York. He still loves appearing on the New York news radio he grew up with.

Enterprise

Microsoft and Google are still using emotion AI, but with limits

Microsoft said accessibility goals overrode problems with emotion recognition, and Google still offers off-the-shelf emotion recognition technology, amid growing concern over the controversial AI.

Emotion recognition is a well-established field of computer vision research; however, AI-based technologies used in an attempt to assess people’s emotional states have moved beyond the research phase.

Photo: Microsoft

Microsoft said last month it would no longer provide general use of an AI-based cloud software feature used to infer people’s emotions. However, despite its own admission that emotion recognition technology creates “risks,” it turns out the company will retain its emotion recognition capability in an app used by people with vision loss.

In fact, amid growing concerns over development and use of controversial emotion recognition in everyday software, both Microsoft and Google continue to incorporate the AI-based features in their products.

“The Seeing AI person channel enables you to recognize people and to get a description of them, including an estimate of their age and also their emotion,” said Saqib Shaikh, a Microsoft software engineering manager and project lead who helped build Seeing AI, in a 2017 tutorial video about the product.
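For readers curious what “off-the-shelf” emotion recognition looks like in practice, here is a minimal sketch using Google’s Cloud Vision face detection API, which returns likelihood scores for emotions such as joy, sorrow, anger and surprise. It illustrates the general kind of capability at issue, not the code behind Seeing AI or any particular Microsoft or Google product; the file name and credential setup are assumptions.

```python
# A minimal sketch of calling off-the-shelf emotion recognition via
# Google's Cloud Vision face detection. Illustrative only: "face.jpg"
# is a placeholder, and credentials are assumed to be configured via
# the GOOGLE_APPLICATION_CREDENTIALS environment variable.
from google.cloud import vision


def detect_emotions(path: str) -> None:
    """Print an emotion-likelihood estimate for each face found in an image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    response = client.face_detection(image=image)
    for i, face in enumerate(response.face_annotations):
        # Each likelihood is an enum ranging from VERY_UNLIKELY to VERY_LIKELY.
        print(f"face {i}:")
        print("  joy:", vision.Likelihood(face.joy_likelihood).name)
        print("  sorrow:", vision.Likelihood(face.sorrow_likelihood).name)
        print("  anger:", vision.Likelihood(face.anger_likelihood).name)
        print("  surprise:", vision.Likelihood(face.surprise_likelihood).name)


if __name__ == "__main__":
    detect_emotions("face.jpg")
```

Note that the API reports likelihoods for a handful of predefined expressions rather than a genuine read of someone’s inner state, which is precisely the gap critics of emotion recognition point to.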

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
