Big Tech is still fighting to curb California’s privacy law

Google, Pinterest and more are pushing California’s new privacy agency to narrow the California Privacy Rights Act before it goes into effect in January.


Tech companies are seizing on the chance to shape how the California Privacy Protection Agency defines automated decision-making.

Image: Tobias Tullius/Unsplash

California’s revamped privacy law, the California Privacy Rights Act, goes into effect in January 2023. The law, which passed by ballot proposition in 2020, is the product of years of backroom battles between lawmakers, regulators, businesses and privacy advocates. But even after all these years, it seems Big Tech companies and their lobbyists are still working to limit the law before it’s too late.

Everyone seemed to want to have their say in public comments released this week by California’s new privacy regulator, the California Privacy Protection Agency. Tech giants including Google and Pinterest, as well as top industry groups including TechNet and the Internet Association, urged the agency to issue regulations that would narrow the scope of CPRA. One of their top concerns is how the agency plans to define “automated decision-making,” which consumers can opt out of under the law. They also asked the agency to limit which companies have to conduct annual cybersecurity audits.

CPRA gave the CPPA broad authority to implement and enforce the law and to issue accompanying regulations. The agency is now weighing these and other comments as it decides how to handle what it called “new and undecided” issues contained in CPRA.

It’s no surprise that tech companies are seizing on the chance to shape how the agency defines automated decision-making. It’s a broad term that isn’t clearly defined in the law, but could implicate just about every tech company in the world — which is precisely what tech companies are arguing.

“Automated decisionmaking technology is not a universally defined term and could encompass a wide range of technology that has been broadly used for many decades, including spreadsheets and nearly all forms of software,” wrote Cameron Demetre, the California and Southwest executive director for TechNet, which represents Meta, Google, Apple and more.

Google in particular argued that the agency should focus its rules on “fully automated decisionmaking that produces legal effects or effects of a similar import, such as a consumer's eligibility for credit, employment, insurance, rental housing, or license or other government benefit.” Such a standard, the company argued, would bring California into alignment with Europe’s General Data Protection Regulation as well as Colorado and Virginia’s recently passed privacy laws, which both take effect in 2023. “These laws' focus on decisionmaking that has the potential to produce substantial harm is well-considered,” Google director of State Policy Cynthia Pantazis wrote.

Pinterest went so far as to argue that “any effort” to regulate automated decision-making, beyond decisions that have legal consequences, would be “overly broad.”

Privacy advocates are pushing the agency to take a wider view. In their joint comments, the Electronic Frontier Foundation, Common Sense Media, the American Civil Liberties Union in California and the National Fair Housing Alliance suggested that the agency should adopt a definition of automated decision-making put forward by Rashida Richardson, the White House’s current senior policy adviser for data and democracy.

Richardson’s definition is broader than what tech companies might want, but narrow enough so as not to encompass all technology. It focuses instead on systems that “aid or replace government decisions, judgments, and/or policy implementation that impact opportunities, access, liberties, rights, and/or safety.”

Beyond the definition of automated decision-making, tech companies are also concerned about how the agency will handle the part of CPRA that requires companies to undergo regular risk assessments and annual cybersecurity audits if they process consumer data in a way that “presents significant risk to consumers’ privacy or security.”

Right now, it’s unclear what constitutes “significant risk” or what types of companies will be required to submit to audits and assessments. In the comments, tech companies once again urged the agency to take a conservative approach. TechNet, for one, argued that companies should be able to do self-audits because third-party audits are “burdensome and expensive.” Google encouraged the agency to use California’s existing data-breach law as a guide when determining what data could pose a “significant risk.”

“[S]tate data breach reporting laws require businesses to report security breaches with respect to certain categories of information precisely because such information, in the wrong hands, may pose a significant risk to consumers' privacy and security,” Google’s Pantazis wrote.

The Internet Association, meanwhile, argued that data processing should only present a significant risk under the law if it could have a "legal or similarly significant effect" on people.

Tech companies have been fighting to shape California privacy law for years now, beginning with negotiations over the California Consumer Privacy Act in 2018. That work continued when Alastair Mactaggart, the driving force behind CCPA, decided to take another stab at the law and put CPRA forward as a ballot initiative in 2020 following a frenzied consultation process with large tech companies, privacy advocates and other business and consumer groups.

The passage of CPRA all but guaranteed a new round of jockeying among businesses and watchdogs, given the amount of discretion it gives to the new privacy agency. The new head of that agency, Ashkan Soltani, is no stranger to these debates: Soltani is a former chief technologist for the FTC and worked closely with Mactaggart during the development of both CCPA and CPRA. "California is leading the way when it comes to privacy rights and I'm honored to be able to serve its residents," Soltani said when he took the job. "I am eager to get to work to help build the agency's team and begin doing the work required by CCPA and the CPRA."

In addition to soliciting feedback, the agency will also hold informational hearings on these topics and others before beginning its formal rule-making process.





How GM plans to make its ambitious EV goals reality

The automaker's chief sustainability officer is optimistic that GM is well-positioned to rapidly scale up the EV side of its business.

"I think everything that’s been put in place to support the transition will be a real positive for the industry and for the country."

Photo: Eva Marie Uzcategui/Bloomberg via Getty Images

Automakers are on the cusp of an entirely new era.

The transition to electric vehicles is quickly becoming more than just theoretical: More models are coming onto the scene every day. This week, the Inflation Reduction Act was signed into law, enshrining a new structure for EV tax credits and offering a boost to domestic critical mineral mining. The transition isn’t coming a moment too soon, given that the transportation sector makes up the largest share of greenhouse gas emissions in the U.S.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

As management teams at financial institutions look for best practices to make part of their regular toolkit, they are reaching most for the ones that increase the speed and reduce the risk of large-scale change.

That forward-thinking approach can lead financial institutions to leverage AI technology, which can help give decision-makers trusted tools to solve integral challenges vital to the health of the business. One of the leading providers of AI and machine-learning software, DataRobot continues to attract clients in financial services who want to de-risk their AI investments and rapidly scale AI to almost every part of their operations, resulting in improved productivity and higher customer satisfaction.

David Silverberg
David Silverberg is a Toronto-based freelance journalist, editor and writing coach. He writes for The Washington Post, BBC News, Business Insider, The Toronto Star, New Scientist, Fodor's, and several alumni magazines. He also writes for brands such as 23andme, Shopify and Bold Commerce. He has served as editor of B2B News Network, Canada's only B2B news magazine, and Digital Journal, a leading pioneer in citizen journalism. Find more about him at www.davidsilverberg.ca

How Embracer Group bought ‘Lord of the Rings’ rights for a bargain

The Swedish holding company, known best for its gaming acquisitions, bought the rights to “The Lord of the Rings.” But the deal is much more complicated than it seems.

Who really owns LOTR's rights?

Photo: New Line/WireImage

A new stakeholder has entered the complex licensing web of “The Lord of the Rings,” and the landmark deal has further complicated the already messy media empire surrounding author J.R.R. Tolkien’s fantasy epic.

The buyer, the acquisition-hungry Swedish gaming conglomerate known as Embracer Group, has purchased Middle-earth Enterprises, and with it the associated film, video game, board game, merchandise, theater production and theme park rights to the core LOTR book trilogy and “The Hobbit” from its previous owner, The Saul Zaentz Company. Formerly Tolkien Enterprises, Zaentz’s holding group has held onto the rights since purchasing them from United Artists in 1976. (Tolkien initially sold them to UA in 1969, four years before his death.)

Nick Statt

Nick Statt is Protocol's video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.


Upstart has a new plan to sell Wall Street on its loans

The AI-powered lender will hold some loans on its balance sheet as it seeks partners for long-term capital.

Despite the current struggles, Upstart views the marketplace model as the best way to keep its loan business growing.

Photo: Upstart

After a revenue drop its CEO called “unacceptable,” the leadership at fintech lender Upstart is making a bet on the strength of its ability to underwrite loans with AI.

The San Mateo company is planning to leave some loans on its balance sheet that investors do not want to buy, as concerns about the economy shift Wall Street away from backing riskier consumer debt. Rather than pull back on its lending in response, the company said it will hold some loans as it seeks longer-term capital partners.

Ryan Deffenbaugh
Ryan Deffenbaugh is a reporter at Protocol focused on fintech. Before joining Protocol, he reported on New York's technology industry for Crain's New York Business. He is based in New York and can be reached at rdeffenbaugh@protocol.com.

Does your boss sound a little funny? It might be an audio deepfake

Voice deepfake attacks against enterprises, often aimed at tricking corporate employees into transferring money to the attackers, are on the rise. And at least in some cases, they’re succeeding.

Audio deepfakes are a new spin on the impersonation tactics that have long been used in social engineering and phishing attacks, but most people aren’t trained to disbelieve their ears.

Illustration: Christopher T. Fong/Protocol

As a cyberattack investigator, Nick Giacopuzzi’s work now includes responding to growing attacks against businesses that involve deepfaked voices — and has ultimately left him convinced that in today's world, "we need to question everything."

In particular, Giacopuzzi has investigated multiple incidents where an attacker deployed fabricated audio, created with the help of AI, that purported to be an executive or a manager at a company. You can guess how it went: The fake boss asked an employee to urgently transfer funds. And in some cases, it’s worked, he said.

Kyle Alspach

Kyle Alspach (@KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.
