California’s revamped privacy law, the California Privacy Rights Act, goes into effect in January 2023. The law, which passed by ballot proposition in 2020, is the product of years of backroom battles between lawmakers, regulators, businesses and privacy advocates. But even after all these years, it seems Big Tech companies and their lobbyists are still working to limit the law before it’s too late.
Everyone seemed to want to have their say in public comments released this week by California’s new privacy regulator, the California Privacy Protection Agency. Tech giants including Google and Pinterest, as well as top industry groups including TechNet and the Internet Association, urged the agency to issue regulations that would narrow the scope of CPRA. One of their top concerns is how the agency plans to define “automated decision making,” which consumers can opt out of under the law. They also asked the agency to limit which companies have to conduct the annual cybersecurity audits the law requires.
CPRA gave the CPPA broad authority to implement and enforce the law and issue new regulations to go along with it. The agency is now weighing these and other comments as it decides how to handle what it called “new and undecided” issues contained in CPRA.
It’s no surprise that tech companies are seizing on the chance to shape how the agency defines automated decision-making. It’s a broad term that isn’t clearly defined in the law, but could implicate just about every tech company in the world — which is precisely what tech companies are arguing.
“Automated decisionmaking technology is not a universally defined term and could encompass a wide range of technology that has been broadly used for many decades, including spreadsheets and nearly all forms of software,” wrote Cameron Demetre, the California and Southwest executive director for TechNet, which represents Meta, Google, Apple and more.
Google in particular argued that the agency should focus its rules on “fully automated decisionmaking that produces legal effects or effects of a similar import, such as a consumer's eligibility for credit, employment, insurance, rental housing, or license or other government benefit.” Such a standard, the company argued, would bring California into alignment with Europe’s General Data Protection Regulation as well as Colorado and Virginia’s recently passed privacy laws, which both take effect in 2023. “These laws' focus on decisionmaking that has the potential to produce substantial harm is well-considered,” Google director of State Policy Cynthia Pantazis wrote.
Pinterest went so far as to argue that “any effort” to regulate automated decision-making, beyond decisions that have legal consequences, would be “overly broad.”
Privacy advocates are pushing the agency to take a wider view. In their joint comments, the Electronic Frontier Foundation, Common Sense Media, the American Civil Liberties Union in California and the National Fair Housing Alliance suggested that the agency should adopt a definition of automated decision-making put forward by Rashida Richardson, the White House’s current senior policy adviser for data and democracy.
Richardson’s definition is broader than what tech companies might want, but narrow enough not to encompass all technology. It focuses instead on systems that “aid or replace government decisions, judgments, and/or policy implementation that impact opportunities, access, liberties, rights, and/or safety.”
Beyond the definition of automated decision-making, tech companies also have concerns about how the agency will handle the part of CPRA that requires companies to undergo regular risk assessments and annual cybersecurity audits if they process consumer data in a way that “presents significant risk to consumers’ privacy or security.”
Right now, it’s unclear what constitutes “significant risk” or what types of companies will be required to submit to audits and assessments. In the comments, tech companies once again urged the agency to take a conservative approach. TechNet, for one, argued that companies should be able to do self-audits because third-party audits are “burdensome and expensive.” Google encouraged the agency to use California’s existing data-breach law as a guide when determining what data could pose a “significant risk.”
“[S]tate data breach reporting laws require businesses to report security breaches with respect to certain categories of information precisely because such information, in the wrong hands, may pose a significant risk to consumers' privacy and security,” Google’s Pantazis wrote.
The Internet Association, meanwhile, argued that data processing should only present a significant risk under the law if it could have a "legal or similarly significant effect" on people.
Tech companies have been fighting to shape California privacy law for years now, beginning with negotiations over the California Consumer Privacy Act in 2018. That work continued when Alastair Mactaggart, the driving force behind CCPA, decided to take another stab at the law and put CPRA forward as a ballot initiative in 2020 following a frenzied consultation process with large tech companies, privacy advocates and other business and consumer groups.
The passage of CPRA all but guaranteed a new round of jockeying among businesses and watchdogs, given the amount of discretion it gives to the new privacy agency. The new head of that agency, Ashkan Soltani, is no stranger to these debates: Soltani is a former chief technologist for the FTC and worked closely with Mactaggart during the development of both CCPA and CPRA. "California is leading the way when it comes to privacy rights and I'm honored to be able to serve its residents," Soltani said when he took the job. "I am eager to get to work to help build the agency's team and begin doing the work required by CCPA and the CPRA."
In addition to soliciting feedback, the agency will also hold informational hearings on these topics and others before beginning its formal rule-making process.