Policy

The FTC’s new enforcement weapon spells death for algorithms

The FTC may have found a new standard for penalizing tech companies that violate privacy and use deceptive data practices: algorithmic destruction.

Forcing companies to delete algorithmic systems built with ill-gotten data could become a more routine approach.

The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices using its limited set of enforcement options. Now, it’s landed on one that could have a big impact on tech companies: algorithmic destruction. And as the agency gets more aggressive on tech by slowly rolling out this new type of penalty, its third application in a settlement in three years could be the charm.

In a March 4 settlement order, the agency demanded that WW International — formerly known as Weight Watchers — destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission. The agency also fined the company $1.5 million and ordered it to delete the illegally harvested data.

When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.
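The logic behind disgorging models rather than just data can be seen in a minimal sketch (not from the article; the "model" and data values here are hypothetical): once a model has been fit, deleting the raw training data does not delete the value extracted from it.

```python
# Illustrative sketch: why ordering deletion of raw data alone is an
# incomplete remedy. A model trained on the data retains what it
# learned even after the data itself is destroyed.

def train_mean_model(values):
    """'Train' a trivial model: remember the average of the data."""
    return sum(values) / len(values)

# Hypothetical ill-gotten data points.
collected_data = [8.0, 10.0, 12.0, 9.0, 11.0]
model = train_mean_model(collected_data)

# Deleting the data does not delete the learned parameter.
del collected_data
print(model)  # prints 10.0: the extracted value survives the deletion
```

Algorithmic disgorgement targets the second artifact, the trained model, precisely because it outlives the data it was built from.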

A slow rollout

The FTC first used the approach in 2019, amid scandalous headlines that exposed Facebook’s privacy vulnerabilities and brought down political data and campaign consultancy Cambridge Analytica. The agency called on Cambridge Analytica to destroy the data it had gathered about Facebook users through deceptive means along with “information or work product, including any algorithms or equations” built using that data.

It was another two years before algorithmic disgorgement came around again when the commission settled a case with photo-sharing app company Everalbum. The company was charged with using facial recognition in its Ever app to detect people’s identities in images without allowing users to turn it off, and for using photos uploaded through the app to help build its facial recognition technology.

In that case, the commission told Everalbum to destroy the photos, videos and facial and biometric data it gleaned from app users and to delete products built using it, including “any models or algorithms developed in whole or in part” using that data.

Technically speaking, the term “algorithm” can cover any piece of code that can make a software application do a set of actions, said Krishna Gade, founder and CEO of AI monitoring software company Fiddler. When it comes to AI specifically, the term usually refers to an AI model or machine-learning model, he said.
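Gade's distinction can be sketched in a few lines (an illustrative example, not drawn from the article): a hand-written algorithm's behavior comes entirely from its code, while a machine-learning model's behavior comes from parameters fitted to data.

```python
# A hand-written algorithm: behavior fully specified by the code.
def fahrenheit_to_celsius(f):
    return (f - 32) * 5 / 9

# A machine-learning "model" in miniature: behavior comes from
# parameters learned from data, here a one-variable least-squares fit.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x  # (slope, intercept)

# Fitting on two (Celsius, Fahrenheit) data points "learns" the
# conversion rule instead of hard-coding it.
slope, intercept = fit_line([0.0, 100.0], [32.0, 212.0])
```

Destroying the first requires deleting code; destroying the second, as in the FTC orders, means deleting parameters that embody the training data.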

Clearing the way for algorithmic destruction inside the FTC

It hasn’t always been clear that the FTC might use algorithmic disgorgement more regularly.

“Cambridge Analytica was a good decision, but I wasn’t certain that that was going to become a pattern,” Pam Dixon, executive director of World Privacy Forum, said regarding the requirement for the company to delete its algorithmic models. Now, Dixon said, algorithmic disgorgement will likely become a standard enforcement mechanism, just like monetary fines. “This is definitely now to be expected whenever it is applicable or the right decision,” she said.

The winds inside the FTC seem to be shifting. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” former FTC Commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, wrote in a statement related to the Everalbum case. He said requiring the company to “forfeit the fruits of its deception” was “an important course correction.”

“If Ever meant a course correction, Kurbo means full speed ahead,” said Jevan Hutson, associate at Hintze Law, a data privacy and security law firm.

FTC Commissioner Rebecca Slaughter has been a vocal supporter of algorithmic destruction as a way to penalize companies for unfair and deceptive data practices. In a Yale Journal of Law and Technology article published last year, she and FTC lawyers Janice Kopec and Mohamad Batal highlighted it as a tool the FTC could use to foster economic and algorithmic justice.

“The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it,” they wrote. “The authority to seek this type of remedy comes from the Commission’s power to order relief reasonably tailored to the violation of the law. This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”

Indeed, some believe the threat to intellectual property value and tech product viability could make companies think twice about using data collected through unscrupulous means. “Big fines are the cost of doing business. Algorithmic disgorgement traced to illicit data collection/processing is an actual deterrent,” David Carroll, an associate professor of media design at The New School’s Parsons School of Design, said in a tweet. Carroll sued Cambridge Analytica in Europe to obtain his 2016 voter profile data from the now-defunct company.

Forecast: Future privacy use

When people sign up to use the Kurbo healthy eating app, they can choose a fruit or vegetable-themed avatar such as an artichoke, pea pod or pineapple. In exchange for health coaching and help tracking food intake and exercise, the app requires personal information about its users such as age, gender, height, weight and their food and exercise choices, data the company also used to improve the app.

In its case against WW, the FTC said that until late 2019, Kurbo users could sign up for the service either by indicating that they were a parent signing up for their child or that they were over the age of 13 and registering for themselves. The agency said the company failed to ensure that the people signing up were actually parents or adult guardians rather than kids pretending to be adults. It also said that from 2014 to 2019, hundreds of users who signed up for the app originally claiming they were over age 13 later changed their profile birth dates to indicate they were actually under 13, but continued to have access to the app.

The fact that algorithmic disgorgement was used by the FTC in relation to one of the country’s only existing federal privacy laws could be a sign that it will be used again, legal and policy experts said. While the Cambridge Analytica and Everalbum cases charged those companies with violating the FTC Act, the Kurbo case added an important wrinkle, alleging that WW violated both the FTC Act and the Children’s Online Privacy Protection Act (COPPA). Both are important pieces of legislation under which the agency can bring consumer protection cases against businesses.

“This means that for any organization that has collected data illegally under COPPA, that data is at risk and the models built on top of it are at risk for disgorgement,” Hutson said.

The use of COPPA could be a foundational precedent paving the way for the FTC to require destruction of algorithmic models under future legislation, such as a would-be comprehensive federal privacy law. “It stands to reason it would be leveraged in any other arena where the FTC has enforcement authority under legislation,” Hutson said.

Application of algorithmic disgorgement in the COPPA context is “a clear jurisdiction and trigger of enforcement through a law that exists and explicitly protects kids’ data, [so] if there was a corollary law for everyone it would allow the FTC to enforce in this way for companies that are not just gathering kids’ data,” said Ben Winters, a counsel for the Electronic Privacy Information Center.

He added, “It shows it would be really great if we had a privacy law for everybody, in addition to kids.”
