Policy

The FTC’s new enforcement weapon spells death for algorithms

The FTC may have found a new way to penalize tech companies that violate privacy and use deceptive data practices: algorithmic destruction.

Forcing companies to delete algorithmic systems built with ill-gotten data could become a more routine approach.

Illustration: CreepyCube/iStock/Getty Images Plus; Protocol

The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices with its limited set of enforcement options. Now it has landed on one that could have a big impact on tech companies: algorithmic destruction. The agency has been introducing the penalty slowly as it gets more aggressive on tech, and its third use in a settlement in three years could be the charm.

In a March 4 settlement order, the agency demanded that WW International — formerly known as Weight Watchers — destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission. The agency also fined the company $1.5 million and ordered it to delete the illegally harvested data.

When it comes to today’s data-centric business models, algorithmic systems and the data used to build and train them are intellectual property, products that are core to how many companies operate and generate revenue. While in the past the FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.

A slow rollout

The FTC first used the approach in 2019, amid scandalous headlines that exposed Facebook’s privacy vulnerabilities and brought down political data and campaign consultancy Cambridge Analytica. The agency called on Cambridge Analytica to destroy the data it had gathered about Facebook users through deceptive means along with “information or work product, including any algorithms or equations” built using that data.

It was another two years before algorithmic disgorgement came around again when the commission settled a case with photo-sharing app company Everalbum. The company was charged with using facial recognition in its Ever app to detect people’s identities in images without allowing users to turn it off, and for using photos uploaded through the app to help build its facial recognition technology.

In that case, the commission told Everalbum to destroy the photos, videos and facial and biometric data it gleaned from app users and to delete products built using it, including “any models or algorithms developed in whole or in part” using that data.

Technically speaking, the term “algorithm” can cover any piece of code that can make a software application do a set of actions, said Krishna Gade, founder and CEO of AI monitoring software company Fiddler. When it comes to AI specifically, the term usually refers to an AI model or machine-learning model, he said.
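To make Gade's distinction concrete, here is a minimal Python sketch; the data, names and numbers are invented for illustration and are not drawn from any company discussed here. A rule-based algorithm encodes its logic by hand, while a machine-learning model derives its behavior from training data, which is why deleting only the data would leave a model's learned value intact.

```python
from sklearn.linear_model import LogisticRegression

# An "algorithm" in the broad sense: any code that maps inputs to actions.
# Its logic is written by hand and owes nothing to collected data.
def age_gate(age: int) -> bool:
    """Rule-based algorithm: returns True if the user clears a minimum age."""
    return age >= 13

# An "algorithm" in the AI sense Gade describes: a model whose behavior is
# learned from training data. Even after the raw data is deleted, the model
# still encodes patterns derived from it, which is why disgorgement orders
# target the models as well as the data. (Hypothetical toy data below.)
ages = [[8], [10], [12], [25], [40]]   # hypothetical training feature: user age
labels = [1, 1, 1, 0, 0]               # hypothetical label: 1 = child account

model = LogisticRegression().fit(ages, labels)
print(model.predict([[11]]))           # behavior learned from the data above
```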

Clearing the way for algorithmic destruction inside the FTC

It wasn’t always clear that the FTC would use algorithmic disgorgement regularly.

“Cambridge Analytica was a good decision, but I wasn’t certain that that was going to become a pattern,” Pam Dixon, executive director of the World Privacy Forum, said regarding the requirement for the company to delete its algorithmic models. Now, Dixon said, algorithmic disgorgement will likely become a standard enforcement mechanism, just like monetary fines. “This is definitely now to be expected whenever it is applicable or the right decision,” she said.

The winds inside the FTC seem to be shifting. “Commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that derive much of their value from ill-gotten data,” former FTC Commissioner Rohit Chopra, now director of the Consumer Financial Protection Bureau, wrote in a statement related to the Everalbum case. He said requiring the company to “forfeit the fruits of its deception” was “an important course correction.”

“If Ever meant a course correction, Kurbo means full speed ahead,” said Jevan Hutson, associate at Hintze Law, a data privacy and security law firm.

FTC Commissioner Rebecca Slaughter has been a vocal supporter of algorithmic destruction as a way to penalize companies for unfair and deceptive data practices. In a Yale Journal of Law and Technology article published last year, she and FTC lawyers Janice Kopec and Mohamad Batal highlighted it as a tool the FTC could use to foster economic and algorithmic justice.

“The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it,” they wrote. “The authority to seek this type of remedy comes from the Commission’s power to order relief reasonably tailored to the violation of the law. This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”

Indeed, some believe the threat to intellectual property value and tech product viability could make companies think twice about using data collected through unscrupulous means. “Big fines are the cost of doing business. Algorithmic disgorgement traced to illicit data collection/processing is an actual deterrent,” David Carroll, an associate professor of media design at The New School’s Parsons School of Design, said in a tweet. Carroll sued Cambridge Analytica in Europe to obtain his 2016 voter profile data from the now-defunct company.

Forecast: Future privacy use

When people sign up to use the Kurbo healthy eating app, they can choose a fruit- or vegetable-themed avatar such as an artichoke, pea pod or pineapple. In exchange for health coaching and help tracking food intake and exercise, the app requires personal information about its users, such as age, gender, height, weight and their food and exercise choices, data the company also used to improve the app.

In its case against WW, the FTC said that until late 2019, Kurbo users could sign up for the service either by indicating that they were a parent signing up for their child or that they were over the age of 13 and registering for themselves. The agency said the company failed to ensure that the people signing up were actually parents or adult guardians rather than kids pretending to be adults. It also said that from 2014 to 2019, hundreds of users who signed up for the app originally claiming they were over age 13 later changed their profile birth dates to indicate they were actually under 13, but continued to have access to the app.
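The gap the FTC describes is easy to picture in code. Below is a minimal, hypothetical Python sketch of the kind of re-validation the agency said was missing; none of the names or logic come from Kurbo's actual systems, and COPPA's under-13 threshold is the one fact taken from the case.

```python
from dataclasses import dataclass
from datetime import date

COPPA_AGE = 13  # COPPA protections cover children under 13

@dataclass
class User:
    birth_date: date
    has_parental_consent: bool = False
    suspended: bool = False

def is_under_coppa_age(birth_date: date, today: date) -> bool:
    """True if the user is younger than 13 on the given date."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < COPPA_AGE

def on_birth_date_change(user: User, new_birth_date: date) -> None:
    """Hypothetical handler: re-run the age gate when a birth date is edited.
    Per the FTC complaint, users who changed their birth dates to under-13
    values kept full access; a check like this would instead suspend the
    account pending verifiable parental consent."""
    user.birth_date = new_birth_date
    if is_under_coppa_age(new_birth_date, date.today()) and not user.has_parental_consent:
        user.suspended = True
```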

The fact that the FTC used algorithmic disgorgement to enforce one of the country’s few existing federal privacy laws could be a sign that it will be used again, legal and policy experts said. While the Cambridge Analytica and Everalbum cases charged those companies with violating the FTC Act, the Kurbo case added an important wrinkle, alleging that WW violated both the FTC Act and the Children’s Online Privacy Protection Act. Both are important pieces of legislation under which the agency can bring consumer protection cases against businesses.

“This means that for any organization that has collected data illegally under COPPA, that data is at risk and the models built on top of it are at risk for disgorgement,” Hutson said.

The use of COPPA could set a foundational precedent, paving the way for the FTC to require destruction of algorithmic models under future legislation, such as a comprehensive federal privacy law. “It stands to reason it would be leveraged in any other arena where the FTC has enforcement authority under legislation,” Hutson said.

Application of algorithmic disgorgement in the COPPA context is “a clear jurisdiction and trigger of enforcement through a law that exists and explicitly protects kids’ data, [so] if there was a corollary law for everyone it would allow the FTC to enforce in this way for companies that are not just gathering kids’ data,” said Ben Winters, a counsel for the Electronic Privacy Information Center.

He added, “It shows it would be really great if we had a privacy law for everybody, in addition to kids.”
