The FTC’s 'profoundly vague' plan to force companies to destroy algorithms could get very messy

Companies take algorithms out of production all the time. But wiping an AI model and the data that built it off the face of the earth could be a lot more challenging.

Illustration: CreepyCube/iStock/Getty Images Plus; Protocol

“The premise is simple,” FTC Commissioner Rebecca Slaughter and FTC lawyers wrote last year.

They were talking about a little-used enforcement tool called algorithmic disgorgement, a penalty the agency can wield against companies that used deceptive data practices to build algorithmic systems like AI and machine-learning models. The punishment: They have to destroy ill-gotten data and the models built with it. But while privacy advocates and critics of excessive data collection are praising the concept in theory, in practice it could be anything but simple to implement.

In fact, separating tainted data and algorithmic systems from the unaffected parts of a company’s technology products and intellectual property could be about as easy as teasing out a child’s tangled hair.

“Once you delete the algorithm, you delete the learning. But if it’s entangled with other data, it can get complex pretty fast,” said Rana el Kaliouby, a machine-learning scientist and deputy CEO of driver-monitoring AI firm Smart Eye.

The FTC’s March 3 settlement order against WW, the company formerly known as Weight Watchers, marked the most recent time the agency has demanded that a company destroy algorithmic systems. As part of the settlement, the company must also delete the deceptively gathered data, provide a written statement sworn under penalty of perjury confirming the deletion and keep records demonstrating compliance for 10 years.

But the order provides little detail about how the company must comply or how the FTC will know for sure it did.

The order is “profoundly vague,” said Pam Dixon, executive director of World Privacy Forum. “We’re not usually talking about a single algorithm. I would like to have seen more in their materials about what it is that is being disgorged specifically.”

For example, she said it is unclear whether WW used the data the FTC wants it to delete for marketing, for machine-learning models to predict or score kids’ health status or for other purposes.

How to kill an algorithm

Companies decommission algorithmic models all the time by taking them out of production. In some cases, an algorithm is just a simple piece of code: something that tells a software application how to perform a set of actions.
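
To make that concrete, here is a minimal, hypothetical sketch of how small such an “algorithm” can be; the function name and rules below are invented for illustration:

```python
# Hypothetical example: sometimes the entire "algorithm" is a rule like this,
# which makes deleting it as simple as deleting the function.
def recommend_portion_size(age: int, activity_level: str) -> str:
    if age < 13:
        return "small"
    if activity_level == "high":
        return "large"
    return "medium"
```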

If WW used the data it was ordered to delete to build just one machine-learning model used in one particular feature of its app, for example, deleting the code for that feature could be a relatively straightforward process, el Kaliouby said.

But algorithmic systems using AI and machine or deep learning can involve large models or families of models involving extremely complex logic expressed in code. Algorithmic systems used in social media platforms, for example, might incorporate several different intersecting models and data sets all working together.

In any case, the first step is taking the model out of operation, which ensures it will no longer process existing data or ingest new data.
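
A minimal sketch of what that first step can look like, assuming a hypothetical service that gates the model behind a flag; every name here is invented:

```python
# Hypothetical sketch: step one is routing traffic away from the model so it
# stops scoring requests and stops ingesting new data.
class TaintedModel:
    """Stand-in for the model covered by a disgorgement order."""
    def predict(self, features: dict) -> float:
        return 0.0  # placeholder for real inference

tainted_model = TaintedModel()
MODEL_ENABLED = False  # flipped off as the first step of decommissioning

def handle_request(features: dict) -> dict:
    if not MODEL_ENABLED:
        # Serve a static fallback; the model no longer sees any data.
        return {"score": None, "source": "model_decommissioned"}
    return {"score": tainted_model.predict(features), "source": "model"}

print(handle_request({"steps": 4200}))
```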

But it’s not so easy to decouple data from algorithmic systems, in part because the data used to train and feed them hardly ever sits in one place. Data obtained through deceptive means may end up in a data set that is then sliced into multiple “splits,” each used for a separate stage of the machine-learning development process: model training, testing and validation, said Anupam Datta, co-founder and chief scientist at TruEra, which provides a platform for explaining and monitoring AI models.
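
A sketch of that fan-out, using scikit-learn’s train_test_split on an invented data set: once the records are divided, deceptively obtained rows end up in every split.

```python
# Sketch: one tainted source fans out into train/validation/test splits.
from sklearn.model_selection import train_test_split

records = [{"id": i, "tainted": i % 10 == 0} for i in range(1000)]

train, rest = train_test_split(records, test_size=0.4, random_state=42)
validation, test = train_test_split(rest, test_size=0.5, random_state=42)

# Deleting the original data set alone leaves tainted rows in all three splits.
for name, split in [("train", train), ("validation", validation), ("test", test)]:
    print(name, sum(r["tainted"] for r in split))
```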

And once a model has been deployed, it might blend ill-gotten data with additional information from other sources, such as data ingested through APIs or real-time data streams.
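
A hypothetical sketch of that blending at inference time; the feature names and data sources are invented:

```python
# Hypothetical sketch: at inference time, features derived from the tainted
# data set are joined with fresh signals from APIs and streams.
TAINTED_EMBEDDINGS = {"u123": 0.87}  # derived from deceptively collected data

def build_features(user_id: str, api_profile: dict, stream_event: dict) -> dict:
    return {
        "historic_score": TAINTED_EMBEDDINGS.get(user_id, 0.0),  # ill-gotten
        "recent_activity": stream_event.get("clicks", 0),        # real-time stream
        "region": api_profile.get("region", "unknown"),          # third-party API
    }

print(build_features("u123", {"region": "US"}, {"clicks": 7}))
```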

The first step in killing an algorithm involves taking the model out of operation. Illustration: CreepyCube/iStock/Getty Images Plus; Protocol

Nowadays, data is often managed in the cloud. Cloud providers like AWS, Azure or Google Cloud offer standardized ways to delete data. A data scientist could use a tool from a cloud platform to mark which data needs to be deleted at varying levels of granularity, Datta said.
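
As a sketch of what that might look like on AWS with its boto3 Python SDK, assuming the tainted records live under a single S3 prefix (the bucket and prefix names are invented):

```python
# Sketch: delete every object under a prefix holding the tainted data.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-training-data"     # hypothetical bucket
PREFIX = "datasets/tainted-cohort/"  # hypothetical prefix holding the data

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        s3.delete_object(Bucket=BUCKET, Key=obj["Key"])
```

Note that deletion of this kind typically frees the space rather than destroying the bytes, which is where the recovery caveat below comes in.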

When the storage area holding that data is marked for removal, the space is freed up, allowing the system to write over the doomed data with new information. Until that overwrite happens, however, the data that was meant to be deleted could still be recovered, Datta said.

Cryptographic erasure could be used to delete data more permanently, he said. The process encrypts the data record with a key that is itself then destroyed, like locking the data in a box and throwing away the key.
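
A minimal sketch of the idea using the Python cryptography package’s Fernet recipe; the record content is invented:

```python
# Sketch of cryptographic erasure with the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"record obtained through deceptive means")

# Store only the ciphertext. Once every copy of `key` is destroyed, the
# plaintext is unrecoverable even if the ciphertext bytes are never erased.
del key
```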

The data replicant problem

In addition to data blending, data copying adds more layers of complexity to the removal process. Data often is replicated and distributed so it can be accessed or used by multiple people or for multiple purposes.

Krishnaram Kenthapadi, chief scientist at machine-learning model monitoring company Fiddler, called this problem — deleting algorithmic models built with ill-gotten information — one of data provenance. It requires an understanding of how data gleaned through deceptive means has moved or been processed within a complex data ecosystem from the time the data was originally collected.

“You want to track all the downstream applications that touched or may have used this data,” he said.
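
A sketch of that kind of provenance walk, modeling lineage as a simple graph; the artifact names are invented:

```python
# Sketch: represent lineage as a graph and walk it to find every artifact
# downstream of the tainted source.
from collections import deque

lineage = {
    "raw_signups": ["cleaned_signups"],
    "cleaned_signups": ["train_split", "marketing_segment"],
    "train_split": ["health_score_model_v3"],
}

def downstream(graph: dict, source: str) -> set:
    seen, queue = set(), deque([source])
    while queue:
        for child in graph.get(queue.popleft(), []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(downstream(lineage, "raw_signups"))
# All four derived artifacts, including the model itself, come back.
```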

Inspired in part by Europe’s General Data Protection Regulation, which gives people the right to demand that companies delete their personal data, today’s cloud platforms, data management software and tools for building and operationalizing AI and machine-learning models, sold by companies such as AWS, C3.ai, Dataiku, Databricks, Dremio, Google Cloud, Informatica and Matillion, provide data-lineage features. Those features help companies track where data came from, when it was copied for backup or other uses and where those copies moved over time.

Without those sorts of tools in place, though, it could be difficult for a company to know for sure whether every copy has actually been deleted. “You might still have some copies left over that are unaccounted for,” said Datta.

Many companies do not have processes set up to automatically attach lineage information to data they collect and use in building algorithmic systems, said Kevin Campbell, CEO of Syniti, a company that provides data technologies and services for things like data migration and data quality.

“If you don’t have a centralized way of capturing that information, you have to have a whole bunch of people chase it down,” said Campbell. “A whole lot of people are going to write a lot of queries.”
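
A hypothetical sketch of that chase: fingerprint the tainted records, then scan every store for matches. The hashing scheme and store layouts below are invented for illustration.

```python
# Hypothetical sketch: without lineage metadata, fingerprint the tainted
# records and scan each data store for copies.
import hashlib

def fingerprint(record: dict) -> str:
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

tainted = {fingerprint({"email": "kid@example.com", "age": 11})}

stores = {  # each stands in for a table or export someone must chase down
    "crm_export": [{"email": "kid@example.com", "age": 11}],
    "analytics_copy": [{"email": "adult@example.com", "age": 34}],
}
for name, rows in stores.items():
    hits = sum(fingerprint(r) in tainted for r in rows)
    print(name, hits, "suspect rows")
```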

As data use and AI become increasingly complex, monitoring for compliance could be difficult for regulators, said el Kaliouby. “It’s not impossible,” she said, but “it’s just hard to enforce some of these things, because you have to be a domain expert.”
