Enterprise

How will this AI critic influence Biden’s policies? The clues are hiding in plain sight.

Two leaders from NYU's AI Now Institute recently joined the Biden administration and are laying the groundwork for what government AI policy could look like.

Meredith Whittaker

Meredith Whittaker's track record at AI Now provides some hints about her influence on the Biden administration's AI policy.

Photo: AI Now

The second of two leaders from NYU's AI Now Institute, a small but influential organization researching the social implications of artificial intelligence, has just joined the Biden administration, putting the pair in position to lay the groundwork for government AI policy. Their previous work suggests their presence might encourage the government to require new transparency from tech companies about how their algorithms work.

The Federal Trade Commission earlier this month created an entirely new role for AI Now co-founder Meredith Whittaker, who will serve as senior adviser on AI for an agency where tech staff has been in flux despite a mission to get tougher on tech. AI Now alumna Rashida Richardson — a law professor who served as director of policy research for the group and has a background studying the impact of AI systems like predictive policing tools — joined the White House Office of Science and Technology Policy in July as senior policy adviser for data and democracy.

"[Whittaker's] hiring is just the latest evidence of the FTC's attention on algorithms and algorithmic issues," said Laura Riposo VanDruff, former assistant director in the FTC's privacy and identity protection division and a partner at law firm Kelley Drye & Warren. "How she shapes policy and enforcement could be [FTC] Chairman [Lina] Khan's privacy legacy," she added.

Whittaker, who once led product and engineering teams at Google and founded the company's Open Research Group, made headlines in 2018 for helping guide worker walkouts and fighting use of Google's AI technology by the Pentagon. But it's her work at AI Now crafting practical AI policies intended to prevent encoded bias and discrimination against people that is likely to have the most relevance in her new FTC role.

Merve Hickok, senior research director of the Center for AI and Digital Policy, a group that evaluates national and international policy work on AI, said the inclusion of Whittaker and Richardson in the administration aligns with the mission of OSTP leaders to clarify the rights and freedoms people should be afforded in relation to data-driven technologies. "There has been a trend in both organizations toward fighting [AI] bias," she said.

AI Now and Whittaker declined to comment for this story, but an AI Now spokesperson said Whittaker will remain involved with the organization. Richardson and the OSTP did not respond to requests for comment.

A history of AI policy goals

As the administration attempts to keep regulatory pace with a rapidly advancing AI tech industry, it's a little too early to know whether Whittaker and Richardson's policy goals will sync with those of their respective agencies. However, with Whittaker filling an entirely new seat and Richardson working under a director at OSTP whose position has for the first time been elevated to the Cabinet level, their voices could carry.

For one thing, Whittaker's work could bolster the FTC's efforts to intertwine data privacy and antitrust considerations in cases against tech firms. Whittaker has argued that AI advancements have been largely facilitated by a few dominant tech giants that have the resources to suck up massive amounts of data and spin it into algorithmic systems because of ad-driven business models, a common refrain of FTC Chairwoman Lina Khan.

Whittaker told lawmakers at a U.S. House Committee on Science, Space and Technology hearing in 2019 that the massive amounts of data and vast computational resources fueling the AI boom "are assets that only a handful of major tech companies have, and very few others do."

Whittaker and Richardson's work at AI Now proposing policies for regulating algorithmic systems — from commercial voice and facial recognition tech to automated Medicaid benefit allocation tools — offers some big clues for what they might want to push at their respective agencies. In general, they have suggested specific steps they'd like to see implemented by the government that could force more transparency around AI, something many lawmakers demand from big tech firms like Facebook, Google, Amazon and Twitter, as well as smaller companies.

In testimony at separate 2019 congressional hearings addressing AI, Whittaker and Richardson called for tech firms to waive trade secrecy claims that block government entities and the public from accessing information about their systems. They also wanted lawmakers to require that companies disclose the names and vendors of the AI systems they use to make decisions that affect people.

At the center of the AI Now proposals highlighted by both Whittaker and Richardson is the algorithmic impact assessment, a framework for evaluating the effects of algorithmic and AI systems. It's a concept that has its foundation in more widely used environmental, human rights and privacy impact assessments.

In fact, AI Now has already said it wants the FTC to use them. The group spotlighted algorithmic impact assessments in comments on "Competition and Consumer Protection in the 21st Century" submitted to the FTC in 2018, noting the evaluation process would "provide essential information for FTC investigations into potential deception, unfair business practices, or other violations of consumer rights."

Squaring Whittaker's policy ambition with FTC authority

VanDruff said we may see Whittaker's influence manifest if she weighs in on new or updated FTC rules or in a case against a company. "How Whittaker can square her ambition with existing legal tools — it will be interesting for us to watch that," she said.

Even before Khan was named chair, the commission had begun taking a more aggressive stance on algorithmic technologies. In separate cases, it forced photo app Everalbum and Cambridge Analytica to destroy data garnered through allegedly deceptive means as well as the algorithms built from that data.

The agency in April reminded observers of its consumer-protection work in the AI arena, including guidance it has issued for businesses using automated decision systems for credit scoring or home-loan decisions. It pointed to three laws the FTC has authority to enforce in relation to AI: the Federal Trade Commission Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act.

"Section 5 can get to disparate impact, and if it can get to disparate impact, that can have substantial implications for AI," said Justin Brookman, director of Consumer Privacy and Technology Policy for Consumer Reports, who previously served as policy director of the FTC's Office of Technology Research and Investigation. "The FTC has a lot of power in this space."

Whatever exact role Whittaker plays in the FTC's AI efforts, the ripple effect could extend far beyond big tech, said VanDruff.

"Algorithms power the economy," she said. "There's no question about that, so the consequences of action by the FTC as it relates to algorithms could have significant reach."

Fintech

Judge Zia Faruqui is trying to teach you crypto, one ‘SNL’ reference at a time

His decisions on major cryptocurrency cases have quoted "The Big Lebowski," "SNL," and "Dr. Strangelove." That’s because he wants you — yes, you — to read them.

The ways Zia Faruqui (right) has weighed in on cases that have come before him can give lawyers clues as to what legal frameworks will pass muster.

Photo: Carolyn Van Houten/The Washington Post via Getty Images

“Cryptocurrency and related software analytics tools are ‘The wave of the future, Dude. One hundred percent electronic.’”

That’s not a quote from "The Big Lebowski" — at least, not directly. It’s a quote from a Washington, D.C., district court memorandum opinion on the role cryptocurrency analytics tools can play in government investigations. The author is Magistrate Judge Zia Faruqui.

Veronica Irwin

Veronica Irwin (@vronirwin) is a San Francisco-based reporter at Protocol covering fintech. Previously she was at the San Francisco Examiner, covering tech from a hyper-local angle. Before that, her byline was featured in SF Weekly, The Nation, Techworker, Ms. Magazine and The Frisc.

Enterprise

AWS CEO: The cloud isn’t just about technology

As AWS preps for its annual re:Invent conference, Adam Selipsky talks product strategy, support for hybrid environments, and the value of the cloud in uncertain economic times.

Photo: Noah Berger/Getty Images for Amazon Web Services

AWS is gearing up for re:Invent, its annual cloud computing conference where announcements this year are expected to focus on its end-to-end data strategy and delivering new industry-specific services.

It will be the second re:Invent with CEO Adam Selipsky as leader of the industry’s largest cloud provider after his return last year to AWS from data visualization company Tableau Software.

Donna Goodison

Donna Goodison (@dgoodison) is Protocol's senior reporter focusing on enterprise infrastructure technology, from the 'Big 3' cloud computing providers to data centers. She previously covered the public cloud at CRN after 15 years as a business reporter for the Boston Herald. Based in Massachusetts, she also has worked as a Boston Globe freelancer, business reporter at the Boston Business Journal and real estate reporter at Banker & Tradesman after toiling at weekly newspapers.

Image: Protocol

We launched Protocol in February 2020 to cover the evolving power center of tech. It is with deep sadness that just under three years later, we are winding down the publication.

As of today, we will not publish any more stories. All of our newsletters, apart from our flagship, Source Code, will no longer be sent. Source Code will be published and sent for the next few weeks, but it will also close down in December.

Bennett Richardson

Bennett Richardson (@bennettrich) is the president of Protocol. Prior to joining Protocol in 2019, Bennett was executive director of global strategic partnerships at POLITICO, where he led strategic growth efforts including POLITICO's European expansion in Brussels and POLITICO's creative agency POLITICO Focus during his six years with the company. Prior to POLITICO, Bennett was co-founder and CMO of Hinge, the mobile dating company recently acquired by Match Group. Bennett began his career in digital and social brand marketing working with major brands across tech, energy, and health care at leading marketing and communications agencies including Edelman and GMMB. Bennett is originally from Portland, Maine, and received his bachelor's degree from Colgate University.

Enterprise

Why large enterprises struggle to find suitable platforms for MLops

As companies expand their use of AI beyond running just a few machine learning models, and as larger enterprises go from deploying hundreds of models to thousands and even millions of models, ML practitioners say that they have yet to find what they need from prepackaged MLops systems.


Photo: artpartner-images via Getty Images

On any given day, Lily AI runs hundreds of machine learning models using computer vision and natural language processing that are customized for its retail and ecommerce clients to make website product recommendations, forecast demand, and plan merchandising. But this spring when the company was in the market for a machine learning operations platform to manage its expanding model roster, it wasn’t easy to find a suitable off-the-shelf system that could handle such a large number of models in deployment while also meeting other criteria.

Some MLops platforms are not well-suited to maintaining even 10 or more machine learning models when it comes to tracking data, navigating their user interfaces, or reporting, Matthew Nokleby, machine learning manager for Lily AI’s product intelligence team, told Protocol earlier this year. “The duct tape starts to show,” he said.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
