The second of two leaders from NYU's AI Now Institute, a small but influential organization researching the social implications of artificial intelligence, just joined the Biden administration to lay the groundwork for government AI policy. Their previous work suggests their presence might encourage the government to require new transparency from tech companies about how their algorithms work.
The Federal Trade Commission earlier this month created an entirely new role for AI Now co-founder Meredith Whittaker, who will serve as senior adviser on AI for an agency where tech staff has been in flux despite a mission to get tougher on tech. AI Now alumna Rashida Richardson — a law professor who served as director of policy research for the group and has a background studying the impact of AI systems like predictive policing tools — joined the White House Office of Science and Technology Policy in July as senior policy adviser for data and democracy.
"[Whittaker's] hiring is just the latest evidence of the FTC's attention on algorithms and algorithmic issues," said Laura Riposo VanDruff, former assistant director in the FTC's privacy and identity protection division and a partner at law firm Kelley Drye & Warren. "How she shapes policy and enforcement could be [FTC] Chairman [Lina] Khan's privacy legacy," she added.
Whittaker, who once led product and engineering teams at Google and founded the company's Open Research Group, made headlines in 2018 for helping organize worker walkouts and fighting the Pentagon's use of Google's AI technology. But it's her work at AI Now crafting practical AI policies intended to prevent encoded bias and discrimination against people that is likely to have the most relevance in her new FTC role.
Merve Hickok, senior research director of the Center for AI and Digital Policy, a group that evaluates national and international policy work on AI, said the inclusion of Whittaker and Richardson in the administration aligns with the mission of OSTP leaders to clarify the rights and freedoms people should be afforded in relation to data-driven technologies. "There has been a trend in both organizations toward fighting [AI] bias," she said.
AI Now and Whittaker declined to comment for this story, but an AI Now spokesperson said Whittaker will remain involved with the organization. Richardson and the OSTP did not respond to requests for comment.
A history of AI policy goals
As the administration attempts to keep regulatory pace with a rapidly advancing AI tech industry, it's a little too early to know whether Whittaker and Richardson's policy goals will sync with those of their respective agencies. However, with Whittaker filling an entirely new seat and Richardson working under a director at OSTP whose position has for the first time been elevated to the Cabinet level, their voices could carry.
For one thing, Whittaker's work could bolster the FTC's efforts to intertwine data privacy and antitrust considerations in cases against tech firms. Whittaker has argued that AI advancements have been largely facilitated by a few dominant tech giants whose ad-driven business models give them the resources to suck up massive amounts of data and spin it into algorithmic systems, a common refrain of FTC Chairwoman Lina Khan.
Whittaker told lawmakers at a U.S. House Committee on Science, Space and Technology hearing in 2019 that the massive amounts of data and vast computational resources fueling the AI boom "are assets that only a handful of major tech companies have, and very few others do."
Whittaker and Richardson's work at AI Now proposing policies for regulating algorithmic systems — from commercial voice and facial recognition tech to automated Medicaid benefit allocation tools — offers some big clues for what they might want to push at their respective agencies. In general, they have suggested specific steps they'd like to see implemented by the government that could force more transparency around AI, something many lawmakers demand from big tech firms like Facebook, Google, Amazon and Twitter, as well as smaller companies.
In testimony at separate 2019 congressional hearings addressing AI, Whittaker and Richardson called for tech firms to waive trade secrecy claims that block government entities and the public from accessing information about their systems. They also wanted lawmakers to require that companies disclose the names and vendors of the AI systems they use to make decisions that affect people.
At the center of the AI Now proposals highlighted by both Whittaker and Richardson is the algorithmic impact assessment, a framework for evaluating the effects of algorithmic and AI systems. It's a concept that has its foundation in more widely used environmental, human rights and privacy impact assessments.
In fact, AI Now has already said it wants the FTC to use them. The group spotlighted algorithmic impact assessments in comments on "Competition and Consumer Protection in the 21st Century" submitted to the FTC in 2018, noting the evaluation process would "provide essential information for FTC investigations into potential deception, unfair business practices, or other violations of consumer rights."
Squaring Whittaker's policy ambition with FTC authority
VanDruff said we may see Whittaker's influence manifest if she weighs in on new or updated FTC rules or in a case against a company. "How Whittaker can square her ambition with existing legal tools — it will be interesting for us to watch that," she said.
Even before Khan was named chair, the commission had begun taking a more aggressive stance on algorithmic technologies. In separate cases, it forced photo app Everalbum and Cambridge Analytica to destroy data garnered through allegedly deceptive means as well as the algorithms built from that data.
The agency in April reminded observers of its consumer-protection work in the AI arena, including guidance issued for businesses using automated decision systems in areas such as credit scoring and home lending. It pointed to three laws the FTC has authority to enforce in relation to AI: the Federal Trade Commission Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act.
"Section 5 can get to disparate impact, and if it can get to disparate impact, that can have substantial implications for AI," said Justin Brookman, director of Consumer Privacy and Technology Policy for Consumer Reports, who previously served as policy director of the FTC's Office of Technology Research and Investigation. "The FTC has a lot of power in this space."
Whatever exact role Whittaker plays in the FTC's AI efforts, the ripple effect could extend far beyond big tech, said VanDruff.
"Algorithms power the economy," she said. "There's no question about that, so the consequences of action by the FTC as it relates to algorithms could have significant reach."