This Senate bill would force companies to audit AI used for housing and loans

The Algorithmic Accountability Act gives the FTC more tech staff and follows an approach to AI accountability already promoted by key advisers inside the agency.


The bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC.

Photo: Alex Wong/Getty Images

Legislation introduced last week would require companies to assess the impact of AI and automated systems they use to make decisions affecting people’s employment, finances, housing and more.

The Algorithmic Accountability Act of 2022, sponsored by Oregon Democratic Sen. Ron Wyden, would give the FTC more tech staff to oversee enforcement and let the agency publish information about the algorithmic tech that companies use. In fact, it follows an approach to AI accountability and transparency already promoted by key advisers inside the FTC.

Algorithms used by social media companies are often the ones in the regulatory spotlight. However, all sorts of businesses — from home loan providers and banks to job recruitment services — use algorithmic systems to make automated decisions. In an effort to enable more oversight and control of technologies that make discriminatory decisions or create safety risks or other harms, the bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC.

“This is legislation that is focused on creating a baseline of transparency and accountability around where companies are using algorithms to legal effect,” said Ben Winters, counsel at the Electronic Privacy Information Center, where he leads the advocacy group’s AI and Human Rights Project.

The bill, an expanded version of the same legislation originally introduced in 2019, gives state attorneys general the right to sue on behalf of state residents for relief. Lead co-sponsors of the bill are Sen. Cory Booker of New Jersey and Rep. Yvette Clarke of New York, and it is also co-sponsored by Democratic Sens. Tammy Baldwin of Wisconsin, Bob Casey of Pennsylvania, Brian Schatz and Mazie Hirono of Hawaii and Martin Heinrich and Ben Ray Luján of New Mexico.

The onus is on AI users, not vendors

Companies using algorithmic technology to make “critical decisions” that have significant effects on people’s lives relating to education, employment, financial planning, essential utilities, housing or legal services would be required to conduct impact assessments. The evaluations would entail ongoing testing and analysis of decision-making processes, and require companies to supply documentation about the data used to develop, test or maintain algorithmic systems.

The legislation calls on companies using this tech to conduct algorithmic impact assessments of those systems. In particular, companies using automated systems would be on the hook if they make more than $50 million in annual revenue or are worth more than $250 million in equity and have identifiable data on more than one million people, households or devices used in automated decisions.
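The size-and-data thresholds described above can be expressed as a simple coverage check. This is an illustrative sketch of the criteria as reported here, not the bill's statutory text; the function name and the reading that the revenue and equity tests are alternatives (while the data threshold is required) are assumptions.

```python
def would_be_covered(annual_revenue_usd: int,
                     equity_value_usd: int,
                     identified_records: int) -> bool:
    """Hypothetical check of whether a company deploying automated
    decision systems would fall under the bill's assessment mandate,
    per the thresholds described in the article."""
    # Either size test qualifies: >$50M annual revenue or >$250M equity.
    meets_size_test = (annual_revenue_usd > 50_000_000
                       or equity_value_usd > 250_000_000)
    # Plus identifiable data on more than one million people,
    # households or devices used in automated decisions.
    meets_data_test = identified_records > 1_000_000
    return meets_size_test and meets_data_test
```

Under this reading, a startup worth $300 million holding data on two million households would be covered even with modest revenue, while a large company using automated decisions on only a few thousand records would not.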

That would mean a large number of companies in the health care, recruitment and human resources, real estate and financial lending industries would be required to conduct assessments of AI they use.

Suppliers of algorithmic tools also would have to conduct assessments if they expect them to be used for a critical decision. However, Winters said it makes the most sense to focus on users rather than vendors.

“The bill focuses on the impact of the algorithmic systems, and the impact depends on the context in which it is used,” he said. Vendors selling algorithmic systems might only assess those tools according to perfect use cases rather than in relation to how they are used in more realistic circumstances, he said.

The new version of the bill is 50 pages long, with far more detail than its 15-page predecessor. One key distinction involves the language used to determine whether technologies are covered by the legislation. While the previous version would have required assessments of “high-risk” systems, the updated bill requires companies to evaluate the impact of algorithmic tech used in making “critical decisions.”

“Focusing on the decision’s effect is a lot more concrete and effective,” said Winters. Because it would have been difficult to determine what constitutes a high-risk system, he said the new approach is “less likely to be loopholed” or argued into futility by defense lawyers.

A statement about the bill from software industry group BSA The Software Alliance indicated the risk factor associated with algorithmic systems could be a sticking point. “We look forward to working with the sponsors and committees of jurisdiction to improve on the legislation to ensure that any new law clearly targets high risk systems and sensibly allocates responsibilities between organizations that develop and deploy them,” wrote Craig Albright, vice president of Legislative Strategy at the trade group.

FTC reports and public data on company AI use

Companies covered by the bill also would have to provide annual reports about those tech assessments. Then, based on those company reports, the FTC would produce annual reports showing trends, aggregated statistics, lessons from anonymized cases and updated guidance.

Plus, the FTC would publish a publicly available repository of information about the automated systems companies provide reports on. It is unclear, however, whether the contents of assessment reports provided to the FTC would be available via Freedom of Information Act requests.

The FTC would be in charge of making rules for algorithmic impact assessments if the bill passes, and the agency has already begun to gear up for the task. The commission has brought on members of a new AI advisory team who have worked for the AI Now Institute, a group that has been critical of AI’s negative impacts on minority communities. AI Now has also helped propel the use of algorithmic impact assessments as a framework for evaluating the effects of algorithmic systems, and its co-founder, Meredith Whittaker, is senior adviser on AI at the FTC.

“Given that the bill instructs the FTC to build out exactly what the impact assessments will require, it’s heartening that the current FTC AI team is led by individuals that are particularly well suited to draw the bounds of a meaningful algorithmic impact assessment,” said Winters.

The bill also provides resources to establish a Bureau of Technology to enforce the would-be law, including funding to hire 50 people and a chief technologist for the new bureau, plus 25 more staff positions at the agency’s Bureau of Consumer Protection.

Many federal bills aimed at policing algorithmic systems have focused on reforming Section 230 to address social media harms rather than addressing technologies used by government or other types of companies. For instance, the Algorithmic Justice and Online Platform Transparency Act of 2021 would require digital platforms to maintain records of how their algorithms work. The Protecting Americans from Dangerous Algorithms Act would hold tech companies liable if their algorithms amplify hateful or extremist content.

Though lawmakers have pushed for more oversight of algorithms, there’s no telling whether the Wyden bill will gain momentum in Congress. But if it passes, it would likely benefit a growing sector of AI auditing and monitoring services that could provide impact assessments.

There’s also action among states to create more algorithmic accountability. EPIC is backing a Washington State bill that would create guidelines for government use of automated decision systems. That bill currently is “getting a lot of government office and agency pushback,” said Winters, who testified in January in a committee hearing about the legislation.

This post was updated to include the lead co-sponsors of the bill, and corrected to clarify that suppliers of AI-based tech will be required to conduct algorithmic impact assessments.

