
This Senate bill would force companies to audit AI used for housing and loans

The Algorithmic Accountability Act would give the FTC more tech staff, and it follows an approach to AI accountability already promoted by key advisers inside the agency.

Sen. Ron Wyden. The bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC. Photo: Alex Wong/Getty Images

Legislation introduced last week would require companies to assess the impact of AI and automated systems they use to make decisions affecting people’s employment, finances, housing and more.

The Algorithmic Accountability Act of 2022, sponsored by Oregon Democratic Sen. Ron Wyden, would give the FTC more tech staff to oversee enforcement and let the agency publish information about the algorithmic tech that companies use. In fact, it follows an approach to AI accountability and transparency already promoted by key advisers inside the FTC.

Algorithms used by social media companies are often the ones in the regulatory spotlight. However, all sorts of businesses — from home loan providers and banks to job recruitment services — use algorithmic systems to make automated decisions. In an effort to enable more oversight and control of technologies that make discriminatory decisions or create safety risks or other harms, the bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC.

“This is legislation that is focused on creating a baseline of transparency and accountability around where companies are using algorithms to legal effect,” said Ben Winters, a counsel at the Electronic Privacy Information Center who leads the advocacy group’s AI and Human Rights Project.

The bill, an expanded version of legislation originally introduced in 2019, would give state attorneys general the right to sue on behalf of state residents for relief. Lead co-sponsors of the bill are Sen. Cory Booker of New Jersey and Rep. Yvette Clarke of New York, and it is also co-sponsored by Democratic Sens. Tammy Baldwin of Wisconsin, Bob Casey of Pennsylvania, Brian Schatz and Mazie Hirono of Hawaii, and Martin Heinrich and Ben Ray Luján of New Mexico.

The onus is on AI users, not vendors

Companies using algorithmic technology to make “critical decisions” that have significant effects on people’s lives relating to education, employment, financial planning, essential utilities, housing or legal services would be required to conduct impact assessments. The evaluations would entail ongoing testing and analysis of decision-making processes, and require companies to supply documentation about the data used to develop, test or maintain algorithmic systems.
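For illustration only, since the bill leaves the format of these assessments to FTC rulemaking, a deployer's assessment record might tie together the system, the documentation of its data and the results of ongoing testing. The following Python sketch is a hypothetical structure; every field name in it is ours, not the bill's.

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessmentRecord:
    """Hypothetical record of one algorithmic impact assessment.

    Illustrative only: the bill does not prescribe this format, and
    the exact requirements would be set by FTC rulemaking.
    """
    system_name: str                        # automated system being assessed
    critical_decision: str                  # e.g. "mortgage approval" or "tenant screening"
    training_data_sources: list[str]        # documentation of data used to develop the system
    test_data_sources: list[str]            # documentation of data used to test or maintain it
    negative_impacts_identified: list[str]  # harms surfaced by ongoing testing and analysis
    mitigations: list[str] = field(default_factory=list)  # steps taken to reduce those harms

# Example usage with made-up values:
record = ImpactAssessmentRecord(
    system_name="loan-underwriting-model-v3",
    critical_decision="consumer mortgage approval",
    training_data_sources=["2015-2021 loan applications"],
    test_data_sources=["2022 held-out applications"],
    negative_impacts_identified=["higher denial rate for one applicant group"],
    mitigations=["retrained on rebalanced data", "added human review of borderline denials"],
)
```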

Not every company using this tech would have to conduct an algorithmic impact assessment. Companies using automated systems would be on the hook if they make more than $50 million in annual revenue or are worth more than $250 million in equity and have identifiable data on more than one million people, households or devices used in automated decisions.
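To make that coverage test concrete, here is a minimal Python sketch of the thresholds as described above, reading the sentence as: the company clears the revenue or equity bar and also holds identifiable data on more than one million people, households or devices. The function name and inputs are ours; the bill's actual coverage tests are more detailed.

```python
def is_covered_entity(annual_revenue_usd: float,
                      equity_value_usd: float,
                      identified_records: int) -> bool:
    """Rough check of the coverage thresholds as described in this article.

    A company is treated as covered if it exceeds the revenue or equity
    threshold and holds identifiable data on more than one million people,
    households or devices used in automated decisions.
    """
    meets_size_test = (annual_revenue_usd > 50_000_000
                       or equity_value_usd > 250_000_000)
    meets_data_test = identified_records > 1_000_000
    return meets_size_test and meets_data_test

# Example: a lender with $80M in revenue and data on 2M applicants would be covered.
print(is_covered_entity(80_000_000, 200_000_000, 2_000_000))  # True
```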

That would mean a large number of companies in the health care, recruitment and human resources, real estate and financial lending industries would be required to conduct assessments of AI they use.

Suppliers of algorithmic tools also would have to conduct assessments if they expect them to be used for a critical decision. However, Winters said it makes the most sense to focus on users rather than vendors.

“The bill focuses on the impact of the algorithmic systems, and the impact depends on the context in which it is used,” he said. Vendors selling algorithmic systems might only assess those tools according to perfect use cases rather than in relation to how they are used in more realistic circumstances, he said.

The new version of the bill is 50 pages long, with far more detail than its 15-page predecessor. One key distinction involves the language used to determine whether technologies are covered by the legislation. While the previous version would have required assessments of “high-risk” systems, the updated bill requires companies to evaluate the impact of algorithmic tech used in making “critical decisions.”

“Focusing on the decision’s effect is a lot more concrete and effective,” said Winters. Because it would have been difficult to determine what constitutes a high-risk system, he said the new approach is “less likely to be loopholed” or argued into futility by defense lawyers.

A statement about the bill from software industry group BSA The Software Alliance indicated the risk factor associated with algorithmic systems could be a sticking point. “We look forward to working with the sponsors and committees of jurisdiction to improve on the legislation to ensure that any new law clearly targets high risk systems and sensibly allocates responsibilities between organizations that develop and deploy them,” wrote Craig Albright, vice president of Legislative Strategy at the trade group.

FTC reports and public data on company AI use

Companies covered by the bill also would have to provide annual reports about those tech assessments. Then, based on those company reports, the FTC would produce annual reports showing trends, aggregated statistics, lessons from anonymized cases and updated guidance.

Plus, the FTC would maintain a publicly available repository of information about the automated systems that companies report on. It is unclear, however, whether the contents of assessment reports provided to the FTC would be available via Freedom of Information Act requests.

The FTC would be in charge of making rules for algorithmic impact assessments if the bill passes, and the agency has already begun to gear up for the task. The commission brought on members of a new AI advisory team who have worked for the AI Now Institute, a group that has been critical of AI’s negative impacts on minority communities and has helped propel the use of algorithmic impact assessments as a framework for evaluating the effects of algorithmic systems. The co-founder of AI Now, Meredith Whittaker, is senior adviser on AI at the FTC.

“Given that the bill instructs the FTC to build out exactly what the impact assessments will require, it’s heartening that the current FTC AI team is led by individuals that are particularly well suited to draw the bounds of a meaningful algorithmic impact assessment,” said Winters.

The bill also provides resources to establish a Bureau of Technology to enforce the would-be law, including funding for 50 staff and a chief technologist for the new bureau, plus 25 more staff positions at the agency’s Bureau of Consumer Protection.

Many federal bills aimed at policing algorithmic systems have focused on reforming Section 230 to address social media harms rather than addressing technologies used by government or other types of companies. For instance, the Algorithmic Justice and Online Platform Transparency Act of 2021 would require digital platforms to maintain records of how their algorithms work. The Protecting Americans from Dangerous Algorithms Act would hold tech companies liable if their algorithms amplify hateful or extremist content.

Though lawmakers have pushed for more oversight of algorithms, there’s no telling whether the Wyden bill will gain momentum in Congress. But if it passes, it would likely benefit a growing sector of AI auditing and monitoring services that could provide impact assessments.

There’s also action among states to create more algorithmic accountability. EPIC is backing a Washington State bill that would create guidelines for government use of automated decision systems. That bill currently is “getting a lot of government office and agency pushback,” said Winters, who testified in January in a committee hearing about the legislation.

This post was updated to include the lead co-sponsors of the bill, and corrected to clarify that suppliers of AI-based tech will be required to conduct algorithmic impact assessments.
