
This Senate bill would force companies to audit AI used for housing and loans

The Algorithmic Accountability Act would give the FTC more tech staff, and it follows an approach to AI accountability already promoted by key advisers inside the agency.


Legislation introduced last week would require companies to assess the impact of AI and automated systems they use to make decisions affecting people’s employment, finances, housing and more.

The Algorithmic Accountability Act of 2022, sponsored by Oregon Democratic Sen. Ron Wyden, would give the FTC more tech staff to oversee enforcement and let the agency publish information about the algorithmic tech that companies use. In fact, it follows an approach to AI accountability and transparency already promoted by key advisers inside the FTC.

Algorithms used by social media companies are often the ones in the regulatory spotlight, but all sorts of businesses, from home loan providers and banks to job recruitment services, use algorithmic systems to make automated decisions. To enable more oversight of technologies that produce discriminatory decisions, safety risks or other harms, the bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC.

“This is legislation that is focused on creating a baseline of transparency and accountability around where companies are using algorithms to legal effect,” said Ben Winters, counsel at the Electronic Privacy Information Center (EPIC), where he leads the advocacy group’s AI and Human Rights Project.

The bill, an expanded version of legislation originally introduced in 2019, would give state attorneys general the right to sue on behalf of state residents for relief. Lead co-sponsors are Sen. Cory Booker of New Jersey and Rep. Yvette Clarke of New York; it is also co-sponsored by Democratic Sens. Tammy Baldwin of Wisconsin, Bob Casey of Pennsylvania, Brian Schatz and Mazie Hirono of Hawaii, and Martin Heinrich and Ben Ray Luján of New Mexico.

The onus is on AI users, not vendors

Companies using algorithmic technology to make “critical decisions” that have significant effects on people’s lives relating to education, employment, financial planning, essential utilities, housing or legal services would be required to conduct impact assessments. The evaluations would entail ongoing testing and analysis of decision-making processes, and require companies to supply documentation about the data used to develop, test or maintain algorithmic systems.

Not every company using this tech would be covered, though. Companies using automated systems would be on the hook only if they make more than $50 million in annual revenue or are worth more than $250 million in equity, and hold identifiable data on more than 1 million people, households or devices used in automated decisions.
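Read as a plain decision rule, those thresholds combine a financial test with a data test. Here is a minimal Python sketch of that reading; the function and parameter names are illustrative, not drawn from the bill's text, and the statutory definitions are more detailed.

```python
def is_covered_entity(annual_revenue_usd: float,
                      equity_value_usd: float,
                      identifiable_records: int) -> bool:
    """Sketch of the coverage test as described above (illustrative only).

    A company is covered if it crosses either financial threshold AND
    holds identifiable data on more than 1 million people, households
    or devices used in automated decisions.
    """
    meets_financial_bar = (annual_revenue_usd > 50_000_000
                           or equity_value_usd > 250_000_000)
    meets_data_bar = identifiable_records > 1_000_000
    return meets_financial_bar and meets_data_bar

# Example: a lender with $80M revenue and records on 2 million households
print(is_covered_entity(80_000_000, 0, 2_000_000))  # True
```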

That would mean a large number of companies in the health care, recruitment and human resources, real estate and financial lending industries would be required to conduct assessments of AI they use.

Suppliers of algorithmic tools would also have to conduct assessments if they expect those tools to be used for critical decisions. However, Winters said it makes the most sense to focus on users rather than vendors.

“The bill focuses on the impact of the algorithmic systems, and the impact depends on the context in which it is used,” he said. Vendors selling algorithmic systems might only assess those tools according to perfect use cases rather than in relation to how they are used in more realistic circumstances, he said.

The new version of the bill is 50 pages long, with far more detail than its 15-page predecessor. One key distinction involves the language used to determine whether technologies are covered by the legislation. While the previous version would have required assessments of “high-risk” systems, the updated bill requires companies to evaluate the impact of algorithmic tech used in making “critical decisions.”

“Focusing on the decision’s effect is a lot more concrete and effective,” said Winters. Because it would have been difficult to determine what constitutes a high-risk system, he said the new approach is “less likely to be loopholed” or argued into futility by defense lawyers.

A statement about the bill from software industry group BSA The Software Alliance suggested that how the legislation targets risky systems could be a sticking point. “We look forward to working with the sponsors and committees of jurisdiction to improve on the legislation to ensure that any new law clearly targets high risk systems and sensibly allocates responsibilities between organizations that develop and deploy them,” wrote Craig Albright, vice president of Legislative Strategy at the trade group.

FTC reports and public data on company AI use

Companies covered by the bill would also have to submit annual reports about their assessments to the FTC. Based on those reports, the agency would produce its own annual reports showing trends, aggregated statistics, lessons from anonymized cases and updated guidance.

Plus, the FTC would publish a publicly available repository of information about the automated systems companies provide reports on. It is unclear, however, whether the contents of assessment reports provided to the FTC would be available via Freedom of Information Act requests.

The FTC would be in charge of making rules for algorithmic impact assessments if the bill passes, and the agency has already begun to gear up for the task. The commission brought on members of a new AI advisory team who have worked for the AI Now Institute, a group that has been critical of AI’s negative impacts on minority communities and has helped propel the use of algorithmic impact assessments as a framework for evaluating the effects of algorithmic systems. The co-founder of AI Now, Meredith Whittaker, is senior adviser on AI at the FTC.

“Given that the bill instructs the FTC to build out exactly what the impact assessments will require, it’s heartening that the current FTC AI team is led by individuals that are particularly well suited to draw the bounds of a meaningful algorithmic impact assessment,” said Winters.

The bill also provides resources to establish a Bureau of Technology to enforce the would-be law, including funding for 50 hires and a chief technologist for the new bureau, in addition to 25 more staff positions at the agency’s Bureau of Consumer Protection.

Many federal bills aimed at policing algorithmic systems have focused on reforming Section 230 to address social media harms rather than addressing technologies used by government or other types of companies. For instance, the Algorithmic Justice and Online Platform Transparency Act of 2021 would require digital platforms to maintain records of how their algorithms work. The Protecting Americans from Dangerous Algorithms Act would hold tech companies liable if their algorithms amplify hateful or extremist content.

Though lawmakers have pushed for more oversight of algorithms, there’s no telling whether the Wyden bill will gain momentum in Congress. But if it passes, it would likely benefit a growing sector of AI auditing and monitoring services that could provide impact assessments.

There’s also action among states to create more algorithmic accountability. EPIC is backing a Washington State bill that would create guidelines for government use of automated decision systems. That bill currently is “getting a lot of government office and agency pushback,” said Winters, who testified in January in a committee hearing about the legislation.

This post was updated to include the lead co-sponsors of the bill, and corrected to clarify that suppliers of AI-based tech will be required to conduct algorithmic impact assessments.
