Legislation introduced last week would require companies to assess the impact of AI and automated systems they use to make decisions affecting people’s employment, finances, housing and more.
The Algorithmic Accountability Act of 2022, sponsored by Oregon Democratic Sen. Ron Wyden, would give the FTC more tech staff to oversee enforcement and let the agency publish information about the algorithmic tech that companies use. The bill follows an approach to AI accountability and transparency already promoted by key advisers inside the FTC.
Algorithms used by social media companies are often the ones in the regulatory spotlight. However, all sorts of businesses — from home loan providers and banks to job recruitment services — use algorithmic systems to make automated decisions. In an effort to enable more oversight and control of technologies that make discriminatory decisions or create safety risks or other harms, the bill would require companies deploying automated systems to assess them, mitigate negative impacts and submit annual reports about those assessments to the FTC.
“This is legislation that is focused on creating a baseline of transparency and accountability around where companies are using algorithms to legal effect,” said Ben Winters, a counsel at the Electronic Privacy Information Center who leads the advocacy group’s AI and Human Rights Project.
The bill, an expanded version of legislation originally introduced in 2019, gives state attorneys general the right to sue on behalf of state residents for relief. Lead co-sponsors of the bill are Sen. Cory Booker of New Jersey and Rep. Yvette Clarke of New York, and it is also co-sponsored by Democratic Sens. Tammy Baldwin of Wisconsin, Bob Casey of Pennsylvania, Brian Schatz and Mazie Hirono of Hawaii, and Martin Heinrich and Ben Ray Luján of New Mexico.
The onus is on AI users, not vendors
Companies using algorithmic technology to make “critical decisions” that have significant effects on people’s lives relating to education, employment, financial planning, essential utilities, housing or legal services would be required to conduct impact assessments. The evaluations would entail ongoing testing and analysis of decision-making processes, and require companies to supply documentation about the data used to develop, test or maintain algorithmic systems.
Not every business would be on the hook, though. The assessment requirement would apply to companies that make more than $50 million in annual revenue or are worth more than $250 million in equity and that have identifiable data on more than one million people, households or devices used in automated decisions.
That would mean a large number of companies in the health care, recruitment and human resources, real estate and financial lending industries would be required to conduct assessments of the AI they use.
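To make those thresholds concrete, here is a minimal sketch, in Python, of one plausible reading of the coverage test as described above: a company crosses either financial bar and also holds identifiable data on more than one million people, households or devices. The class and field names are hypothetical, and the grouping of the "or" and "and" conditions is an assumption; the bill's actual statutory definitions are more detailed.

```python
# A minimal sketch of one reading of the coverage thresholds described
# above. The names and the exact and/or grouping are illustrative
# assumptions, not the bill's statutory language.
from dataclasses import dataclass

@dataclass
class Company:
    annual_revenue_usd: int   # annual revenue
    equity_value_usd: int     # equity value
    identified_subjects: int  # people, households or devices with
                              # identifiable data used in automated decisions

def likely_covered(company: Company) -> bool:
    """True if the company appears to meet the thresholds described above."""
    meets_financial_bar = (
        company.annual_revenue_usd > 50_000_000
        or company.equity_value_usd > 250_000_000
    )
    meets_data_bar = company.identified_subjects > 1_000_000
    return meets_financial_bar and meets_data_bar

# Example: a mid-size lender with identifiable data on two million loan
# applicants would appear to fall under the assessment requirement.
print(likely_covered(Company(80_000_000, 100_000_000, 2_000_000)))  # True
```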
Suppliers of algorithmic tools also would have to conduct assessments if they expect those tools to be used to make critical decisions. However, Winters said it makes the most sense to focus on users rather than vendors.
“The bill focuses on the impact of the algorithmic systems, and the impact depends on the context in which it is used,” he said. Vendors selling algorithmic systems might assess those tools only against ideal use cases rather than against the more realistic circumstances in which they are actually used, he said.
The new version of the bill is 50 pages long, with far more detail than its 15-page predecessor. One key distinction involves the language used to determine whether technologies are covered by the legislation. While the previous version would have required assessments of “high-risk” systems, the updated bill requires companies to evaluate the impact of algorithmic tech used in making “critical decisions.”
“Focusing on the decision’s effect is a lot more concrete and effective,” said Winters. Because it would have been difficult to determine what constitutes a high-risk system, he said the new approach is “less likely to be loopholed” or argued into futility by defense lawyers.
A statement about the bill from software industry group BSA The Software Alliance indicated the risk factor associated with algorithmic systems could be a sticking point. “We look forward to working with the sponsors and committees of jurisdiction to improve on the legislation to ensure that any new law clearly targets high risk systems and sensibly allocates responsibilities between organizations that develop and deploy them,” wrote Craig Albright, vice president of Legislative Strategy at the trade group.
FTC reports and public data on company AI use
Companies covered by the bill also would have to provide annual reports about those tech assessments. Based on those company reports, the FTC would produce its own annual reports showing trends, aggregated statistics, lessons from anonymized cases and updated guidance.
Plus, the FTC would publish a publicly available repository of information about the automated systems that companies report on. It is unclear, however, whether the contents of assessment reports submitted to the FTC would be available via Freedom of Information Act requests.
The FTC would be in charge of making rules for algorithmic impact assessments if the bill passes, and the agency has already begun to gear up for the task. The commission brought on members of a new AI advisory team who have worked for the AI Now Institute, a group that has been critical of AI’s negative impacts on minority communities and has helped propel the use of algorithmic impact assessments as a framework for evaluating the effects of algorithmic systems. The co-founder of AI Now, Meredith Whittaker, is senior adviser on AI at the FTC.
“Given that the bill instructs the FTC to build out exactly what the impact assessments will require, it’s heartening that the current FTC AI team is led by individuals that are particularly well suited to draw the bounds of a meaningful algorithmic impact assessment,” said Winters.
The bill also would establish a Bureau of Technology to enforce the would-be law, providing resources to hire 50 people and a chief technologist for the new bureau, in addition to 25 more staff positions at the agency’s Bureau of Consumer Protection.
Many federal bills aimed at policing algorithmic systems have focused on reforming Section 230 to address social media harms rather than addressing technologies used by government or other types of companies. For instance, the Algorithmic Justice and Online Platform Transparency Act of 2021 would require digital platforms to maintain records of how their algorithms work. The Protecting Americans from Dangerous Algorithms Act would hold tech companies liable if their algorithms amplify hateful or extremist content.
Though lawmakers have pushed for more oversight of algorithms, there’s no telling whether the Wyden bill will gain momentum in Congress. But if it passes, it would likely benefit a growing sector of AI auditing and monitoring services that could provide impact assessments.
There’s also action at the state level to create more algorithmic accountability. EPIC is backing a Washington State bill that would create guidelines for government use of automated decision systems. That bill is currently “getting a lot of government office and agency pushback,” said Winters, who testified in January in a committee hearing about the legislation.
This post was updated to include the lead co-sponsors of the bill, and corrected to clarify that suppliers of AI-based tech would be required to conduct algorithmic impact assessments.