A major tech group representing companies such as AWS, Microsoft, IBM and Oracle is calling for Congress to enact legislation that would require companies to assess and minimize bias in "high-risk" uses of artificial intelligence.
The group, known as the BSA or The Software Alliance, made its plea on Tuesday alongside a new report on identifying and lessening bias in AI. The report urges organizations to look for potential bias in project conception, historical data, data processing and deployment, and to assess human oversight and the structure of compliance systems. The BSA is trying to stay ahead of tech regulators who have begun deciding how to rein in the algorithms on which many of the companies are basing their futures.
The BSA cites health care, finance and transportation as areas "where the severity and/or likelihood of potential harm is high," but its definition of risk for a particular AI system revolves around potential impact, purpose, level of human oversight and training data in a way that lawmakers across the world have signaled they might find too vague.
Regulators are already starting to target AI. In April, the European Commission proposed impact assessments, human oversight and quality verification for high-risk systems used in employment, immigration, education, law enforcement and facial recognition, among others. Earlier this month, King County, Washington, which includes Seattle and Microsoft's headquarters, became the latest local government to ban almost all government uses of facial recognition, following in the footsteps of several other cities such as Boston, San Francisco and Portland, Oregon. Congress, too, is considering action.
The potential regulatory clampdown, and the memory of how other rules relating to technology have spread, were important reasons for the BSA to urge lawmakers to act and to seek a seat at the bargaining table, the trade group said.
"Certainly, those lessons loomed largely in our world," said Christian Troncoso, a senior director for policy at BSA, who helped spearhead what he described as a nearly yearlong project to develop the accompanying report. "I think there's just an acknowledgement within our industry and among our member companies that good regulation can only really arise if there is a good faith effort by us to contribute to the process."
The BSA, formerly known as the Business Software Alliance, is already working to pass a federal privacy law that would overrule state statutes, including the strict California Consumer Privacy Act, in the hopes of easing member companies' compliance obligations, particularly as other states pass their own laws. Proponents of state privacy laws have said they're often stronger than what Congress can produce, but the industry dreads the idea of another state-by-state or country-by-country regulatory "patchwork," such as the pile of laws that exist on data breach notifications.
"Our hope is that if we can work with everyone, we're going to arrive at a place where there's more of a harmonized approach on these issues," Troncoso said. "We want to just ensure that all of these processes are informed by our member companies' experience developing the technologies."
'Drumbeat of stories'
Companies spend plenty of their lobbying time trying to block new laws and regulations outright, but when lawmakers and agencies become determined to act, firms and their trade groups often sit down to negotiate narrower or more workable rules. Although tech companies lobbied on CCPA, for instance, the sector has long felt that, thanks to a groundswell of public support, it passed too quickly for the industry to get much of what it wanted. The efforts to participate can themselves become good PR, or a way to bend laws and regulations against particular competitors.
While the BSA broadly endorses the idea of legislation, the group doesn't mention a specific bill in Congress. In the last Congress, Sens. Ron Wyden and Cory Booker put forth a bill resembling the BSA's current ask; it would have required companies to perform assessments of AI systems, examining their fairness, accuracy and impact on privacy and security.
At the moment, legislators, regulators, civil rights groups and even other companies worry about an array of examples, from photo cropping to false arrest, where AI systems reproduced or exacerbated discrimination against people on the basis of race, origin, religion, sex, disability and other factors.
"The report itself I think was prompted by the general anxiety and steady drumbeat of stories that have emerged in recent years that have understandably shaken public confidence," Troncoso said.
For instance, in its report, the BSA cited research that examined whether Mac or PC ownership can predict the ability to repay a loan. Device brand ownership appears correlated with race, however, meaning AI systems that include this factor would introduce "a significant risk of" bias. The concerns are particularly acute when AI systems make decisions about law enforcement, financial products, housing, education, health care and other domains that are crucial to full, equitable participation in society and that have long suffered from deep and often deadly discrimination.
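To see how such proxy bias can arise, consider the following minimal sketch. Everything in it is hypothetical: the group labels, the strength of the brand-to-group correlation and the approval rule are invented for illustration and are not drawn from the BSA report or the cited research.

```python
# Hypothetical sketch of "proxy bias": the numbers and the decision rule
# below are invented for illustration only.
import random

random.seed(0)

def synth_applicant():
    # Two demographic groups, A and B (stand-ins for a protected attribute).
    group = random.choice(["A", "B"])
    # Assume device brand correlates with group membership:
    # group A owns a "mac" 70% of the time, group B only 30%.
    device = "mac" if random.random() < (0.7 if group == "A" else 0.3) else "pc"
    # Assume true repayment ability is identical across both groups.
    repays = random.random() < 0.8
    return group, device, repays

def approve(device):
    # A naive rule that favors "mac" owners, even though repayment
    # ability is the same in both groups.
    return device == "mac"

applicants = [synth_applicant() for _ in range(100_000)]
for g in ("A", "B"):
    subset = [a for a in applicants if a[0] == g]
    rate = sum(approve(a[1]) for a in subset) / len(subset)
    print(f"group {g}: approval rate {rate:.0%}")
# Prints roughly 70% approval for group A vs. 30% for group B.
```

Because the brand feature tracks group membership rather than repayment ability, the rule produces sharply different approval rates for two groups with identical creditworthiness, which is the kind of disparity the BSA framework asks organizations to check for.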
Troncoso said that the BSA's framework can go a long way toward mitigating these harms for organizations looking to begin assessments, and that it can provide a basis for the legislation the group is seeking.
The group, though, hasn't developed specific legislative text or a preferred method of enforcement, Troncoso said, and won't be able to "foist" any quick solutions on Congress. He added that the work could take years, and that the BSA will be seated beside lawmakers and groups that would like to see a strict approach to the industry.
"We don't profess to have all of the answers for what a legislative framework should look like," he said. "The only way that legislation is going to work is if organizations like ours partner with the range of stakeholders that have equities in this debate."