The Business Roundtable, whose membership includes more than 230 CEOs of some of the world’s largest companies, wants “rational” and “flexible” government regulations for AI. The group, which includes CEOs of tech, financial services and defense industry giants such as 3M, Amazon, Bank of America, Google, Mastercard, Northrop Grumman, Oracle and Verizon, published a set of guidelines Wednesday for businesses implementing AI, along with recommendations for policymakers.
As technologies employing artificial intelligence and machine learning stream into every nook and cranny of business operations — from algorithmic recruitment platforms to no-code AI tools used by marketing or supply chain teams — addressing the implications of technologies that make automated decisions and replace humans is no longer merely the purview of the data science or legal team, but of the C-suite.
The organization’s 10 core principles for responsible AI call on Business Roundtable members to operationalize AI governance throughout their businesses, design and deploy secure AI, collect and manage data responsibly, mitigate AI bias that produces unfair decisions, and evaluate and monitor AI models against those goals.
Some companies in the group have adopted their own AI principles in recent years. However, in general, there are few assurances that companies have put pledges to use AI fairly and safely into practice in standardized, measurable or accountable ways. In fact, some have been criticized for their internal approaches to addressing AI ethics. Google famously shuttered its AI ethics committee following criticism over its choices for committee members. The company also angered the AI ethics community after firing one of the industry’s most respected AI ethics researchers.
The Business Roundtable called on the U.S. government to establish practices and rules for AI, including by updating existing rules or laws. The recommendations state that the administration, Congress and regulators should craft “targeted and flexible governance and oversight” that accounts for the evolving nature of AI by employing “an agile and collaborative approach to AI governance” and an “adaptive approach to enforcement” that focuses “efforts on bad actors.”
The group also wants to limit new regulations. The policy recommendations state that government should “assess regulatory gaps before considering new regulations,” and when new rules are deemed necessary, “narrowly scope the new rules or guidance to address the gaps.”
Some businesses are already starting to incorporate approaches the organization suggests policymakers adopt. For example, the group called for incentivizing industry to conduct its own self-assessments of AI, and it suggests its members evaluate and monitor AI models for accuracy, unfair bias, data protection and unintended uses.
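To make that concrete, a minimal sketch of what one slice of such a self-assessment might look like appears below. This is illustrative only, not a methodology the Business Roundtable prescribes: the function name, the parity threshold and the toy data are all hypothetical, and it covers just two of the checks the group names (accuracy and a simple demographic-parity measure of unfair bias).

```python
# Illustrative sketch of one piece of an AI self-assessment: overall accuracy
# plus a demographic-parity gap across groups. Hypothetical names and
# thresholds; real audits cover far more (data protection, unintended uses).
import numpy as np

def self_assess(y_true, y_pred, group, parity_threshold=0.1):
    """Report a binary classifier's accuracy and the largest gap in
    positive-prediction rates between any two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    accuracy = float(np.mean(y_true == y_pred))
    # Positive-prediction rate per group (demographic parity).
    rates = {g: float(np.mean(y_pred[group == g])) for g in np.unique(group)}
    parity_gap = max(rates.values()) - min(rates.values())
    return {
        "accuracy": accuracy,
        "positive_rate_by_group": rates,
        "parity_gap": parity_gap,
        "flag_for_review": parity_gap > parity_threshold,  # assumed cutoff
    }

# Toy usage: eight predictions split across two demographic groups.
report = self_assess(
    y_true=[1, 0, 1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 1, 0, 0, 1, 1, 0],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(report)
```

Even in this stripped-down form, the value of such a check is that it produces a standardized, repeatable number a company can monitor over time — exactly the kind of measurable accountability critics say voluntary AI pledges have lacked.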
The introduction of the principles could bode well for a new crop of AI auditing providers.