Being responsible hasn’t always been a top business priority. Now, some companies that audit AI to assess its potential to create unintended harms hope to sell business customers that build or use AI on the profit potential of doing the right thing.
And they may have new customers soon as new AI-focused regulations pile up. New York City just passed a law requiring companies providing AI that helps make hiring decisions to obtain “bias audits” of their technology. Washington, D.C. could pass a bill that would force entities using algorithmic tools to determine employment, housing or loan eligibility to cough up algorithms and the data used to train them for auditing.
At the national level, the European Union is mulling legislation addressing the risks of artificial intelligence, and the Federal Trade Commission and U.S. lawmakers want to hold companies accountable for harms and discrimination created by AI systems.
Amid the steady stream of damning headlines exposing AI-gone-wrong, it’s no surprise a cottage industry of AI auditing-related services and technologies — including companies like Fiddler and Parity — is getting attention from businesses that are incorporating AI into their day-to-day operations.
However, it’s not just the threat of regulatory enforcement, reputational damage or the desire to use AI for good that could compel more businesses to seek out these tools. Business goals factor in too.
Often referred to as “responsible AI,” AI ethics-related concepts such as accountability, transparency, explainability and fairness are “becoming more and more a business consideration,” said Mona Sloane, senior research scientist at New York University’s Center for Responsible AI.
“It’s a sign of the times when AI is an enterprise citizen,” said Svetlana Sicular, a VP analyst at Gartner focused on data and AI. Interest in services evaluating the impacts of AI systems is building as AI becomes an integral component of the enterprise, she said. Last year, Gartner predicted that by 2023, all people hired for AI development and training work would be required to demonstrate expertise in responsible development of AI.
The momentum behind services that evaluate whether or not AI systems are making equitable decisions and satisfying other ethical goals could be strongest among the companies investing in and using AI, said Dunstan Allison-Hope, vice president at BSR, a consulting firm that helps clients assess AI systems according to human rights-related measures. To date, most of the human rights impact assessments the consultancy has conducted have been for clients that build and sell AI tools, but that will change, said Allison-Hope.
“The biggest growth I expect is AI assessment by companies using AI, as distinct from companies building AI,” he said. “A key decision-maker determining impact on human rights will not just be the tech company, but the retailer, bank, or agency deploying the [AI and machine learning].”
That push is already happening. Companies including American Express, General Motors, Nike and Walmart said they’d adopt scoring criteria developed by a new industry-led group called the Data & Trust Alliance to help reduce bias in algorithmic tools used to make hiring and workforce decisions.
Audits that pop the hood on 'volatile' AI
Hired’s algorithmic system helps employers find job candidates from diverse communities, and notifies job seekers if they are requesting salaries that are below the average for a position. Hired CTO Dave Walters said that before the NYC law passed, the company already intended to audit its systems for fairness and transparency in 2022.
The company has yet to choose an audit service for that, but Walters said he expects any effective audit to require his company to provide its algorithmic models and training data with proper security protections in place. “That third-party service is going to need to be able to see deep enough under the hood to understand what’s going on,” said Walters.
Some AI audit startups do want to go beyond using scoring criteria by poking around inside algorithmic tech. Like other companies in the nascent sector, Fiddler wants its software to become part of its customers’ machine-learning operations process, helping companies prevent unwanted discriminatory effects, explain how the AI they build makes decisions and avoid the risk of “model drift” — a gradual shift that can pull machine-learning systems away from their originally intended behavior as new data flows into them over time.
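Fiddler has not published the internals of its monitoring, but the basic idea behind drift detection can be sketched in a few lines: compare the distribution of an input feature (or of model outputs) observed in production against the distribution the model was trained on, and raise a flag when the two diverge. The snippet below is a minimal, hypothetical illustration using a two-sample Kolmogorov-Smirnov test; the data and the alert threshold are invented for the example and are not drawn from any vendor’s product.

```python
import numpy as np
from scipy.stats import ks_2samp


def drift_alert(training_values, live_values, threshold=0.1):
    """Flag drift when a feature's live distribution has shifted away
    from the distribution the model was trained on.

    Uses a two-sample Kolmogorov-Smirnov test; the 0.1 cutoff is an
    arbitrary illustrative threshold, not a vendor default.
    """
    statistic, p_value = ks_2samp(training_values, live_values)
    return {"ks_statistic": statistic, "p_value": p_value,
            "drifted": statistic > threshold}


# Hypothetical data: a feature's values at training time vs. the values
# arriving in production after real-world conditions have changed.
rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_values = rng.normal(loc=0.4, scale=1.2, size=5_000)

print(drift_alert(training_values, live_values))
```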
“AI is a volatile thing,” said Krishna Gade, founder and CEO of Fiddler. He alluded to the highly publicized flaws in Zillow’s home-buying algorithms, which led the company to purchase homes for more than they were worth. Not only did the company lose money and lay off staff as a result, but some argue the flawed AI may have exacerbated gentrification by fueling inflated home prices.
But in a situation like Zillow’s, the potential societal harms of its home-buying algorithm may have seemed less obvious than the immediate business harms. And in general, tech decision-makers might respond more readily to an appeal to traditional business considerations when it comes to AI auditing services, said Gade.
“Most CEOs don’t really know what is the immediate ROI for implementing ethical AI and responsible AI,” he said. So, he said he starts conversations by trying to convince prospective clients that monitoring their AI could create positive ROI. “Then we say, by the way, there’s also all these other benefits — you can reduce these reputation risks.”
Hired’s employer clients have recruitment diversity goals, so refining or auditing its algorithms to help customers diversify staff makes good business sense, said Walters. For companies that aren’t in the recruitment, housing or credit-lending industries, the business justifications for getting AI audits could be more subtle or manifest longer term, he said.
“I think that companies like Fiddler are going to have to do a good job of understanding the company they’re talking to and the industry they’re in and how to connect those dots for them,” he said.
Algorithmic auditing platform Parity also emphasizes the profit potential from evaluating AI systems for fairness. The company’s website states that when companies use its model risk-assessment auditing process, “customers and bottom line both benefit.”
Making systems more fair means making them work better, said Liz O’Sullivan, Parity’s CEO. “People sometimes forget that so-called ‘responsible AI’ is AI that simply works the way you intend it to. More accurate predictions about under-served populations can open new revenue streams by correcting misconceptions about the behavior of marginalized groups.”
“Even setting aside the legal and reputational risks of inadequate oversight, our clients are motivated to find techniques that produce models which thoughtfully balance risk mitigation with enhanced accuracy,” she said.
Parity also digs into its customers’ tech rather than merely asking a set of questions about it, as some consultancies or other audit providers do. Parity asks clients to provide full access to their models, the code and the data feeding them, then it adjusts features of the tech, running various permutations of the models to find the ones with the most equitable outcomes. The end result is an algorithmic impact assessment, a report that could be provided to regulators or a third-party auditor.
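Parity has not published the details of that process, but its general shape can be illustrated with a small, hypothetical sketch: train several variants of a model, score each on both accuracy and a simple fairness measure (here, the gap in positive-prediction rates between two groups, a rough demographic-parity check), and surface the variant with the most equitable outcomes. The data, feature subsets and metric below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def selection_rate_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups
    (a simple demographic-parity gap; 0 means equal selection rates)."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())


# Hypothetical data: five features, a binary protected attribute, a binary label.
rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 5))
group = rng.integers(0, 2, size=2_000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=2_000) > 0).astype(int)

X_train, X_test, g_train, g_test, y_train, y_test = train_test_split(
    X, group, y, random_state=0)

# Candidate "permutations" of the model: here, simply different feature subsets.
candidates = {"all_features": [0, 1, 2, 3, 4], "drop_proxy_feature": [1, 2, 3, 4]}

results = {}
for name, cols in candidates.items():
    model = LogisticRegression().fit(X_train[:, cols], y_train)
    preds = model.predict(X_test[:, cols])
    results[name] = {
        "accuracy": (preds == y_test).mean(),
        "parity_gap": selection_rate_gap(preds, g_test),
    }

print(results)  # favor the variant with the smallest gap at acceptable accuracy
```

The specific fairness metric here is only one of many possible choices; the point is that the comparison runs against the models and data themselves rather than against a questionnaire.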
Venture capitalists are betting on the success of companies providing AI evaluation and monitoring tech. Fiddler grabbed $32 million in series B funding in June 2021. Truera, which provides a platform for explaining and monitoring AI models according to quality and reliability criteria, received $12 million in series A funding a year ago. Arthur, a company co-founded in 2018 by O’Sullivan, collected $15 million in series A funding in December 2020.
O’Sullivan said Parity’s technology can be used in conjunction with Arthur’s, which tracks performance of AI models including computer vision and natural-language processing models. Indeed, for companies to use services that monitor AI once it’s deployed, they’ll want it to connect with other technologies they already use. Fiddler, for example, can be integrated with tools such as Amazon’s SageMaker and the open-source TensorFlow software library, said Gade.
AI tech providers and consultancies get on board
The momentum behind pure-play AI auditing tech is driving existing AI tech vendors to build new responsible AI products or tack on new features to existing ones. AWS offers a tool called SageMaker Clarify as part of its larger AI-building SageMaker product suite. The system looks for potential bias during data preparation and after models are trained and deployed. IBM provides algorithms intended to mitigate distrust and inequity problems that can arise from AI, and offers AI Fairness 360, an open-source toolkit for detecting bias in models.
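AI Fairness 360 is open source, so its use can be shown concretely. The snippet below is a minimal example of one of the toolkit’s basic checks, computing group-level disparity metrics on a tiny, made-up hiring dataset; a real audit would use far larger data and could also apply the toolkit’s bias-mitigation algorithms.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny, made-up hiring dataset: 'hired' is the outcome,
# 'group' is a binary protected attribute (0 = unprivileged, 1 = privileged).
df = pd.DataFrame({
    "group":      [0, 0, 0, 0, 1, 1, 1, 1],
    "experience": [2, 5, 3, 7, 2, 5, 3, 7],
    "hired":      [0, 1, 0, 1, 1, 1, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# Disparate impact: ratio of favorable-outcome rates between groups (1.0 = parity).
# Statistical parity difference: difference in those rates (0.0 = parity).
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```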
Data and AI platform Databricks provides tools for building machine-learning models that warn developers when imbalances in data might throw a model off-kilter, enabling biases that could create discriminatory harms or negatively affect model accuracy. SparkBeyond, a company that provides a platform for building machine-learning models and finding patterns in data, also lets users check for bias in their data sets. However, some of these baked-in features are mere add-ons to primary functions of their core AI-building platforms, and rely on users to seek them out and actually use them.
Of course, businesses will not only seek out software to evaluate the inner workings of their AI. Traditional consulting companies are also creating services to assess AI.
Ernst & Young, for example, sells a service that turns responses to questions about AI systems into a score that quantifies risks. BSR does human rights assessments, but does not audit the bias or accuracy of the algorithm itself.
“We explore the potential human rights impacts that may arise from the deployment of AI/ML in context — for example, impacts on privacy, non-discrimination, freedom of expression, freedom of movement, freedom of association, security, access to culture, child rights and access to remedy,” said Allison-Hope.
Deloitte partners with Chatterbox Labs to operationalize the consultancy’s Trustworthy AI framework. The program is designed to test and monitor AI according to trust and fairness measures. Accenture provides an algorithmic assessment process that checks for disparities in potential outcomes of AI systems and monitors for future problems once a model is deployed; however, it is unclear whether the process involves a technical inspection. The company declined to comment for this story.
Even as pure-play startups gain ground, consultancies will see a business case for offering more traditional AI auditing services, said NYU’s Sloane. Because business decision-makers are increasingly concerned with potential regulatory requirements around AI, she said, “What we really should look for is the big players offering that as part of their expensive service packages. That’s going to be happening more under the radar.”
“There’s real worry that these approaches are easy to fudge,” said O’Sullivan. “There are many ways to define fairness, and a casual look at an algorithm without access to code or training data can easily miss the finer points on where these algorithms go wrong.”