A new wave of AI auditing startups wants to prove responsibility can be profitable

Businesses hate regulatory compliance, but love profits. AI auditing startups could help them accomplish both.

AI-focused regulations are piling up.

Illustration: Protocol

Being responsible hasn’t always been a top business priority. Now, some companies that audit AI systems for their potential to create unintended harms hope to sell businesses that build or use AI on the profit potential of doing the right thing.

And they may soon have new customers as AI-focused regulations pile up. New York City just passed a law requiring companies providing AI that helps make hiring decisions to obtain “bias audits” of their technology. Washington, D.C. could pass a bill that would force entities using algorithmic tools to determine employment, housing or loan eligibility to cough up algorithms and the data used to train them for auditing.

At the national level, the European Union is mulling legislation addressing the risks of artificial intelligence, and the Federal Trade Commission and U.S. lawmakers want to hold companies accountable for harms and discrimination created by AI systems.

Amid the steady stream of damning headlines exposing AI-gone-wrong, it’s no surprise a cottage industry of AI auditing-related services and technologies — including companies like Fiddler and Parity — is getting attention from businesses that are incorporating AI into their day-to-day operations.

However, it’s not just the threat of regulatory enforcement, reputational damage or the desire to use AI for good that could compel more businesses to seek out these tools. Business goals factor in too.

Often referred to as “responsible AI,” AI ethics-related concepts such as accountability, transparency, explainability and fairness are “becoming more and more a business consideration,” said Mona Sloane, senior research scientist at New York University’s Center for Responsible AI.

“It’s a sign of the times when AI is an enterprise citizen,” said Svetlana Sicular, a VP analyst at Gartner focused on data and AI. Interest in services evaluating the impacts of AI systems is building as AI becomes an integral component of the enterprise, she said. Last year, Gartner predicted that by 2023, all people hired for AI development and training work will be required to demonstrate expertise in responsible development of AI.

The momentum behind services that evaluate whether AI systems are making equitable decisions and satisfying other ethical goals could be strongest among the companies investing in and using AI, said Dunstan Allison-Hope, vice president at BSR, a consulting firm that helps clients assess AI systems according to human rights-related measures. To date, most of the human rights impact assessments the consultancy has conducted have been for clients that build and sell AI tools, but that will change, said Allison-Hope.

“The biggest growth I expect is AI assessment by companies using AI, as distinct from companies building AI,” he said. “A key decision-maker determining impact on human rights will not just be the tech company, but the retailer, bank, or agency deploying the [AI and machine learning].”

That push is already happening. Companies including American Express, General Motors, Nike and Walmart said they’d adopt scoring criteria developed by a new industry-led group called the Data & Trust Alliance to help reduce bias in algorithmic tools used to make hiring and workforce decisions.

Audits that pop the hood on 'volatile' AI

Hired’s algorithmic system helps employers find job candidates from diverse communities, and notifies job seekers if they are requesting salaries that are below the average for a position. Hired CTO Dave Walters said that before the NYC law passed, the company already intended to audit its systems for fairness and transparency in 2022.

The company has yet to choose an audit service for that, but Walters said he expects any effective audit to require his company to provide its algorithmic models and training data with proper security protections in place. “That third-party service is going to need to be able to see deep enough under the hood to understand what’s going on,” said Walters.

Some AI audit startups do want to go beyond using scoring criteria by poking around inside algorithmic tech. Like other companies in the nascent sector, Fiddler wants its software to become part of its customers’ machine-learning operations process, helping companies prevent unwanted discriminatory effects, explain how the AI they build makes decisions and avoid the risk of “model drift” — a slippery slope that can shift machine-learning systems away from their originally intended purposes as new data flows into them over time.
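Fiddler's internal methods aren't detailed here, but one standard way monitoring tools quantify drift is the Population Stability Index, which compares a feature's distribution at training time with its distribution in production. A minimal pure-Python sketch, with commonly cited rule-of-thumb thresholds (0.1 for moderate shift, 0.25 for significant shift) as an assumption:

```python
# A minimal sketch of one standard drift check, the Population
# Stability Index (PSI); Fiddler's actual method is not public.
# The 0.25 threshold below is a common rule of thumb.
import math
from collections import Counter

def psi(expected, actual, bins):
    """Compare two categorical distributions over the given bins."""
    e_counts, a_counts = Counter(expected), Counter(actual)
    score = 0.0
    for b in bins:
        # A small floor avoids log(0) for empty bins.
        e = max(e_counts[b] / len(expected), 1e-4)
        a = max(a_counts[b] / len(actual), 1e-4)
        score += (a - e) * math.log(a / e)
    return score

train = ["low"] * 70 + ["high"] * 30   # feature at training time
live = ["low"] * 40 + ["high"] * 60    # same feature in production
drift = psi(train, live, ["low", "high"])
print(drift > 0.25)  # True: shift large enough to flag for review
```

A monitoring system would run a check like this continuously per feature, alerting when the score crosses a threshold rather than waiting for downstream harms to surface.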

“AI is a volatile thing,” said Krishna Gade, founder and CEO of Fiddler. He alluded to the highly publicized flaws in Zillow’s home-buying algorithms, which led the company to purchase homes for more than they were worth. Not only did the company lose money and lay off staff as a result, but some argue the flawed AI may have exacerbated gentrification by fueling inflated home prices.

But in a situation like Zillow’s, the potential societal harms of its home-buying algorithm may have seemed less obvious than the immediate business harms. And in general, tech decision-makers might respond more readily to an appeal to traditional business considerations when it comes to AI auditing services, said Gade.

“Most CEOs don’t really know what is the immediate ROI for implementing ethical AI and responsible AI,” he said. So, he said he starts conversations by trying to convince prospective clients that monitoring their AI could create positive ROI. “Then we say, by the way, there’s also all these other benefits — you can reduce these reputation risks.”

Hired’s employer clients have recruitment diversity goals, so refining or auditing its algorithms to help customers diversify staff makes good business sense, said Walters. For companies that aren’t in the recruitment, housing or credit-lending industries, the business justifications for getting AI audits could be more subtle or manifest longer term, he said.

“I think that companies like Fiddler are going to have to do a good job of understanding the company they’re talking to and the industry they’re in and how to connect those dots for them,” he said.

Algorithmic auditing platform Parity also emphasizes the profit potential from evaluating AI systems for fairness. The company’s website states that when companies use its model risk-assessment auditing process, “customers and bottom line both benefit.”

Making systems more fair means making them work better, said Liz O’Sullivan, Parity’s CEO. "People sometimes forget that so-called 'responsible AI' is AI that simply works the way you intend it to. More accurate predictions about under-served populations can open new revenue streams by correcting misconceptions about the behavior of marginalized groups," said O’Sullivan.

“Even setting aside the legal and reputational risks of inadequate oversight, our clients are motivated to find techniques that produce models which thoughtfully balance risk mitigation with enhanced accuracy," she said.

Parity also digs into its customers’ tech rather than merely asking a set of questions about it, as some consultancies or other audit providers do. Parity asks clients to provide full access to their models, the code and the data feeding them, then it adjusts features of the tech, running various permutations of the models to find the ones with the most equitable outcomes. The end result is an algorithmic impact assessment, a report that could be provided to regulators or a third-party auditor.
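Parity's exact methodology isn't public, but the kind of comparison such an assessment runs can be sketched with one common fairness measure: score each candidate model variant by its demographic parity gap (the spread in positive-outcome rates across groups) and keep the most equitable one. Variant names and outcomes here are invented for illustration:

```python
# A hypothetical sketch of comparing model permutations on a
# demographic parity gap; Parity's real process is more involved.

def parity_gap(decisions):
    """decisions: list of (group, approved) pairs; gap in approval rates."""
    rates = {}
    for group in {g for g, _ in decisions}:
        approvals = [a for g, a in decisions if g == group]
        rates[group] = sum(approvals) / len(approvals)
    return max(rates.values()) - min(rates.values())

variants = {
    "baseline":        [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 4 + [("B", 0)] * 6,
    "reweighted":      [("A", 1)] * 7 + [("A", 0)] * 3 + [("B", 1)] * 6 + [("B", 0)] * 4,
    "feature_dropped": [("A", 1)] * 6 + [("A", 0)] * 4 + [("B", 1)] * 6 + [("B", 0)] * 4,
}
best = min(variants, key=lambda name: parity_gap(variants[name]))
print(best)  # feature_dropped: equal approval rates for both groups
```

A real assessment would weigh several fairness definitions at once, since demographic parity alone can conflict with other reasonable notions of equity.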

Venture capitalists are betting on the success of companies providing AI evaluation and monitoring tech. Fiddler grabbed $32 million in series B funding in June 2021. Truera, which provides a platform for explaining and monitoring AI models according to quality and reliability criteria, received $12 million in series A funding a year ago. Arthur, a company co-founded in 2018 by O’Sullivan, collected $15 million in series A funding in December 2020.

O’Sullivan said Parity’s technology can be used in conjunction with Arthur’s, which tracks performance of AI models including computer vision and natural-language processing models. Indeed, for companies to use services that monitor AI once it’s deployed, they’ll want it to connect with other technologies they already use. Fiddler, for example, can be integrated with tools such as Amazon’s SageMaker and the open-source TensorFlow software library, said Gade.

AI tech providers and consultancies get on board

The momentum behind pure-play AI auditing tech is driving existing AI tech vendors to build new responsible AI products or tack on new features to existing ones. AWS offers a tool called SageMaker Clarify as part of its larger AI-building SageMaker product suite. The system looks for potential bias during data preparation and after models are trained and deployed. IBM provides algorithms intended to mitigate distrust and inequity problems that can arise from AI, and offers an open-source tool for detecting bias in models called AI Fairness 360.

Data and AI platform Databricks provides tools for building machine-learning models that warn developers when imbalances in data might throw a model off-kilter, enabling biases that could create discriminatory harms or negatively affect model accuracy. SparkBeyond, a company that provides a platform for building machine-learning models and finding patterns in data, also lets users check for bias in their data sets. However, some of these baked-in features are mere add-ons to the platforms' core AI-building functions, and rely on users to seek them out and actually use them.
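Neither vendor's implementation is public, but a pre-training imbalance warning of this kind can be sketched in a few lines: flag any column whose least-common value falls below a share threshold. The 20% threshold and the column names are invented for illustration:

```python
# A hypothetical sketch of a data-imbalance warning; the real
# platform features are more sophisticated than this.
from collections import Counter

def imbalance_warnings(rows, columns, min_share=0.2):
    warnings = []
    for col in columns:
        counts = Counter(row[col] for row in rows)
        share = min(counts.values()) / sum(counts.values())
        if share < min_share:
            warnings.append(f"{col}: minority class share {share:.0%}")
    return warnings

training_rows = (
    [{"label": "approve", "region": "urban"}] * 90
    + [{"label": "deny", "region": "rural"}] * 10
)
print(imbalance_warnings(training_rows, ["label", "region"]))
```

The point of surfacing this before training is that a model fit on such skewed data can look highly accurate overall while performing poorly on the underrepresented group.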

Of course, businesses will not only seek out software to evaluate the inner workings of their AI. Traditional consulting companies are also creating services to assess AI.

Ernst & Young, for example, sells a service that turns responses to questions about AI systems into a score that quantifies risks. BSR does human rights assessments, but does not audit the bias or accuracy of the algorithm itself.

“We explore the potential human rights impacts that may arise from the deployment of AI/ML in context — for example, impacts on privacy, non-discrimination, freedom of expression, freedom of movement, freedom of association, security, access to culture, child rights and access to remedy,” said Allison-Hope.
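EY's scoring model is proprietary, but the questionnaire-to-score shape it describes can be sketched generically: each question carries a weight, each answer a severity from 0 (no risk) to 1 (high risk), and the score is their weighted sum. The questions, weights and answers below are all invented:

```python
# A generic, hypothetical questionnaire-scoring sketch; it is not
# EY's actual methodology.
QUESTIONS = {
    "Is training data audited for bias?": 0.3,
    "Can model decisions be explained?": 0.3,
    "Is there human review of high-stakes decisions?": 0.4,
}

def risk_score(answers):
    """answers: question -> severity in [0, 1]; weights sum to 1."""
    return sum(QUESTIONS[q] * severity for q, severity in answers.items())

score = risk_score({
    "Is training data audited for bias?": 0.0,               # yes, audited
    "Can model decisions be explained?": 0.5,                # partially
    "Is there human review of high-stakes decisions?": 1.0,  # no review
})
print(round(score, 2))  # 0.3*0 + 0.3*0.5 + 0.4*1.0 = 0.55
```

The appeal of this style of audit is that it needs no access to code or training data; the critique, as O'Sullivan notes later in this story, is that for the same reason it can miss where an algorithm actually goes wrong.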

Deloitte partners with Chatterbox Labs to operationalize the consultancy’s Trustworthy AI framework. The program is designed to test and monitor AI according to trust and fairness measures. Accenture provides an algorithmic assessment process that checks for disparities in potential outcomes of AI systems and monitors for future problems once a model is deployed; however, it is unclear whether the process involves a technical inspection. The company declined to comment for this story.

Even as pure-play startups gain ground, consultancies will see a business case for offering more traditional AI auditing services, said NYU’s Sloane. Because business decision-makers are increasingly concerned with potential regulatory requirements around AI, she said, “What we really should look for is the big players offering that as part of their expensive service packages. That’s going to be happening more under the radar.”

“There's real worry that these approaches are easy to fudge,” said O’Sullivan. “There are many ways to define fairness, and a casual look at an algorithm without access to code or training data can easily miss the finer points on where these algorithms go wrong.”

