Policy

Lawmakers want humans to check runaway AI. Research shows they’re not up to the job.

Policymakers want people to oversee — and override — biased AI. But research suggests there's little evidence humans are up to the task.


The recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

Photo: Jackal Pan/Getty Images

There was a time, not long ago, when a certain brand of technocrat could argue with a straight face that algorithms are less biased decision-makers than human beings — and not be laughed out of the room. That time has come and gone, as the perils of AI bias have entered mainstream awareness.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own. The problem is: These laws almost never stop to ask whether human beings are actually up to the job.

"These assumptions about human oversight are playing a really critical role in justifying the use of these tools," said Ben Green, a postdoctoral scholar at the University of Michigan and an assistant professor at the Gerald R. Ford School of Public Policy. "If it doesn't work, then we're failing to get any of the protections that are seen as essential for making the system acceptable to us at all."

In a new paper, Green, who has extensively studied the use of algorithms in parole and sentencing decisions, demonstrates how the recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

"The point is not to say: Let's just allow these algorithms to be used without the human oversight," Green said. "But if we're only comfortable with these algorithms because we have human oversight, we actually shouldn't be comfortable with these algorithms at all, because the human oversight doesn't work."

This interview has been lightly edited and condensed.

What got you thinking about this issue to begin with?

For the last several years, I've been doing experimental technical work, studying how people interact with algorithms when making predictions and decisions. A good chunk of the empirical findings that I'm drawing on in the paper come from research I've conducted over the last couple of years.

One of the starting points for me, several years ago, was thinking about this gap between how we evaluate algorithms — often just thinking about if they're accurate, if they're fair — and the actual mechanisms by which algorithms have impact. That is, this process where they're giving advice to a human, and then a human has to actually somehow interpret that information and decide whether and how to use it.

In doing that work, I uncovered a lot of issues with people's ability to identify errors and biases, and with how people respond to algorithms, and noticed a pretty significant disconnect between the empirical findings and the way that a lot of policies talked about this.

[Policies] are essentially just saying, "Hey, well, there's a human in the loop. So it's fine to use these risk assessments when making sentencing decisions." I wanted to really dig into this and see: What do the policies actually call for? And how do they fall short? Does anything actually work?

Before we walk through your findings in this paper, let's talk a little bit about what you have discovered in your more technical research on algorithms' impact.

The first paper really looked at how introducing risk assessments alters the predictions that people make. The primary finding was that people respond to risk assessments in biased ways. People are more likely to follow a recommendation to increase their estimate of risk when evaluating Black defendants, and more likely to follow a recommendation to decrease their estimate of risk when evaluating white defendants. So, even if we were to say, "OK, this algorithm might meet certain standards of fairness," the actual impacts of these algorithms might not satisfy those constraints when you think about how humans are going to respond.

The second study was an extension of that, looking at whether people are able to evaluate the quality of algorithmic predictions. We found that they weren't. People can't really do that job, which is central to the idea of people being able to determine which recommendations from an algorithm they should work with or not.

The final piece, which was just published, was shifting from predictions to the decision-making process, and looking at how risk assessments alter the underlying decision-making process that people follow. If they're shown a risk assessment, does that actually make judges more likely to weigh risk more heavily when making decisions? We must balance the desire to reduce risk with other interests around the liberty of defendants, and so on. Are we improving the accuracy of human prediction? Or are we actually making risk a more salient feature of decision-making?

We ran an experiment to test that and found that we're more in the latter camp. We're not simply altering people's predictions of risk. We're altering how people factor risk into their decisions, and essentially prompting them to weigh risk as a more important factor when making decisions.

In the paper about human oversight of algorithms, you walk through three different ways policies are trying to introduce some level of human oversight to the deployment of AI, and you argue each way is flawed. Walk me through those three ways and their flaws.

They're all somewhat overlapping and related. The first approach is to say: If a decision is based on solely automated processing, then we're going to either prohibit it entirely or require certain rights, like the ability to request human review afterward. The most notable example of this would be the European General Data Protection Regulation, which has an article dedicated to solely automated processing.

By drawing this really strict boundary, we're failing to capture a lot of the influences of algorithms that have actually generated the most significant controversy and demonstrated injustice. Most of the decisions that we're most concerned about already aren't made in a solely automated fashion. You could have a human play some relatively superficial role in the decision-making process, such that it's no longer solely automated. And if it doesn't count as solely automated decision-making, then you aren't subject to any of those regulations.

The second approach operates in some ways as a corollary to the first. It's saying: It's OK to use algorithms, as long as there's human discretion, and the human gets to make the final decisions. This is what we see, in particular, for a lot of the risk assessment tools used in the U.S.

But when you actually give people discretion to determine how they should use an algorithm, they don't do what you might want them to do with it. A lot of the research looks at how people override algorithms: How do people diverge from algorithmic predictions? And typically, they do that in sub-optimal ways. People diverge from algorithms in ways that actually make their predictions less accurate.

If the risk assessment says to detain someone, they'll generally follow that. If it says to release someone, they will override that in favor of detention much more frequently. Police who are supposed to be overseeing facial recognition predictions also do a really bad job of that. So all of the documentation we have about human oversight and human overrides suggests that they either defer to the tool when they shouldn't, or override the tools in typically detrimental ways.

The third category says: People might not understand the algorithm. So we really, really need to be sure that [the oversight] is meaningful. People should be able to understand how the algorithm works in some form that can help them determine when they should follow it or how to interpret it. The emphasis there is on explanations or algorithmic transparency.

The issue here really just builds on the issues of the second group. Yes, you can give people the ability to override the algorithm. But that doesn't necessarily help. Typically, people don't override algorithms in beneficial ways. Unfortunately, even explanations and transparency don't seem to improve things — and can actually make it worse. The explanations can make people trust the algorithm more, even if the algorithm shouldn't be trusted.

What are the alternatives, if human beings are not a sufficient safeguard?

It's not simply, "Oh, we can just turn from human oversight to something else." Human oversight plays a really fundamental role in justifying and legitimizing these tools. So we actually need to, given these failures, start from farther upstream and think about how we're even making decisions about when algorithms should be used at all.

We should be putting much more scrutiny on whether it's actually appropriate to use an algorithm in a given situation. Often, courts and policymakers will justify the use of low-quality algorithms by assuming that human review can account for their flaws, but I think we should be much more critical. And I think in many of these cases, we should be ready to say: This actually just isn't an algorithm that we trust. This isn't a decision where an algorithm is particularly well-suited to enhancing decision-making.

We should put much more of a burden on agencies to justify why it's appropriate to use an algorithm in a given situation. They should have to describe more proactively why this algorithm is going to improve decision-making or why it's appropriate to have an algorithm make this decision. And what is the quality of this algorithm? Is it actually one that we would trust with altering potentially high-stakes decisions? We just need to do much more proactive research of the actual human oversight or human-algorithm collaboration process.

Already, we're seeing policies that are calling for various types of evaluations of algorithms themselves, saying, "before you deploy the system, you have to run a test to show that the algorithm is accurate, and to show that it's fair." And I think that we should have similar types of tests that are required for the actual decision-making process. So if you're going to incorporate a pre-trial risk assessment into judicial decision making, there should be some sort of proactive assessment, not just of the pre-trial risk assessment itself, but also of how people or judges use the algorithm to make decisions.

Right now, we'll do evaluations after the fact. Two years down the line, we'll see that judges have been using this algorithm in all sorts of unexpected ways. And that's because we didn't properly do the homework up front.
