The (possibly dystopian) rise of the automated video interview

Companies are embracing automated video interviews to filter through floods of job applicants. But interviews with a computer screen raise big ethical questions and might scare off candidates.


Although automated interview companies claim to reduce bias in hiring, the researchers and advocates who study AI bias are these companies’ most frequent critics.

Photo: Johner Images via Getty Images

Applying for a job these days is starting to feel a lot like online dating. Job-seekers send their resume into portal after portal and a silent abyss waits on the other side.

That abyss is silent for a reason and it has little to do with the still-tight job market or the quality of your particular resume. On the other side of the portal, hiring managers watch the hundreds and even thousands of resumes pile up. It’s an infinite mountain of digital profiles, most of them from people completely unqualified. Going through them all would be a virtually fruitless task.

Enter the Tinders of corporate America. These services are the ones that made it so easy for anyone to apply for a job on the internet. But just like online dating, once the entire world is available for a match, you need to introduce some kind of filter to figure out who you should review first.

Most large companies use software to sort through resumes and cover letters, identifying likely candidates based on keywords, professed qualifications or even just where they went to college. But these services have taken their product a step further. Now, when some companies (from major financial institutions like J.P. Morgan to food-prep and retail chains) invite someone for an interview, they have no intention of showing up themselves.

Instead, these corporate Tinders give people an automated video interview, guiding the candidate through a conversation with their computer screen. The applicant stares at the webcam distortion of their face (instructed to emote normally like they would if speaking with an actual person), tries to explain why they want the job and then once more sends the information back into the abyss, often without being able to review their video first. The software will then produce a report and likely a ranking that will be used to determine if they get an interview with an actual person.

Automated resume and cover letter screening alone is no longer enough in a world where remote work is increasingly common and remote job applications are easier than ever. For hiring departments, automated video interview software makes whittling down the initial hiring pool infinitely easier. As an added bonus, the companies that make this software sell themselves as scientific and less biased than the flawed humans who run actual HR departments. The market is so fruitful that there are nearly endless options with similar services — among them HireVue, Modern Hire, Spark Hire, myInterview, Willo and Curious Thing. Entry-level college graduates in tech, banking and even consulting almost always get funneled through these systems. In March 2021, HireVue announced that its platform had hosted more than 20 million video interviews since its inception.

But easy, frictionless processes like these always have a catch. Most companies like to talk about hiring like they’re finding the right fit specifically for their workplace. By relying on automated video interviews, they willingly introduce a third party — another company with its own goals, preferences and biases — between themselves and their new hires. Someone or something else is making the initial decision that could make all the difference.

That pesky AI problem

All of these companies use AI buzzwords to sell their services and advertise their tools. Modern Hire calls its service an “AI-Powered Automated Interview Creator”; at HireVue, the words “science-backed” appear frequently in marketing materials, and a HireVue spokesperson told Protocol that its “assessments are designed by psychologists with evidence-based approaches.” Companies deploy machine learning in different ways; HireVue and Modern Hire use AI tools primarily to transcribe the interviews and then to evaluate and rank the interview text.

Although the companies claim to reduce bias in hiring, the researchers and advocates who study AI bias are these companies’ most frequent critics. They argue that most machine-learning tools aren’t properly audited or regulated and commonly recreate or enhance already existing biases, so opting to incorporate AI into the hiring system is knowingly making a choice to take that risk.

The FTC has warned companies against using algorithms that could be unfair or create adverse outcomes, according to Sara Geoghegan, a law fellow at the Electronic Privacy Information Center. In 2019, EPIC filed a complaint with the FTC alleging that HireVue was engaging in unfair and deceptive practices that violated AI standards by using facial recognition AI tools in its video-interview analysis.

Then, in 2021, HireVue removed the facial recognition tools from its system. “HireVue research, conducted early this year, concluded that for the significant majority of jobs and industries, visual analysis has far less correlation to job performance than other elements of our algorithmic assessment,” the company wrote about its decision. “We made the decision to not use any visual analysis in our pre-hire algorithms going forward. We recommend and hope that this decision becomes an industry standard.”

Federal and state regulators have also started to propose legislation that would restrict how these algorithms are used and require independent audits. New York City recently passed a bill that would require “bias audits” for algorithms used in hiring, and Washington, D.C.’s proposed Stop Discrimination by Algorithms Act of 2021 would set a strict list of requirements for companies wanting to use algorithms in employment settings like automated video interviews.

“We only score the words people say, as transcribed, not the way they sound or the way they look. That is a hard line that we draw and have always drawn; my mentality and our mentality as a company is that we should only be scoring information that candidates consciously provide to us,” said Eric Sydell, executive vice president of innovation at Modern Hire. “There are organizations that use that information. I think it’s wrong. I only give you express permission to use my responses; that’s the right way that we need to proceed.”

For the systems’ critics, it’s difficult to actually prove why someone has been filtered out of the system. “What’s particularly tricky about this — it’s really hard to find people who have experienced an adverse outcome because of these systems, because you don’t know. If I do a little 90-second or 60-second video of myself, and I say, ‘Hi, I’m a lawyer and I do tech stuff,’ I won’t know if I don’t get a job if it’s because I wasn’t qualified or if it’s because a system made a call in a matter of seconds, and now I’m subject to that system,” Geoghegan said.

The companies that make these systems argue that hiring is already such a flawed and biased process that taking the human interviewer out of the initial screening actually makes it fairer. When people conduct unstructured interviews, they almost always hire the people they like, not necessarily the ones best qualified for the job. One striking example: a University of Texas study found that after its medical school had to accept students it had initially rejected based on interviews, the rejected students and the originally accepted ones performed the same in school.

“The hiring industry and the hiring process itself has long been broken,” Sydell said. “This is a challenge that algorithms and modern science are suited to help solve, and help make scientific sense of it — which pieces about a candidate are predictive about your success on the job.”

“We are humans; the way our brains process information is very biased. We are always looking for people who are similar to ourselves; we weed out other people who might be different,” he said.

Problem whack-a-mole

Companies implement these systems because they have commercial and practical hiring needs they must meet. “It’s very difficult for them to go through this mass of applicants. They are indispensable; they couldn’t cope without them,” said Zahira Jaser, a professor at the University of Sussex Business School. “Though I am quite critical, I also don’t see a way out of it. I think this is going to become a bigger and bigger phenomenon.”

Jaser studies how people experience automated video interviews and how they affect hiring, not the AI itself. Her research has found that most people who undergo these video interviews don’t understand how the system works or what they’re getting themselves into, and she urges employers to adopt a “glass-box” approach that provides as much transparency as possible about how their interviews will be processed and screened. At the very least, candidates need to understand that software, not a person, will be analyzing the text of what they say to a webcam. She also recommends employers create simple systems where candidates can see what successful interviews look like and why, and that they give rejected candidates feedback on why they were screened out and how they can improve.

Without some of these changes, companies could run afoul of laws like the Americans with Disabilities Act. Federal regulators just released guidance in May that explains how the use of algorithms could violate the ADA. One of the key recommendations? Applicants need to understand the system and have straightforward ways to ask for alternative interview methods if they have a disability that could interfere with how the algorithm assesses their interview.

Smaller firms also need to consider whether the video interview might turn away potential candidates who see the system as offensive, and to develop easy alternative interview methods. One job applicant for a major media firm told Protocol that he immediately rescinded his application when the firm asked him to complete a Modern Hire interview. “It’s just the lack of transparency, and the data, and the laziness as well. It wouldn’t be that hard to just ask for a 20-minute chat. The person I actually want to talk to is the hiring manager,” he said.

“Why do they feel their time is more valuable? And this was for a mid- to relatively high-up position; I can maybe understand it for graduates, where you are receiving thousands of applications, maybe it’s a good tool to filter out from literally thousands. But even that is questionable in my opinion.”

Jaser sees that same sentiment from the people she has interviewed in her research.

“The technology doesn’t care about the human. So effectively it’s very exploitative of the human,” she said. “They are extracting what’s of interest to the employer in a very narrow way, forgetting almost all of humanity. It’s a very narrow way of judging. There is no relationship built.”

