Workplace

The (possibly dystopian) rise of the automated video interview

Companies are embracing automated video interviews to filter through floods of job applicants. But interviews with a computer screen raise big ethical questions and might scare off candidates.


Although automated interview companies claim to reduce bias in hiring, the researchers and advocates who study AI bias are these companies’ most frequent critics.

Photo: Johner Images via Getty Images

Applying for a job these days is starting to feel a lot like online dating. Job-seekers send their resume into portal after portal and a silent abyss waits on the other side.

That abyss is silent for a reason, and it has little to do with the still-tight job market or the quality of your particular resume. On the other side of the portal, hiring managers watch hundreds and even thousands of resumes pile up. It’s an endless mountain of digital profiles, most of them from people who are completely unqualified, and reviewing them all by hand would be a fruitless task.

Enter the Tinders of corporate America. These services are the ones that made it so easy for anyone to apply for a job on the internet. But just like online dating, once the entire world is available for a match, you need to introduce some kind of filter to figure out who you should review first.

Most large companies use software to sort through resumes and cover letters, identifying likely candidates based on keywords, professed qualifications or even just where they went to college. But these services have taken their product a step further. Now, when some companies (ranging from major financial institutions like J.P. Morgan to food-prep and retail chains) invite someone for an interview, no one from the company intends to show up for it.
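The keyword-based resume screening described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual logic; the keywords, weights and sample resumes are all hypothetical.

```python
def score_resume(text: str, keywords: dict[str, int]) -> int:
    """Sum the weights of every keyword found in the resume text."""
    lowered = text.lower()
    return sum(weight for word, weight in keywords.items() if word in lowered)

# Hypothetical keywords and weights a hiring team might configure.
KEYWORDS = {"python": 3, "sql": 2, "project management": 2, "b.s.": 1}

resumes = {
    "alice": "B.S. in CS, five years of Python and SQL experience",
    "bob": "Retail associate with strong customer service skills",
}

# Rank applicants by score; only the top of the list reaches a human.
ranked = sorted(resumes, key=lambda name: score_resume(resumes[name], KEYWORDS),
                reverse=True)
```

Real systems are more elaborate, but the shape is the same: a fixed scoring rule turns each application into a number, and the number decides who a person ever sees.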

Instead, these corporate Tinders give people an automated video interview, guiding the candidate through a conversation with their computer screen. The applicant stares at the webcam distortion of their own face (instructed to emote as naturally as they would with an actual person), tries to explain why they want the job and then once more sends the recording back into the abyss, often without being able to review the video first. The software then produces a report, and likely a ranking, that will be used to determine whether they get an interview with an actual person.

Automated resume and cover letter screening alone is no longer enough in a world where remote work is increasingly common and remote job applications are easier than ever. For hiring departments, automated video interview software makes whittling down the initial hiring pool infinitely easier. As an added bonus, the companies that make this software sell themselves as scientific and less biased than the flawed humans who run actual HR departments. The market is so fruitful that there are nearly endless options with similar services — among them HireVue, Modern Hire, Spark Hire, myInterview, Humanly.io, Willo and Curious Thing. Entry-level college graduates in tech, banking and even consulting almost always get funneled through these systems. In March 2021, HireVue announced that its platform had hosted more than 20 million video interviews since its inception.

But easy, frictionless processes like these always have a catch. Most companies like to talk about hiring like they’re finding the right fit specifically for their workplace. By relying on automated video interviews, they willingly introduce a third party — another company with its own goals, preferences and biases — between themselves and their new hires. Someone or something else is making the initial decision that could make all the difference.

That pesky AI problem

All of these companies use AI buzzwords to sell their services and advertise their tools. Modern Hire calls its service an “AI-Powered Automated Interview Creator”; at HireVue, the words “science-backed” appear frequently in marketing materials, and a HireVue spokesperson told Protocol that its “assessments are designed by psychologists with evidence-based approaches.” Companies deploy machine learning in different ways; HireVue and Modern Hire use AI tools primarily to transcribe the interviews and then to evaluate and rank the interview text.
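The transcribe-then-score pipeline described above can be sketched roughly as follows. The scoring function and "competency" terms here are invented for illustration and bear no relation to any vendor's actual models; a real system would run a speech-to-text model to produce the transcript.

```python
from dataclasses import dataclass


@dataclass
class Interview:
    candidate: str
    transcript: str  # produced by a speech-to-text step in a real system
    score: float = 0.0


def score_transcript(transcript: str) -> float:
    """Toy scorer: fraction of words matching hypothetical 'competency' terms."""
    traits = ("team", "deadline", "customer")
    words = transcript.lower().split()
    return sum(words.count(t) for t in traits) / max(len(words), 1)


def rank_interviews(interviews: list[Interview]) -> list[Interview]:
    """Score each transcript, then sort best-first for human review."""
    for iv in interviews:
        iv.score = score_transcript(iv.transcript)
    return sorted(interviews, key=lambda iv: iv.score, reverse=True)
```

Even in this toy form, the design choice is visible: once the video becomes text and the text becomes a number, everything the candidate did on camera that isn't captured by the scoring rule simply disappears from the decision.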

Although the companies claim to reduce bias in hiring, the researchers and advocates who study AI bias are these companies’ most frequent critics. They argue that most machine-learning tools aren’t properly audited or regulated and commonly recreate or amplify existing biases, so incorporating AI into the hiring process is a knowing choice to take on that risk.

The FTC has warned companies against using algorithms that could be unfair or create adverse outcomes, according to Sara Geoghegan, a law fellow at the Electronic Privacy Information Center. In 2019, EPIC filed a complaint with the FTC alleging that HireVue was engaging in unfair and deceptive practices that violated AI standards by using facial recognition AI tools in its video-interview analysis.

Then, in 2021, HireVue removed the facial recognition tools from its system. “HireVue research, conducted early this year, concluded that for the significant majority of jobs and industries, visual analysis has far less correlation to job performance than other elements of our algorithmic assessment,” the company wrote about its decision. “We made the decision to not use any visual analysis in our pre-hire algorithms going forward. We recommend and hope that this decision becomes an industry standard.”

Federal and state regulators have also started to propose legislation that would restrict how these algorithms are used and require independent audits. New York City recently passed a bill that would require “bias audits” for algorithms used in hiring, and Washington, D.C.’s proposed Stop Discrimination by Algorithms Act of 2021 would set a strict list of requirements for companies that want to use algorithms in employment settings like automated video interviews.
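Bias audits of the kind these laws contemplate typically lean on disparate-impact metrics such as the EEOC's "four-fifths rule" of thumb: the selection rate for any group should be at least 80% of the highest group's rate. A minimal version of that check, with made-up group names and counts, might look like this:

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> bool:
    """outcomes maps group -> (number selected, number of applicants).

    Returns True if every group's selection rate is at least 80% of the
    highest group's rate (the EEOC four-fifths rule of thumb).
    """
    rates = {group: selected / total for group, (selected, total) in outcomes.items()}
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())

# Hypothetical audit data: 50 of 100 group_a applicants advanced, 30 of 100 group_b.
audit = {"group_a": (50, 100), "group_b": (30, 100)}
```

Here `four_fifths_check(audit)` fails, since group_b's 30% rate is below 80% of group_a's 50%. Actual audits examine far more than this one ratio, but this is the arithmetic at their core.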

“We only score by the way the words people say that are transcribed, not the way they sound or the way they look. That is a hard line that we draw and have always drawn; my mentality and our mentality as a company is that we should only be scoring information that candidates consciously provide to us,” said Eric Sydell, the executive vice president of Innovation at Modern Hire. “There are organizations that use that information. I think it’s wrong. I only give you express permission to use my responses; that’s the right way that we need to proceed.”

For the systems’ critics, it’s difficult to actually prove why someone has been filtered out of the system. “What’s particularly tricky about this — it’s really hard to find people who have experienced an adverse outcome because of these systems, because you don’t know. If I do a little 90-second or 60-second video of myself, and I say, ‘Hi, I’m a lawyer and I do tech stuff,’ I won’t know if I don’t get a job if it’s because I wasn’t qualified or if it’s because a system made a call in a matter of seconds, and now I’m subject to that system,” Geoghegan said.

The companies that make these systems argue that hiring is already such a flawed and biased process that taking the human interviewer out of the initial screening makes it fairer. When people conduct unstructured interviews, they tend to hire the people they like, not necessarily the ones best qualified for the job. One striking example: a University of Texas study found that after its medical school was required to accept students it had initially rejected based on interviews, the rejected students performed just as well in school as the originally accepted ones.

“The hiring industry and the hiring process itself has long been broken,” Sydell said. “This is a challenge that algorithms and modern science are suited to help solve, and help make scientific sense of it — which pieces about a candidate are predictive about your success on the job.”

“We are humans; the way our brains process information is very biased. We are always looking for people who are similar to ourselves; we weed out other people who might be different,” he said.

Problem whack-a-mole

Companies implement these systems because they have commercial and practical hiring needs they must meet. “It’s very difficult for them to go through this mass of applicants. They are indispensable, they couldn’t cope without them,” said Zahira Jaser, a professor at the University of Sussex Business School. “Though I am quite critical, I also don’t see a way out of it. I think this is going to become a bigger and bigger phenomenon.”

Jaser studies how people experience automated video interviews and how the interviews affect hiring, not the AI itself. Her research has found that most people who undergo these video interviews don’t understand how the system works or what they’re getting themselves into. She urges employers to adopt a “glass-box” approach, providing as much transparency as possible about how interviews will be processed and screened: at the very least, candidates need to understand that software, not a person, will analyze the text of what they say to a webcam. She also recommends that employers build simple systems showing candidates what successful interviews look like and why, and that they give rejected applicants feedback on why they were turned down and what they can do to improve.

Without some of these changes, companies could run afoul of laws like the Americans with Disabilities Act. Federal regulators just released guidance in May that explains how the use of algorithms could violate the ADA. One of the key recommendations? Applicants need to understand the system and have straightforward ways to ask for alternative interview methods if they have a disability that could interfere with how the algorithm assesses their interview.

Smaller firms also need to consider whether the video interview might turn away candidates who find the system offensive, and to develop easy alternative interview methods. One job applicant for a major media firm told Protocol that he immediately rescinded his application when the firm asked him to complete a Modern Hire interview. “It’s just the lack of transparency, and the data, and the laziness as well. It wouldn’t be that hard to just ask for a 20-minute chat. The person I actually want to talk to is the hiring manager,” he said.

“Why do they feel their time is more valuable? And this was for a mid-relatively high-up position; I can maybe understand it for graduates where you are receiving thousands of applications, maybe it’s a good tool to filter out from literally thousands. But even that is questionable in my opinion.”

Jaser sees that same sentiment from the people she has interviewed in her research.

“The technology doesn’t care about the human. So effectively it’s very exploitative of the human,” she said. “They are extracting what’s of interest to the employer in a very narrow way, forgetting almost all of humanity. It’s a very narrow way of judging. There is no relationship built.”
