Workplace

You just hired a deepfake. Get ready for the rise of imposter employees.

New technology — plus the pandemic remote work trend — is helping fraudsters use someone else’s identity to get a job.


Not all job applicants are who they claim to be.

Illustration: z_wei/iStock/Getty Images Plus; Protocol

Mike Elgan is a journalist, opinion columnist and author.

Before COVID-19, job interviews took place in person and new hires worked in the office, for the most part.

But with remote work came an increase in remote hiring, from the job application to onboarding and everything in between. Many employees have never been in the same room as their employers and co-workers, and that has opened the door for a rise in imposter employees.

The FBI is concerned; you should be too.

Lies, spies and deepfake video

Companies are increasingly complaining to the FBI about prospective employees who use real-time deepfake video and audio in remote interviews, along with stolen personally identifiable information (PII), to land jobs at American companies.

One likely source of that PII is fake job postings, which let fraudsters harvest candidates’ information, resumes and more, according to the FBI.

Deepfake video sounds advanced. But shady job candidates don’t need exotic or expensive hardware or software to impersonate someone on a live video call — only a photo of the fake person. Consumer products like Xpression Camera enable fraudsters to upload someone’s picture and use their face during a live video interview.

The FBI points out that such deepfake video calls often fail, as the “actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking.”

In other words, dishonest job applicants would like to exploit deepfake technology in remote hiring, but the technology isn’t there yet. Soon, though, deepfake audio and video may be good enough to pass for the real thing.

And it’s not just deepfake video: You can clone someone’s voice with just a short audio sample and a publicly available tool on GitHub. It’s unlikely that a cybercriminal would land a job using a deepfake audio clone, but attackers can (and do) use cloned human voices for workplace phishing attacks.

What imposters want

The main drivers appear to be money, espionage, access to company systems and unearned career advancement.

Many of the job openings sought by these imposters include “information technology and computer programming, database, and software related job functions. Notably, some reported positions include access to customer PII, financial data, corporate IT databases and/or proprietary information,” according to an alert posted June 28 by the FBI’s Internet Crime Complaint Center. The perfect jobs for spies.

Some imposter candidates actually work for the North Korean government, according to a statement by the FBI and the U.S. State and Treasury Departments. Because of U.S. sanctions, North Koreans are ineligible for jobs at American companies. (Companies that employ North Koreans can be fined roughly $330,000 per violation.) So the North Korean government sends people to work abroad as imposters in exchange for most of their salaries, or its spies take jobs under false identities in order to steal secrets. Some North Koreans have even used their real identities while claiming to be located outside North Korea.

The problem of imposter employees spans a spectrum, from exaggerating experience to lying about credentials and personal details to claiming to be an entirely different person. And every point on that spectrum is growing.

Glider AI’s “The Future of Candidate Evaluation” report found that what they call “candidate fraud” has nearly doubled — a 92% increase — since before the pandemic.

In addition to the imposter employee frauds already reported, it’s easy to imagine other scams that take advantage of new technology and remote work.

Malicious cyberattackers could get hired under stolen credentials in order to gain unauthorized access to sensitive data or systems inside companies. A skilled hacker may actually have the IT skills to get the job, and doing so may prove to be a relatively easy act of social engineering.

The bottom line is that our old habits for verifying employees — namely, interacting with them and recognizing who they are — are increasingly unreliable in the face of remote work and new technology that enables people to fake their appearance, voice and identity.

How to avoid hiring imposters

Remote work is here to stay, and it’s time to revisit and revamp hiring. Here are some tips to bear in mind.

  • Include real identity verification before hiring, and make sure identity matches background screening. (Don’t assume your background provider is verifying identity.)
  • Asking for a driver’s license or passport can lead to a discrimination lawsuit if the candidate isn’t hired — they can claim discrimination based on age, health or country of birth. Request this information only after you’re certain you’ll hire.
  • Know your state’s laws to find out what’s allowed in terms of biometric data collection.
  • If you’re doing background checks and identity verification on remote hires, do the same for in-office hires to avoid discrimination.
  • Consider abandoning all-remote hiring in favor of in-person interviews, even for remote staff. And bring in remote staff for in-house team building quarterly or annually.
  • Rely more on skills assessment and testing for technical positions rather than resume-based claims of experience, certifications and education. Verify identities at the point of testing and follow up on test results with a post-test interview. Imposters are likely to seek employment elsewhere if they have to prove their qualifications.
  • Take extra care with the hiring of IT people and others who will gain access to email systems, passwords, business secrets, physical security systems and other juicy targets for cyberattack. Do thorough background checks and criminal records checks and verify identity throughout the hiring and onboarding process.
  • Embrace AI fraud detection to evaluate resumes and job candidates. Fraud detection has been used for years in banking, insurance and other fields, and is slowly being applied to hiring.
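To make the last tip concrete: much of that fraud detection boils down to anomaly scoring, flagging applications whose characteristics don’t look like the rest of the pool. Here is a minimal, hypothetical sketch using an off-the-shelf anomaly detector. The features and numbers are invented for illustration; real hiring-fraud tools are far more sophisticated and use vendor-specific signals.

```python
# Toy illustration of anomaly scoring for job applications, in the spirit
# of the fraud-detection techniques long used in banking and insurance.
# All feature choices here are hypothetical, not any vendor's actual model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-application features: years of experience, number of
# listed certifications, and account age in days at time of application
# (fraudulent applications often come from freshly created accounts).
normal = rng.normal(loc=[8, 3, 200], scale=[3, 1, 60], size=(200, 3))
suspicious = np.array([[25.0, 12.0, 1.0]])  # implausible resume, brand-new account
applications = np.vstack([normal, suspicious])

model = IsolationForest(random_state=0).fit(applications)
scores = model.decision_function(applications)  # lower score = more anomalous

# The implausible application should rank among the most anomalous.
is_flagged = scores[-1] < np.percentile(scores, 5)
print(is_flagged)
```

A score below a chosen percentile would route the application to a human reviewer rather than auto-reject it, since anomaly detectors flag unusual candidates, not proven fraudsters.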

The new world of remote work calls for a new approach to hiring. It’s time to rethink your HR practices to make sure the people you’re hiring and employing are who they say they are — and not imposters.
