Power

Stopping fake accounts is a cat-and-mouse game. Can Facebook win with AI?

The social network is in a constant power struggle with bad actors. Increasingly complex machine learning gives it an edge — for now.

A screenshot of Mark Zuckerberg's Facebook account

Even humans can have a hard time telling fake Facebook accounts from real ones. The company's newly detailed AI draws on a mountain of data to help it tell the difference.

Photo illustration: Rafael Henrique/SOPA Images/LightRocket via Getty Images

Bochra Gharbaoui pointed to four Facebook profiles on a screen in the social network's London headquarters and asked a seemingly simple, but fundamentally important, question: Which is fake?

I didn't know.

Something looked a little off about each of them. One used an overtly raunchy profile picture. A second appeared to be the account of a cat. A third described a toy dog. And another was oddly detailed but didn't have an image. But all those kinds of profiles exist among my Facebook acquaintances. Who was I to call them fake?

"You basically just told us what all of our research suggests," said Gharbaoui, a data science manager on Facebook's community integrity team. "Which is that when people say fake, they often mean suspicious."

Now, I don't like to brag, but … I am a human. And humans tend to be quite a lot better than computers at complex reasoning and dealing with ambiguities like the ones raised by these possibly fake accounts. So if I struggled to answer Gharbaoui's question, one starts to understand why the algorithms Facebook has pointed at the problem might, too. (By the way, all of the accounts were fictional, but each could have set off alarms for different reasons. See? I told you it was difficult.)

Against that backdrop of uncertainty, over the past few years the company has developed a new machine learning system, called Deep Entity Classification, to detect convincing fake accounts that make it onto the platform. The algorithm studies 20,000 features drawn from the users, groups, pages and other entities each account is connected to in order to establish whether it's genuine. That's something no human could ever do, and the system has already been used to take down hundreds of millions of accounts that violate the company's terms of service.

The question: Is it enough?

There's fake accounts. Then there's fake accounts.

The threat of fake accounts on social media platforms is real. They "can be used for so much bad or evil," said Max Heinemeyer, director of threat hunting at Darktrace, which specializes in machine learning approaches for cybersecurity. That could mean generating spam, running scams, inciting violence, organizing terrorism or other deeply problematic behavior.

But for a company like Facebook, every decision it makes to disable an account is high-stakes. Getting it wrong "essentially means that we are denying people access to this platform," Gharbaoui said, so it has invested in several layers of analysis to root out problem accounts.

In the first instance, it blocks millions of attempts to create accounts every day, Gharbaoui said, using a machine learning model designed to process a high volume of information and make rapid decisions. Facebook won't describe the precise features that can lead to blocked signups, arguing that doing so would hand bad actors too much information, but signals like the IP address of the request and the volume of signup attempts coming from that location are likely among those considered.
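
Facebook won't share its actual signals or model, but the kind of fast, high-volume check described above can be pictured with a minimal sketch like the one below. The feature names, thresholds and disposable-email domain are hypothetical illustrations, not Facebook's real criteria.

```python
# Hypothetical sketch of a fast signup-time check; the features, thresholds
# and "disposable" email domain below are illustrative, not Facebook's.
from collections import Counter

ip_signup_counts = Counter()  # recent signup attempts seen per IP address


def score_signup_attempt(ip_address: str, email_domain: str) -> float:
    """Return a crude risk score in [0, 1] for a single signup attempt."""
    ip_signup_counts[ip_address] += 1
    score = 0.0
    if ip_signup_counts[ip_address] > 50:  # unusually many attempts from one IP
        score += 0.6
    if email_domain == "throwaway.example":  # placeholder disposable-email domain
        score += 0.3
    return min(score, 1.0)


# Simulate a burst of signups from a single IP address.
for _ in range(60):
    risk = score_signup_attempt("203.0.113.7", "throwaway.example")
print("block signup" if risk > 0.7 else "allow signup")
```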

Meanwhile, many more accounts — in fact, what Gharbaoui describes as the "vast majority" of the 1.7 billion disabled in the third quarter of last year — are also caught by fast, high-volume machine learning algorithms before they gain broad access to the platform. Again, Facebook won't describe what leads to such disablement, but it could be, say, a pattern of initial behavior that has been repeated by many thousands of other accounts in the past — a telltale sign that an account is controlled by a bot.
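
One way to picture that kind of repeated-behavior signal is to fingerprint each new account's opening actions and flag fingerprints that recur across thousands of accounts. The sketch below is a hypothetical illustration, not Facebook's method; the action format and threshold are assumptions.

```python
# Hypothetical sketch: flag accounts whose first actions exactly repeat a
# pattern already seen from thousands of other accounts. The action format
# and threshold are illustrative assumptions.
import hashlib
from collections import defaultdict

pattern_counts = defaultdict(int)  # how many accounts share each fingerprint
REPEAT_THRESHOLD = 5000


def behavior_fingerprint(actions: list[str]) -> str:
    """Hash an account's initial action sequence into a comparable fingerprint."""
    return hashlib.sha256("|".join(actions).encode()).hexdigest()


def looks_automated(actions: list[str]) -> bool:
    """True once the same opening behavior has been seen from too many accounts."""
    fingerprint = behavior_fingerprint(actions)
    pattern_counts[fingerprint] += 1
    return pattern_counts[fingerprint] > REPEAT_THRESHOLD


# Example: the same scripted opening moves, repeated by a botnet of accounts.
scripted_start = ["add_friend:page_admin", "join_group:deals", "post_link:spam.example"]
flags = [looks_automated(scripted_start) for _ in range(REPEAT_THRESHOLD + 1)]
print(flags[-1])  # True: this behavior pattern has now been seen too many times
```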

Even with those protections in place, though, many accounts still sneak through. And perhaps I shouldn't feel too bad about my own ineptitude, because the quality of fake accounts on social media platforms has improved dramatically in recent years.

Today's more advanced approaches to fake account creation use machine learning to generate increasingly realistic profiles, said Shuman Ghosemajumder, global head of artificial intelligence systems at F5 Networks. They can create convincing-sounding names and biographies, and even entirely synthetic images that are almost impossible to distinguish from genuine photographs of real humans.

This situation is born out of necessity on the part of bad actors, according to Heinemeyer: If a bad actor's business model is based on creating fake accounts to, for example, scam people, they're damn sure going to try to learn how to beat the systems that block their fake accounts by creating increasingly realistic spoofs. It makes the situation harder to deal with.

"Where Facebook has a great advantage is knowing what organic activity looks like in its social graph," Ghosemajumder said.

20,000 features under the hood

The social network has tapped that knowledge to build Deep Entity Classification, the machine learning model that it claims has helped it make a big advance in how many of those convincing fake accounts it can root out.

Instead of studying direct properties of an account, like its name or how many friends it has — attributes that the user has direct control over — DEC studies what Facebook calls "deep features." These are properties of the users, groups, pages and other entities that the account is linked to, which are much harder, if not impossible, for the user to directly control. And it looks not just at those entities, but also at the ones one further hop along the social graph — it stops there to limit the computational overhead of the model. Still, that creates a bewildering number of potential features to study. Currently, 20,000 are used for DEC's decision-making.
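
Facebook hasn't published its exact feature set, but the idea of deep features can be sketched as aggregations computed over a two-hop neighborhood of the social graph. In the hypothetical example below, the graph representation, attribute names and aggregation functions are all assumptions made for illustration.

```python
# Hypothetical sketch of "deep features": aggregate attributes of the entities
# an account is linked to, out to two hops and no further. The graph layout,
# attribute names and aggregations are illustrative assumptions.
import statistics


def neighborhood(graph: dict, node: str, hops: int = 2) -> set:
    """Collect every entity within `hops` steps of `node`, excluding the node itself."""
    frontier, seen = {node}, {node}
    for _ in range(hops):
        frontier = {n for f in frontier for n in graph.get(f, []) if n not in seen}
        seen |= frontier
    return seen - {node}


def deep_features(graph: dict, attrs: dict, node: str) -> dict:
    """Aggregate each numeric attribute over the account's two-hop neighborhood."""
    features = {}
    neighbors = neighborhood(graph, node)
    for attr in ("account_age_days", "friend_count"):  # illustrative attributes
        values = [attrs[n][attr] for n in neighbors if attr in attrs.get(n, {})]
        if values:
            features[f"{attr}_mean"] = statistics.fmean(values)
            features[f"{attr}_min"] = min(values)
            features[f"{attr}_max"] = max(values)
    return features


# Tiny example graph: the suspect account, two friends, and a group one friend joined.
graph = {"suspect": ["friend_a", "friend_b"], "friend_a": ["group_x"]}
attrs = {
    "friend_a": {"account_age_days": 12, "friend_count": 3},
    "friend_b": {"account_age_days": 9, "friend_count": 4},
    "group_x": {"account_age_days": 2000, "friend_count": 0},
}
print(deep_features(graph, attrs, "suspect"))
```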

The system is then trained on a data set of accounts that have been labeled as fake or real in the past. Unlike most machine learning algorithms, though, it uses two pools of data: high-precision labels assigned by human security experts, along with much larger amounts of lower-precision labels created automatically by other algorithms the company uses. Facebook says the new algorithm is first roughly trained on millions of examples of the lower-precision data, before being fine-tuned on hundreds of thousands of examples of the higher-precision data.
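
That two-stage recipe, a rough first pass over a large pool of noisy machine-generated labels followed by fine-tuning on a smaller pool of expert labels, can be sketched with off-the-shelf tools. The example below uses synthetic data and a simple linear classifier purely for illustration; it is not Facebook's model.

```python
# Hypothetical sketch of two-stage training: a rough pass over a large pool of
# lower-precision automated labels, then fine-tuning on a small pool of
# high-precision human labels. All data here is synthetic; this is not DEC.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 20  # stand-in for DEC's roughly 20,000 deep features

# Large, noisy pool: labels produced by other automated systems.
X_noisy = rng.normal(size=(100_000, n_features))
y_noisy = (X_noisy[:, 0] + rng.normal(scale=1.0, size=100_000) > 0).astype(int)

# Small, precise pool: labels assigned by human security experts.
X_precise = rng.normal(size=(5_000, n_features))
y_precise = (X_precise[:, 0] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.partial_fit(X_noisy, y_noisy, classes=[0, 1])  # rough first pass
for _ in range(10):  # fine-tune on the expert-labeled pool
    model.partial_fit(X_precise, y_precise)

print("accuracy on expert labels:", model.score(X_precise, y_precise))
```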

The model is also frequently retrained on data gathered from across the social network, allowing a new version to "ship many times a day," said Daniel Bernhardt, an engineering manager on the company's community integrity team.

How's it working out? So far, DEC has been responsible for the identification and deactivation of more than 900 million fake accounts over the past two years, according to Facebook.

A cat-and-mouse game

The levels of nuance and complexity provided by complex machine learning models like this "significantly raise the bar" that bad actors must pass to continue using fake accounts, Ghosemajumder said. But the bar is not raised to impossible heights — and bad actors can always learn to jump higher.

"It will always be a cat-and-mouse game," said Zubair Shafiq, an assistant professor in the department of computer science at the University of Iowa. That's because "you have an active attacker, who changes its behavior."

It's not that bad actors are necessarily able to reverse-engineer a system like the one Facebook has developed. Instead, it's a process of trial-and-error. "They will tweak their approach on intuition," Shafiq said. "And then after five or 10 tries, something might work."

Facebook's Bernhardt likens this to the way a biological virus mutates. "All the virus needs is like one or two mutations in order to make it past an existing defense system," he said. So it's Facebook's job to put enough defenses in place that even those extra mutations don't allow bad actors to fool its systems.

Security experts disagree on whether they think it's possible to keep those defenses improving beyond the capabilities of bad actors in the future.

"You find yourself in a war of algorithms," Heinemeyer said. As machine learning becomes more ubiquitous, he argued, it will be harder for companies to rely on their in-house expertise to keep ahead.

But Ghosemajumder likens the situation of fake accounts on social media platforms to that of spam email. It will never be a solved problem, but it could be solved enough to live with. "Most people don't feel the effect of spam now in the same way they did 15 years ago," he said. "I think we have the technology to be able to get ahead of this problem," he added. "It's really just about making the right investments and performing the right R&D."

For its part, Facebook knows this isn't a problem that it's going to solve and move on from. "We will see quite fast and quite, you know, robust reactions" from bad actors every time its fake account defenses are upgraded, said Facebook's Bernhardt. "That's what the team basically comes into work on every day."
