
Stopping fake accounts is a cat-and-mouse game. Can Facebook win with AI?

The social network is in a constant power struggle with bad actors. Increasingly complex machine learning gives it an edge — for now.


Even humans can have a hard time telling fake Facebook accounts from real ones. The company's newly detailed AI draws on a mountain of data to help it tell the difference.

Photo illustration: Rafael Henrique/SOPA Images/LightRocket via Getty Images

Bochra Gharbaoui pointed to four Facebook profiles on a screen in the social network's London headquarters and asked a seemingly simple, but fundamentally important, question: Which is fake?

I didn't know.

Something looked a little off about each of them. One used an overtly raunchy profile picture. A second appeared to be the account of a cat. A third described a toy dog. And another was oddly detailed but didn't have an image. But all those kinds of profiles exist among my Facebook acquaintances. Who was I to call them fake?


"You basically just told us what all of our research suggests," said Gharbaoui, a data science manager on Facebook's community integrity team. "Which is that when people say fake, they often mean suspicious."

Now, I don't like to brag, but … I am a human. And humans tend to be quite a lot better than computers at complex reasoning and at dealing with ambiguities like the ones raised by these possibly fake accounts. So if I struggled to answer Gharbaoui's question, it's easy to see why the algorithms Facebook has pointed at the problem might struggle, too. (By the way, all of the accounts were fictional, but each could have set off alarms for different reasons. See? I told you it was difficult.)

Against that backdrop of uncertainty, the company has spent the past few years developing a new machine learning system, called Deep Entity Classification, to detect convincing fake accounts that make it onto the platform. The algorithm studies 20,000 features of the users, groups, pages and other entities connected to each account it considers in order to establish whether that account is genuine. That's something no human could ever do, and the system has already been used to take down hundreds of millions of accounts that violate the company's terms of service.

The question: Is it enough?

There are fake accounts. Then there are fake accounts.

The threat of fake accounts on social media platforms is real. They "can be used for so much bad or evil," said Max Heinemeyer, director of threat hunting at Darktrace, which specializes in machine learning approaches to cybersecurity. That could mean generating spam, running scams, inciting violence, organizing terrorism or engaging in other deeply problematic behavior.

But for a company like Facebook, every decision it makes to disable an account is high-stakes. Getting it wrong "essentially means that we are denying people access to this platform," Gharbaoui said, so it has invested in several layers of analysis to root out problem accounts.

In the first instance, it blocks millions of attempts to create accounts every day, Gharbaoui said, using a machine learning model designed to process a high volume of information and make rapid decisions. Facebook won't describe the precise features that can lead to blocked signups, arguing that doing so would hand bad actors too much information, but factors like the IP address of a request and the volume of signups coming from that address are the kinds of signals likely to be considered.
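Facebook keeps the real model and its features under wraps, but the flavor of a fast, signup-time check can be sketched as a sliding-window rate limit per IP address. This is a minimal illustration, not Facebook's method; the window size and threshold below are invented:

```python
import time
from collections import defaultdict, deque

# A minimal sketch of a signup-time rate check, assuming a sliding window of
# recent attempts per IP address. The window and threshold are invented for
# illustration; Facebook doesn't disclose the features its model actually uses.
WINDOW_SECONDS = 3600
MAX_ATTEMPTS_PER_IP = 5

_recent_attempts = defaultdict(deque)  # IP address -> timestamps of attempts

def allow_signup(ip: str) -> bool:
    """Return False when an IP has made too many recent signup attempts."""
    now = time.time()
    window = _recent_attempts[ip]
    # Evict attempts that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    return len(window) <= MAX_ATTEMPTS_PER_IP
```

A production system would combine many such signals in a trained model rather than relying on a single hard threshold, but the shape of the decision is the same: cheap features, evaluated in milliseconds, at enormous volume.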

Meanwhile, many more accounts — what Gharbaoui described as the "vast majority" of the 1.7 billion disabled in the third quarter of last year — are caught by fast, high-volume machine learning algorithms before they gain broad access to the platform. Again, Facebook won't describe what leads to such disablement, but it could be, say, a pattern of initial behavior that has been repeated by many thousands of other accounts in the past — a telltale sign that an account is controlled by a bot.
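Facebook won't say which behavioral signals it watches, but one long-standing technique for catching bot-farmed accounts is to fingerprint each new account's earliest actions and flag any fingerprint shared by implausibly many accounts. A minimal sketch, with an invented action encoding and threshold:

```python
import hashlib
from collections import Counter

# Hypothetical illustration: hash each new account's initial action sequence
# and flag fingerprints repeated across suspiciously many accounts. The
# encoding and threshold are invented; Facebook doesn't disclose its signals.
BOT_CLUSTER_THRESHOLD = 1000

def fingerprint(first_actions: list[str]) -> str:
    """Collapse an account's initial action sequence into a stable hash."""
    return hashlib.sha256("|".join(first_actions).encode()).hexdigest()

def suspicious_fingerprints(accounts: dict[str, list[str]]) -> set[str]:
    """Return fingerprints shared by at least BOT_CLUSTER_THRESHOLD accounts."""
    counts = Counter(fingerprint(actions) for actions in accounts.values())
    return {fp for fp, n in counts.items() if n >= BOT_CLUSTER_THRESHOLD}
```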

Even with those protections in place, though, many accounts still sneak through. And perhaps I shouldn't feel too bad about my own ineptitude, because the quality of fake accounts on social media platforms has improved dramatically in recent years.

Today's more advanced approaches to fake account creation use machine learning to generate increasingly realistic profiles, said Shuman Ghosemajumder, global head of artificial intelligence systems at F5 Networks. They can produce convincing-sounding names and biographies, and even entirely synthetic images that are almost impossible to distinguish from genuine photographs of real humans.

That escalation is born of necessity on the part of bad actors, according to Heinemeyer: If a bad actor's business model depends on creating fake accounts to, say, scam people, they're damn sure going to learn how to beat the systems blocking those accounts by creating ever more realistic spoofs. That makes the problem harder to deal with.

"Where Facebook has a great advantage is knowing what organic activity looks like in its social graph," Ghosemajumder said.

20,000 features under the hood

The social network has tapped that knowledge to build Deep Entity Classification, the machine learning model that it claims has helped it make a big advance in how many of those convincing fake accounts it can root out.

Instead of studying direct properties of an account, like its name or how many friends it has — attributes the user directly controls — DEC studies what Facebook calls "deep features." These are properties of the users, groups, pages and other entities that the account is linked to, which are much harder, if not impossible, for the user to directly control. And it looks not just at those entities, but also at the ones one more hop out along the social graph — stopping there to limit the computational overhead of the model. Even so, that yields a bewildering number of candidate features; currently, 20,000 of them feed DEC's decision-making.
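Facebook hasn't published the feature list, but the mechanics of two-hop feature extraction are easy to sketch on a toy graph. Each combination of hop, property and aggregation becomes one feature, which is how the count multiplies into the thousands. The entities, property names and aggregations below are invented for illustration:

```python
from statistics import mean

# A toy social graph: each entity has numeric properties and links to other
# entities. Everything here is invented; DEC's real features are not public.
graph = {
    "user_a":  {"props": {"age_days": 2400, "num_posts": 310},  "links": ["group_x", "user_b"]},
    "user_b":  {"props": {"age_days": 12,   "num_posts": 1},    "links": ["group_x"]},
    "group_x": {"props": {"age_days": 900,  "num_posts": 5000}, "links": ["user_a", "user_b"]},
}

def deep_features(entity: str) -> dict[str, float]:
    """Aggregate properties of neighbors and neighbors-of-neighbors.

    DEC reportedly stops at two hops to cap compute; every (hop, property,
    aggregation) pairing yields one feature, so the count grows quickly.
    """
    features = {}
    hop1 = graph[entity]["links"]
    hop2 = [n2 for n1 in hop1 for n2 in graph[n1]["links"] if n2 != entity]
    for hop_name, neighbors in (("hop1", hop1), ("hop2", hop2)):
        for prop in ("age_days", "num_posts"):
            values = [graph[n]["props"][prop] for n in neighbors]
            features[f"{hop_name}_{prop}_mean"] = mean(values)
            features[f"{hop_name}_{prop}_max"] = max(values)
        features[f"{hop_name}_count"] = len(neighbors)
    return features

print(deep_features("user_a"))
```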

The system is then trained on a data set of accounts that have previously been labeled fake or real. Unlike most machine learning systems, though, it draws on two pools of labels: high-precision labels assigned by human security experts, along with a much larger volume of lower-precision labels produced automatically by the company's other algorithms. Facebook says the model is first roughly trained on millions of the lower-precision examples, then fine-tuned on hundreds of thousands of the higher-precision ones.
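In spirit, that two-pool regimen resembles the familiar pretrain-then-fine-tune pattern. Here's a rough sketch using a simple linear classifier and synthetic data; the model, feature count and label noise are stand-ins, since Facebook hasn't disclosed DEC's actual architecture:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
n_features = 20  # stand-in for DEC's ~20,000 deep features

# Stage 1: millions of lower-precision labels from other automated systems
# (scaled down here); the labels are noisy on purpose.
X_auto = rng.normal(size=(100_000, n_features))
y_auto = (X_auto[:, 0] + rng.normal(scale=1.5, size=100_000) > 0).astype(int)

# Stage 2: a much smaller pool of high-precision labels from human experts.
X_human = rng.normal(size=(2_000, n_features))
y_human = (X_human[:, 0] > 0).astype(int)

model = SGDClassifier(loss="log_loss")             # logistic regression via SGD
model.partial_fit(X_auto, y_auto, classes=[0, 1])  # rough first training pass
model.partial_fit(X_human, y_human)                # fine-tune on clean labels
```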

The model is also frequently retrained on data gathered from across the social network, allowing a new version to "ship many times a day," said Daniel Bernhardt, an engineering manager on the company's community integrity team.

How's it working out? DEC has been responsible for identifying and deactivating more than 900 million fake accounts over the past two years, according to Facebook.

A cat-and-mouse game

The levels of nuance and complexity provided by complex machine learning models like this "significantly raise the bar" that bad actors must pass to continue using fake accounts, Ghosemajumder said. But the bar is not raised to impossible heights — and bad actors can always learn to jump higher.

"It will always be a cat-and-mouse game," said Zubair Shafiq, an assistant professor in the department of computer science at the University of Iowa. That's because "you have an active attacker, who changes its behavior."

It's not that bad actors are necessarily able to reverse-engineer a system like the one Facebook has developed. Instead, it's a process of trial and error. "They will tweak their approach on intuition," Shafiq said. "And then after five or 10 tries, something might work."

Facebook's Bernhardt likens this to the way a biological virus mutates. "All the virus needs is like one or two mutations in order to make it past an existing defense system," he said. So it's Facebook's job to put enough defenses in place that even those extra mutations don't allow bad actors to fool its systems.

Security experts disagree on whether it's possible to keep those defenses ahead of bad actors' capabilities in the long run.

"You find yourself in a war of algorithms," Heinemeyer said. As machine learning becomes more ubiquitous, he argued, it will be harder for companies to rely on their in-house expertise to keep ahead.


But Ghosemajumder likens the situation of fake accounts on social media platforms to that of spam email. It will never be a solved problem, but it could be solved enough to live with. "Most people don't feel the effect of spam now in the same way they did 15 years ago," he said. "I think we have the technology to be able to get ahead of this problem," he added. "It's really just about making the right investments and performing the right R&D."

For its part, Facebook knows this isn't a problem that it's going to solve and move on from. "We will see quite fast and quite, you know, robust reactions" from bad actors every time its fake account defenses are upgraded, said Facebook's Bernhardt. "That's what the team basically comes into work on every day."
