Power

Stopping fake accounts is a cat-and-mouse game. Can Facebook win with AI?

The social network is in a constant power struggle with bad actors. Increasingly complex machine learning gives it an edge — for now.


Even humans can have a hard time telling fake Facebook accounts from real ones. The company's newly detailed AI draws on a mountain of data to help it tell the difference.

Photo illustration: Rafael Henrique/SOPA Images/LightRocket via Getty Images

Bochra Gharbaoui pointed to four Facebook profiles on a screen in the social network's London headquarters and asked a seemingly simple, but fundamentally important, question: Which is fake?

I didn't know.

Something looked a little off about each of them. One used an overtly raunchy profile picture. A second appeared to be the account of a cat. A third described a toy dog. And another was oddly detailed but didn't have an image. But all those kinds of profiles exist among my Facebook acquaintances. Who was I to call them fake?


"You basically just told us what all of our research suggests," said Gharbaoui, a data science manager on Facebook's community integrity team. "Which is that when people say fake, they often mean suspicious."

Now, I don't like to brag, but … I am a human. And humans tend to be quite a lot better than computers at complex reasoning and at dealing with ambiguities like the ones raised by these possibly fake accounts. So if I struggled to answer Gharbaoui's question, it's easy to understand why the algorithms Facebook has pointed at the problem might struggle, too. (By the way, all of the accounts were fictional, but each could have set off alarms for different reasons. See? I told you it was difficult.)

Against that backdrop of uncertainty, over the past few years the company has developed a new machine learning system, called Deep Entity Classification, for detecting convincing fake accounts that make it onto the platform. The algorithm studies 20,000 features of the users, groups, pages and other entities connected to each account it considers to establish whether that account is genuine. That's something no human could ever do, and the system has already been used to take down hundreds of millions of accounts that violate the company's terms of service.

The question: Is it enough?

There are fake accounts. And then there are fake accounts.

The threat of fake accounts on social media platforms is real. They "can be used for so much bad or evil," said Max Heinemeyer, director of threat hunting at Darktrace, which specializes in machine learning approaches for cybersecurity. That could be generating spam, running scams, inciting violence, organizing terrorism, or other behavior that is generally considered to be deeply problematic.

But for a company like Facebook, every decision it makes to disable an account is high-stakes. Getting it wrong "essentially means that we are denying people access to this platform," Gharbaoui said, so it has invested in several layers of analysis to root out problem accounts.

In the first instance, it blocks millions of attempts to create accounts every day, Gharbaoui said, using a machine learning model designed to process a high volume of information and make rapid decisions. Facebook won't describe the precise features that can lead to a blocked signup, arguing that doing so would give bad actors too much information, but factors like the IP address of the request and the volume of signups from that location are the kinds of signals likely to be considered, among others.
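Facebook doesn't disclose its real signup filters, but the kind of fast, high-volume signal described above can be sketched with a simple sliding-window rate limit per IP address. Everything here — the window length, the threshold, the function names — is invented for illustration, not Facebook's actual system.

```python
import time
from collections import defaultdict, deque

# Hypothetical sketch: flag signup attempts when too many arrive from
# one IP address within a time window. Thresholds are made up.
WINDOW_SECONDS = 3600
MAX_SIGNUPS_PER_IP = 5

_signups = defaultdict(deque)  # ip -> timestamps of recent attempts


def should_block_signup(ip, now=None):
    """Return True if this signup attempt looks like bulk account creation."""
    now = time.time() if now is None else now
    attempts = _signups[ip]
    # Drop attempts that have aged out of the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    attempts.append(now)
    return len(attempts) > MAX_SIGNUPS_PER_IP
```

A real system would combine many such signals and score them with a model rather than a fixed threshold, but the shape — cheap per-request features, instant decisions — matches what the article describes.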

Meanwhile, many more accounts — in fact, what Gharbaoui describes as the "vast majority" of the 1.7 billion that were disabled in the third quarter of last year — are caught by similarly fast, high-volume machine learning algorithms before they gain broad access to the platform. Again, Facebook won't describe what leads to such disablement, but it could be, say, a pattern of initial behavior that has been repeated by many thousands of other accounts in the past — a telltale sign that an account is controlled by a bot.

Even with those protections in place, though, many accounts still sneak through. And perhaps I shouldn't feel too bad about my own ineptitude, because the quality of fake accounts on social media platforms has improved dramatically in recent years.

Today's more advanced approaches to fake account creation use machine learning to generate increasingly realistic profiles, said Shuman Ghosemajumder, global head of artificial intelligence systems at F5 Networks. They are able to create convincing-sounding names and biographies, and even entirely synthetic images that are almost impossible to discern from genuine photographs of real humans.

This situation is born out of necessity on the part of bad actors, according to Heinemeyer: If a bad actor's business model is based on creating fake accounts to, for example, scam people, they're damn sure going to try to learn how to beat the systems that block their fake accounts by creating increasingly realistic spoofs. It makes the situation harder to deal with.

"Where Facebook has a great advantage is knowing what organic activity looks like in its social graph," Ghosemajumder said.

20,000 features under the hood

The social network has tapped that knowledge to build Deep Entity Classification, the machine learning model that it claims has helped it make a big advance in how many of those convincing fake accounts it can root out.

Instead of studying direct properties of an account, like its name or how many friends it has — attributes the user directly controls — DEC studies what Facebook calls "deep features." These are properties of the users, groups, pages and other entities that the account is linked to, which are much harder, if not impossible, for the user to control directly. And it looks not just at those entities, but also at the ones one more hop out along the social graph — stopping there to limit the computational overhead of the model. Even so, that leaves a bewildering number of features available to study: Currently, DEC uses 20,000 in its decision-making.
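The two-hop idea can be made concrete with a toy sketch: rather than reading an account's own attributes, aggregate statistics (here, account age) over its one-hop and two-hop neighbors in a graph. The graph layout, the single "age" property and the function names are all invented for illustration; DEC's real features and aggregations are not public.

```python
# Hypothetical sketch of "deep features": aggregate a property over the
# entities an account is linked to, stopping two hops out.

def neighbors(graph, node):
    """Adjacency lookup; graph is a dict of node -> list of neighbors."""
    return graph.get(node, [])


def deep_features(graph, ages, account):
    """Summarize a neighbor property (age) at one and two hops out."""
    one_hop = set(neighbors(graph, account))
    two_hop = set()
    for n in one_hop:
        two_hop.update(neighbors(graph, n))
    # Two-hop means *strictly* two hops: exclude direct friends and self.
    two_hop -= one_hop | {account}

    def stats(nodes):
        vals = [ages[n] for n in nodes]
        if not vals:
            return {"count": 0, "mean_age": 0.0}
        return {"count": len(vals), "mean_age": sum(vals) / len(vals)}

    return {"hop1": stats(one_hop), "hop2": stats(two_hop)}
```

The key point the article makes survives even in this toy version: a user can pick their own profile details, but they can't easily control the aggregate properties of everyone two hops away.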

The system is then trained on a data set of accounts previously labeled as fake or real. Unlike most machine learning systems, though, it draws on two pools of labels: high-precision labels assigned by human security experts, along with a much larger volume of lower-precision labels generated automatically by the company's other algorithms. Facebook says the model is first roughly trained on millions of examples of the lower-precision data, then fine-tuned on hundreds of thousands of examples of the higher-precision data.
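That two-stage regime — rough training on plentiful noisy labels, then fine-tuning on scarce expert labels — can be sketched with a tiny logistic regression. The single feature, the noise rate and all names here are invented stand-ins; DEC itself is a far larger model over 20,000 features.

```python
import math
import random

random.seed(0)


def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))


def sgd(w, data, lr, epochs):
    """Train one-feature logistic regression by SGD; returns weights."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w[0] + w[1] * x)
            w[0] -= lr * (p - y)       # gradient of log-loss w.r.t. bias
            w[1] -= lr * (p - y) * x   # ... and w.r.t. the feature weight
    return w


def label(x):  # ground truth in this toy world: "fake" if feature > 0.5
    return 1 if x > 0.5 else 0


# Stage-1 pool: large, machine-labeled, ~20% of labels wrong.
noisy = [(x, label(x) if random.random() > 0.2 else 1 - label(x))
         for x in (random.random() for _ in range(5000))]
# Stage-2 pool: small, expert-labeled, all correct.
precise = [(x, label(x)) for x in (random.random() for _ in range(200))]

w = sgd([0.0, 0.0], noisy, lr=0.05, epochs=2)   # stage 1: rough training
w = sgd(w, precise, lr=0.05, epochs=10)         # stage 2: fine-tuning


def predict(x):
    return sigmoid(w[0] + w[1] * x) > 0.5
```

The design choice mirrors the article: the noisy pool is cheap and abundant, so it gets the model into roughly the right region; the expert pool is expensive and small, so it is spent on the final adjustment.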

The model is also frequently retrained from data gathered from across the social network, allowing a new version to "ship many times a day," said Daniel Bernhardt, an engineering manager on the company's community integrity team.

How's it working out? So far, DEC has been responsible for the identification and deactivation of over 900 million fake accounts over the past two years, according to Facebook.

A cat-and-mouse game

The levels of nuance and complexity provided by complex machine learning models like this "significantly raise the bar" that bad actors must pass to continue using fake accounts, Ghosemajumder said. But the bar is not raised to impossible heights — and bad actors can always learn to jump higher.

"It will always be a cat-and-mouse game," said Zubair Shafiq, an assistant professor in the department of computer science at the University of Iowa. That's because "you have an active attacker, who changes its behavior."

It's not that bad actors are necessarily able to reverse-engineer a system like the one Facebook has developed. Instead, it's a process of trial and error. "They will tweak their approach on intuition," Shafiq said. "And then after five or 10 tries, something might work."

Facebook's Bernhardt likens this to the way a biological virus mutates. "All the virus needs is like one or two mutations in order to make it past an existing defense system," he said. So it's Facebook's job to put enough defenses in place that even those extra mutations don't allow bad actors to fool its systems.

Security experts disagree on whether they think it's possible to keep those defenses improving beyond the capabilities of bad actors in the future.

"You find yourself in a war of algorithms," Heinemeyer said. As machine learning becomes more ubiquitous, he argued, it will be harder for companies to rely on their in-house expertise to keep ahead.


But Ghosemajumder likens the situation of fake accounts on social media platforms to that of spam email. It will never be a solved problem, but it could be solved enough to live with. "Most people don't feel the effect of spam now in the same way they did 15 years ago," he said. "I think we have the technology to be able to get ahead of this problem," he added. "It's really just about making the right investments and performing the right R&D."

For its part, Facebook knows this isn't a problem that it's going to solve and move on from. "We will see quite fast and quite, you know, robust reactions" from bad actors every time its fake account defenses are upgraded, said Facebook's Bernhardt. "That's what the team basically comes into work on every day."
