
Stopping fake accounts is a cat-and-mouse game. Can Facebook win with AI?

The social network is in a constant power struggle with bad actors. Increasingly complex machine learning gives it an edge — for now.


Even humans can have a hard time telling fake Facebook accounts from real ones. The company's newly detailed AI draws on a mountain of data to help it tell the difference.

Photo illustration: Rafael Henrique/SOPA Images/LightRocket via Getty Images

Bochra Gharbaoui pointed to four Facebook profiles on a screen in the social network's London headquarters and asked a seemingly simple, but fundamentally important, question: Which is fake?

I didn't know.

Something looked a little off about each of them. One used an overtly raunchy profile picture. A second appeared to be the account of a cat. A third described a toy dog. And another was oddly detailed but didn't have an image. But all those kinds of profiles exist among my Facebook acquaintances. Who was I to call them fake?


"You basically just told us what all of our research suggests," said Gharbaoui, a data science manager on Facebook's community integrity team. "Which is that when people say fake, they often mean suspicious."

Now, I don't like to brag, but … I am a human. And humans tend to be quite a lot better than computers at complex reasoning and dealing with ambiguities like the ones raised by these possibly fake accounts. So if I struggled to answer Gharbaoui's question, it's easy to see why the algorithms Facebook has pointed at the problem might struggle, too. (By the way, all of the accounts were fictional, but each could have set off alarms for a different reason. See? I told you it was difficult.)

Against that backdrop of uncertainty, the company has spent the past few years developing a new machine learning system, called Deep Entity Classification, to detect convincing fake accounts that make it onto the platform. The algorithm studies 20,000 features of the users, groups and pages linked to each account it considers in order to establish whether that account is genuine. That's something no human could ever do, and the system has already been used to take down hundreds of millions of accounts that violate the company's terms of service.

The question: Is it enough?

There's fake accounts. Then there's fake accounts.

The threat of fake accounts on social media platforms is real. They "can be used for so much bad or evil," said Max Heinemeyer, director of threat hunting at Darktrace, which specializes in machine learning approaches for cybersecurity. That could be generating spam, running scams, inciting violence, organizing terrorism, or other behavior that is generally considered to be deeply problematic.

But for a company like Facebook, every decision it makes to disable an account is high-stakes. Getting it wrong "essentially means that we are denying people access to this platform," Gharbaoui said, so it has invested in several layers of analysis to root out problem accounts.

In the first instance, it blocks millions of account-creation attempts every day, said Gharbaoui, using a machine learning model designed to process a high volume of information and make rapid decisions. Facebook won't describe the precise features that could lead to a blocked signup, arguing that to do so would give bad actors too much information, but factors like the IP address of the request and the volume of signups from that location are likely among the signals considered.
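Facebook doesn't reveal its actual signals, weights or thresholds, so the following is only a minimal sketch of the shape such a fast signup filter might take. Every feature and number here is a hypothetical stand-in; a real system would learn its scoring from labeled data rather than use hand-set rules.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical illustration only: Facebook does not disclose its real
# features or thresholds. This sketches a fast filter that scores each
# registration attempt from cheap, readily available signals.

@dataclass
class SignupAttempt:
    ip: str
    email_domain: str
    user_agent: str

# Rolling count of recent signups per IP. In production this would be a
# time-windowed counter in a low-latency store, not an in-memory dict.
recent_signups_per_ip = defaultdict(int)

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}  # illustrative

def signup_risk_score(attempt: SignupAttempt) -> float:
    """Return a risk score in [0, 1]; higher means more likely abusive."""
    score = 0.0
    # Many signups from one IP in a short window is a classic bot signal.
    if recent_signups_per_ip[attempt.ip] > 20:
        score += 0.5
    # Disposable email domains are another cheap-to-check signal.
    if attempt.email_domain in DISPOSABLE_DOMAINS:
        score += 0.3
    # Missing or obviously scripted user agents add a little more risk.
    if not attempt.user_agent or "headless" in attempt.user_agent.lower():
        score += 0.2
    return min(score, 1.0)

def handle_signup(attempt: SignupAttempt, block_threshold: float = 0.7) -> bool:
    """Record the attempt and decide whether to allow it (True = allow)."""
    recent_signups_per_ip[attempt.ip] += 1
    return signup_risk_score(attempt) < block_threshold
```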

Meanwhile, many more accounts — in fact, what Gharbaoui describes as the "vast majority" of the 1.7 billion that were disabled in the third quarter of last year — are caught by fast, high-volume machine learning algorithms before they gain broad access to the platform. Again, Facebook won't say exactly what triggers such disablement, but it could be, say, a pattern of initial behavior repeated by many thousands of other accounts in the past — a telltale sign that an account is controlled by a bot.
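The real triggers are secret, but one plausible mechanism for catching that kind of repetition is to fingerprint an account's earliest actions and flag any fingerprint shared by suspiciously many accounts. This is a hedged sketch of that idea, not Facebook's method; the threshold and action encoding are invented for illustration.

```python
import hashlib
from collections import defaultdict

# Illustrative only: fingerprint each new account's first actions and
# flag fingerprints that thousands of other accounts have also produced,
# a telltale sign of scripted, bot-controlled behavior.

fingerprint_to_accounts = defaultdict(set)  # fingerprint -> account IDs

def behavior_fingerprint(actions: list[str]) -> str:
    """Hash an ordered sequence of early account actions."""
    return hashlib.sha256("|".join(actions).encode()).hexdigest()

def record_and_check(account_id: str, actions: list[str],
                     bot_threshold: int = 1000) -> bool:
    """Return True if this account's early behavior matches a pattern
    already repeated by suspiciously many other accounts."""
    fp = behavior_fingerprint(actions)
    fingerprint_to_accounts[fp].add(account_id)
    return len(fingerprint_to_accounts[fp]) >= bot_threshold
```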

Even with those protections in place, though, many accounts still sneak through. And perhaps I shouldn't feel too bad about my own ineptitude, because the quality of fake accounts on social media platforms has improved dramatically in recent years.

Today's more advanced approaches to fake account creation use machine learning to generate increasingly realistic profiles, said Shuman Ghosemajumder, global head of artificial intelligence systems at F5 Networks. They can produce convincing-sounding names and biographies, and even entirely synthetic images that are almost impossible to distinguish from genuine photographs of real humans.

This arms race is born of necessity on the part of bad actors, according to Heinemeyer: If a bad actor's business model depends on creating fake accounts to, say, scam people, they're damn sure going to learn how to beat the systems that block those accounts by creating increasingly realistic spoofs. That makes the situation harder to deal with.

"Where Facebook has a great advantage is knowing what organic activity looks like in its social graph," Ghosemajumder said.

20,000 features under the hood

The social network has tapped that knowledge to build Deep Entity Classification, the machine learning model that it claims has helped it make a big advance in how many of those convincing fake accounts it can root out.

Instead of studying the direct properties of an account, like its name or how many friends it has — attributes the user directly controls — DEC studies what Facebook calls "deep features." These are properties of the users, groups, pages and other entities that the account is linked to, which are much harder, if not impossible, for the user to control. And it looks not just at those entities, but also at the ones a further hop along the social graph — stopping there to limit the computational overhead of the model. Even so, that leaves a bewildering number of features available to study; currently, DEC uses 20,000 in its decision-making.
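Facebook hasn't published DEC's feature set or architecture, but the core idea of aggregating properties over an account's one- and two-hop neighborhood, rather than reading the account's own attributes, can be sketched as follows. Every entity type and property below is a hypothetical stand-in.

```python
from statistics import mean

# Hypothetical sketch of "deep feature" extraction: instead of reading an
# account's own attributes, aggregate statistics over the entities it is
# linked to, and over *their* links in turn. DEC's real feature set and
# graph traversal are not public.

def direct_features(entity: dict) -> dict:
    """Cheap per-entity properties (illustrative)."""
    return {
        "age_days": entity.get("age_days", 0),
        "num_links": len(entity.get("links", [])),
    }

def deep_features(account: dict, graph: dict) -> dict:
    """Aggregate features over the account's 1-hop and 2-hop neighborhoods.
    Traversal stops at two hops, mirroring DEC's stated depth limit."""
    one_hop = [graph[e] for e in account.get("links", []) if e in graph]
    two_hop = [graph[e]
               for n in one_hop
               for e in n.get("links", []) if e in graph]

    features = {}
    for name, hood in (("hop1", one_hop), ("hop2", two_hop)):
        stats = [direct_features(e) for e in hood]
        ages = [s["age_days"] for s in stats] or [0]
        degrees = [s["num_links"] for s in stats] or [0]
        # Aggregates like these are hard for an attacker to control,
        # because they depend on *other* entities' histories.
        features[f"{name}_mean_age"] = mean(ages)
        features[f"{name}_mean_degree"] = mean(degrees)
        features[f"{name}_size"] = len(hood)
    return features
```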

The system is then trained on a data set of accounts that have previously been labeled fake or real. Unlike most machine learning systems, though, it draws on two pools of labels: high-precision labels assigned by human security experts, along with a much larger volume of lower-precision labels generated automatically by the company's other algorithms. Facebook says the model is first trained roughly on millions of the lower-precision examples, then fine-tuned on hundreds of thousands of the higher-precision ones.
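Facebook describes the recipe only at this level of detail, but the pretrain-then-fine-tune pattern it outlines is a standard one. Here is a minimal sketch under those assumptions, with a simple scikit-learn classifier standing in for DEC's actual model and placeholder arrays for its data.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Sketch of the two-stage training Facebook describes: rough pretraining
# on a large pool of lower-precision, machine-generated labels, then
# fine-tuning on a smaller pool of high-precision human labels.
# (DEC itself is a deep model; a linear classifier stands in here.)

CLASSES = np.array([0, 1])  # 0 = real, 1 = fake

def pretrain(model: SGDClassifier, auto_X: np.ndarray, auto_y: np.ndarray,
             batch_size: int = 10_000) -> None:
    """Stage 1: millions of examples with noisy, automated labels."""
    for start in range(0, len(auto_X), batch_size):
        model.partial_fit(auto_X[start:start + batch_size],
                          auto_y[start:start + batch_size],
                          classes=CLASSES)

def fine_tune(model: SGDClassifier, human_X: np.ndarray,
              human_y: np.ndarray, epochs: int = 3) -> None:
    """Stage 2: hundreds of thousands of expert-labeled examples, passed
    over several times so the precise labels dominate the final model."""
    for _ in range(epochs):
        model.partial_fit(human_X, human_y, classes=CLASSES)

model = SGDClassifier(loss="log_loss")  # logistic loss for probabilities
```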

The model is also retrained frequently on data gathered from across the social network, allowing a new version to "ship many times a day," said Daniel Bernhardt, an engineering manager on the company's community integrity team.

How's it working out? So far, DEC has been responsible for the identification and deactivation of more than 900 million fake accounts over the past two years, according to Facebook.

A cat-and-mouse game

The levels of nuance and complexity provided by complex machine learning models like this "significantly raise the bar" that bad actors must pass to continue using fake accounts, Ghosemajumder said. But the bar is not raised to impossible heights — and bad actors can always learn to jump higher.

"It will always be a cat-and-mouse game," said Zubair Shafiq, an assistant professor in the department of computer science at the University of Iowa. That's because "you have an active attacker, who changes its behavior."

It's not that bad actors are necessarily able to reverse-engineer a system like the one Facebook has developed. Instead, it's a process of trial-and-error. "They will tweak their approach on intuition," Shafiq said. "And then after five or 10 tries, something might work."

Facebook's Bernhardt likens this to the way a biological virus mutates. "All the virus needs is like one or two mutations in order to make it past an existing defense system," he said. So it's Facebook's job to put enough defenses in place that even those extra mutations don't allow bad actors to fool its systems.

Security experts disagree on whether they think it's possible to keep those defenses improving beyond the capabilities of bad actors in the future.

"You find yourself in a war of algorithms," Heinemeyer said. As machine learning becomes more ubiquitous, he argued, it will be harder for companies to rely on their in-house expertise to keep ahead.


But Ghosemajumder likens the situation of fake accounts on social media platforms to that of spam email. It will never be a solved problem, but it could be solved enough to live with. "Most people don't feel the effect of spam now in the same way they did 15 years ago," he said. "I think we have the technology to be able to get ahead of this problem," he added. "It's really just about making the right investments and performing the right R&D."

For its part, Facebook knows this isn't a problem that it's going to solve and move on from. "We will see quite fast and quite, you know, robust reactions" from bad actors every time its fake account defenses are upgraded, said Facebook's Bernhardt. "That's what the team basically comes into work on every day."
