Election 2020

Facebook and Twitter are finally calling out election misinformation. Is it working?

It's unclear if labels are effectively limiting the spread of falsehoods.

Facebook is applying the same label to all posts from both Biden and Trump, which some might find confusing.

Image: Facebook

Over the past 24 hours, both Twitter and Facebook have slapped a relentless stream of labels on misleading posts about voting and election results — most prominently from the president himself.

Misinformation researchers have praised some of their efforts. But a much larger question hangs over each of these decisions: Do these labels even work?

Some preliminary studies have found that warning labels on fake news stories can have the unintended effect of making readers more willing to share unlabeled stories — even if those turn out to be untrue as well. Still, other surveys have suggested that when websites are rated in search results as containing unreliable information, the majority of people are less likely to share news from those sites. The impact Facebook, Twitter and YouTube's labels have had over these last few days, however, remains a mystery.

"There's been no research on the effectiveness of this," said Aimee Rinehart, U.S. deputy director of the misinformation project First Draft News.

As election results came in Tuesday and Wednesday, researchers applauded Twitter's strategy — particularly the most heavy-handed labels from the social media platform, which require users to click through an interstitial and prevent them from sharing or engaging with the post.

"Twitter's been the fastest to actually append effective labels and to actually hide objectionable content," said Emerson Brooking, a resident fellow at the Atlantic Council's Digital Forensic Research Lab.

The company took forceful action against a tweet from President Trump falsely claiming Democrats were trying to "STEAL the election," hiding it behind a label that said some of the content in the post was "disputed and might be misleading about an election or other civic process."

Facebook's approach has, meanwhile, received more middling reviews. It also labeled President Trump's posts about election stealing, but those labels appeared beneath the post and did not prevent users from sharing or engaging with the underlying message. "Final results may be different from initial vote counts," Facebook's label reads. Facebook has also begun applying the same label to all posts from both Biden and Trump, despite the fact that only the Trump campaign has prematurely declared victory, leading some to wonder whether the labels might confuse Facebook users about who's telling the truth.

Researchers have expressed similar concerns about YouTube, which has affixed small, subtle "information panels" with factual information directly under videos and search results related to the election, an approach that experts said could confuse people trying to differentiate between misinformation and reputable news sources. "Too often, YouTube has tried to get away with doing the minimum, and this is another instance of that," said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights.

YouTube spokesperson Ivy Choi said in a statement that YouTube "remains vigilant with regards to election-related content in this post-election period."

"In this post-election period, our teams are continuing to work around the clock to quickly remove content misleading people about voting or encouraging interference in the democratic process, raise up authoritative news publishers in search results and 'watch next' panels, and reduce the spread of harmful election-related misinformation," Choi said. "On Election Day, we removed several livestreams for violating our spam policies, and our election results information panel is prominently surfaced above search results and under videos about the election."

Nina Jankowicz, a disinformation fellow at the Wilson Center, said Twitter's labels are likely the most effective because they provide "friction," requiring users to click through warnings with relevant context about why the posts are inaccurate. That process can slow people down and force them to reconsider what they're interacting with.

But Twitter has mainly reserved such muscular actions for the president's tweets. When Trump aides, including White House press secretary Kayleigh McEnany and Eric Trump, prematurely declared Trump had won Pennsylvania on Wednesday afternoon, Twitter affixed a less aggressive label underneath their posts. "Official sources may not have called the race when this was tweeted," the Twitter label reads.

"As votes are still being counted across the country, our teams continue to take enforcement action on tweets that prematurely declare victory or contain misleading information about the election broadly," said a Twitter spokesperson. "Our teams continue to monitor tweets that attempt to spread misleading information about voting, accounts engaged in spammy behavior, and tweets that make premature or inaccurate claims about election results."

One key question is whether any of this is slowing the spread of misinformation. Researchers are doubtful. "The information we have so far regarding the effectiveness of labeling generally is it doesn't really reduce the spread of content," Brooking said.

Some of Trump's most devoted followers have started to copy and paste the tweets that Twitter hides, according to the Election Integrity Partnership, which created an even bigger mess for Twitter to deal with. As of last night, "some of them were cleaned up [and] some of them weren't," said Kate Starbird, a researcher with the EIP.

In some cases, groups like the Real Facebook Oversight Board, a collective of academics and activists focused on accountability at Facebook, reported that misinformation continued to go viral on multiple platforms, even after it got labeled. "#StopTheSteal went from Twitter and transferred over to Facebook with millions of views," said Shireen Mitchell, a member of the group and founder of Stop Online Violence Against Women, in a statement. "It was labeled as inaccurate but it was still spread. It's the perfect example of digital voter suppression."

Beyond the impact information labels have on the spread of those posts, there are even trickier questions to answer, like do the labels actually convince people not to believe the underlying message? Do the labels unintentionally create a sort of Streisand effect, driving people to the original posts purely because they have labels? How much can a single misinformation label really accomplish now when, for four years, the president has been using social media to seed the idea that voter fraud is rampant in America, entirely without objection from Facebook or Twitter? By Election Day, was it already too late?

Getting to those answers would require more sophisticated polling of social media users, which, so far, doesn't exist.

On Facebook, at least, that could change. Earlier this year, Facebook announced it would be working with a 17-person independent team of researchers to study the platform's impact on the 2020 election. Among the areas of study was the role Facebook plays in the spread of political misinformation. But it's unclear if the researchers will specifically look at whether people are actually processing the labels in ways that limit the misinformation's spread or, at the very least, help deter people from believing in the misinformation themselves. Neither Facebook nor the lead researchers on the project responded to Protocol's request for comment.

Of course, it's noteworthy that tech companies are making an effort on this front at all. It's more than they could say they did in 2016 when misinformation went entirely unchecked by every social platform. And there are limits to what these companies alone can do. On Wednesday, even as Facebook and Twitter tried to correct the record on the president's claims about election stealing, his campaign was sending the same message to voters by email — where no one could say he's wrong.

Update: This story was updated at 4:52 p.m. PT to include statements from Twitter and YouTube.

Power

How the creators of Splitgate built gaming’s newest unicorn

1047 Games is now valued at $1.5 billion after three rounds of funding since May.

1047 Games' Splitgate amassed 13 million downloads when its beta launched in July.

Image: 1047 Games

The creators of Splitgate had a problem. Their new free-to-play video game, a take on the legendary arena shooter Halo with a teleportation twist borrowed from Valve's Portal, was gaining steam during its open beta period in July. But it was happening too quickly.

Splitgate was growing so fast and unexpectedly that the entire game was starting to break, as the servers supporting the game began to, figuratively speaking, melt down. The game went from fewer than 1,000 people playing it at any given moment to suddenly having tens of thousands of concurrent players. Then it grew to hundreds of thousands of players, all trying to log in and play at once across PlayStation, Xbox and PC.

Nick Statt
Nick Statt is Protocol's video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.

While it's easy to get lost in the operational and technical side of a transaction, it's important to remember the third component of a payment. That is, the human behind the screen.

Over the last two years, many retailers have seen the benefit of investing in new, flexible payment options that reflect the changing lifestyles of younger spenders, who are increasingly holding onto their cash — despite reports to the contrary. This means it's more important than ever for merchants to take note of the latest payment innovations so they can tap into the savings of the COVID-19 generation.

Antoine Nougue, Checkout.com

Antoine Nougue is Head of Europe at Checkout.com. He works with ambitious enterprise businesses to help them scale and grow their operations through payment processing services. He is responsible for leading the European sales, customer success, engineering & implementation teams and is based out of London, U.K.

Protocol | Policy

Why Twitch’s 'hate raid' lawsuit isn’t just about Twitch

When is it OK for tech companies to unmask their anonymous users? And when should a violation of terms of service get someone sued?

The case Twitch is bringing against two hate raiders is hardly black and white.

Photo: Caspar Camille Rubin/Unsplash

It isn't hard to figure out who the bad guys are in Twitch's latest lawsuit against two of its users. On one side are two anonymous "hate raiders" who have been allegedly bombarding the gaming platform with abhorrent attacks on Black and LGBTQ+ users, using armies of bots to do it. On the other side is Twitch, a company that, for all the lumps it's taken for ignoring harassment on its platform, is finally standing up to protect its users against persistent violators whom it's been unable to stop any other way.

But the case Twitch is bringing against these hate raiders is hardly black and white. For starters, the plaintiff here isn't an aggrieved user suing another user for defamation on the platform. The plaintiff is the platform itself. Complicating matters more is the fact that, according to a spokesperson, at least part of Twitch's goal in the case is to "shed light on the identity of the individuals behind these attacks," raising complicated questions about when tech companies should be able to use the courts to unmask their own anonymous users and, just as critically, when they should be able to actually sue them for violating their speech policies.

Issie Lapowsky

Issie Lapowsky (@issielapowsky) is Protocol's chief correspondent, covering the intersection of technology, politics, and national affairs. She also oversees Protocol's fellowship program. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University's Center for Publishing on how tech giants have affected publishing.

Protocol | Workplace

Remote work is here to stay. Here are the cybersecurity risks.

Phishing and ransomware are on the rise. Is your remote workforce prepared?

Before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

Photo: Stefan Wermuth/Bloomberg via Getty Images

The delta variant continues to dash or delay return-to-work plans, but before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

So far in 2021, CrowdStrike has already observed over 1,400 "big game hunting" ransomware incidents and $180 million in ransom demands averaging over $5 million each. That's due in part to the "expanded attack surface that work-from-home creates," according to CTO Michael Sentonas.

Michelle Ma
Michelle Ma (@himichellema) is a reporter at Protocol, where she writes about management, leadership and workplace issues in tech. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.
Protocol | Fintech

When COVID rocked the insurance market, this startup saw opportunity

Ethos has outraised and outmarketed the competition in selling life insurance directly online — but there's still an $887 billion industry to transform.

Life insurance has been slow to change.

Image: courtneyk/Getty Images

Peter Colis cited a striking statistic that he said led him to launch a life insurance startup: One in twenty children will lose a parent before they turn 15.

"No one ever thinks that will happen to them, but that's the statistics," the co-CEO and co-founder of Ethos told Protocol. "If it's a breadwinning parent, the majority of those families will go bankrupt immediately, within three months. Life insurance elegantly solves this problem."

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Signal at (510)731-8429.
