Election 2020

Facebook and Twitter are finally calling out election misinformation. Is it working?

It's unclear if labels are effectively limiting the spread of falsehoods.

Facebook is applying the same label to all posts from both Biden and Trump, which some might find confusing.

Image: Facebook

Over the past 24 hours, both Twitter and Facebook have slapped a relentless stream of labels on misleading posts about voting and election results — most prominently from the president himself.

Misinformation researchers have praised some of their efforts. But a much larger question hangs over each of these decisions: Do these labels even work?

Some preliminary studies have found that warning labels on fake news stories can have the unintended effect of making readers more willing to share unlabeled stories — even if those turn out to be untrue as well. Still, other surveys have suggested that when websites are flagged in search results as containing unreliable information, most people become less likely to share news from those sites. The impact that Facebook, Twitter and YouTube's labels have had over these last few days, however, remains a mystery.

"There's been no research on the effectiveness of this," said Aimee Rinehart, U.S. deputy director of the misinformation project First Draft News.

As election results came in Tuesday and Wednesday, researchers applauded Twitter's strategy — particularly the most heavy-handed labels from the social media platform, which require users to click through an interstitial and prevent them from sharing or engaging with the post.

"Twitter's been the fastest to actually append effective labels and to actually hide objectionable content," said Emerson Brooking, a resident fellow at the Atlantic Council's Digital Forensic Research Lab.

The company took forceful action against a tweet from President Trump falsely claiming Democrats were trying to "STEAL the election," hiding it behind a label that said some of the content in the post was "disputed and might be misleading about an election or other civic process."

Facebook's approach, meanwhile, has received more middling reviews. It also labeled President Trump's posts about election stealing, but those labels appeared beneath the posts and did not prevent users from sharing or engaging with the underlying message. "Final results may be different from initial vote counts," Facebook's label reads. Facebook has also begun applying the same label to all posts from both Biden and Trump, despite the fact that only the Trump campaign has prematurely declared victory, leading some to wonder whether the labels might confuse Facebook users about who's telling the truth.

Researchers have expressed similar concerns about YouTube, which has affixed small, subtle "information panels" with factual information directly under videos and search results related to the election, an approach that experts said could confuse people trying to differentiate between misinformation and reputable news sources. "Too often, YouTube has tried to get away with doing the minimum, and this is another instance of that," said Paul Barrett, deputy director of the New York University Stern Center for Business and Human Rights.

YouTube spokesperson Ivy Choi said in a statement that YouTube "remains vigilant with regards to election-related content in this post-election period."

"In this post-election period, our teams are continuing to work around the clock to quickly remove content misleading people about voting or encouraging interference in the democratic process, raise up authoritative news publishers in search results and 'watch next' panels, and reduce the spread of harmful election-related misinformation," Choi said. "On Election Day, we removed several livestreams for violating our spam policies, and our election results information panel is prominently surfaced above search results and under videos about the election."

Nina Jankowicz, a disinformation fellow at the Wilson Center, said Twitter's labels are likely the most effective because they provide "friction," requiring users to click through warnings with relevant context about why the posts are inaccurate. That process can slow people down and force them to reconsider what they're interacting with.

But Twitter has mainly reserved such muscular actions for the president's tweets. When Trump aides, including White House press secretary Kayleigh McEnany and Eric Trump, prematurely declared Trump had won Pennsylvania on Wednesday afternoon, Twitter affixed a less aggressive label underneath their posts. "Official sources may not have called the race when this was tweeted," the Twitter label reads.

"As votes are still being counted across the country, our teams continue to take enforcement action on tweets that prematurely declare victory or contain misleading information about the election broadly," said a Twitter spokesperson. "Our teams continue to monitor tweets that attempt to spread misleading information about voting, accounts engaged in spammy behavior, and tweets that make premature or inaccurate claims about election results."

One key question is whether any of this is slowing the spread of misinformation. Researchers are doubtful. "The information we have so far regarding the effectiveness of labeling generally is it doesn't really reduce the spread of content," Brooking said.

Some of Trump's most devoted followers have started copying and pasting the tweets that Twitter hides, creating an even bigger mess for the company to deal with, according to the Election Integrity Partnership. As of last night, "some of them were cleaned up [and] some of them weren't," said Kate Starbird, a researcher with the EIP.

In some cases, groups like the Real Facebook Oversight Board, a collective of academics and activists focused on accountability at Facebook, reported that misinformation continued to go viral on multiple platforms, even after it got labeled. "#StopTheSteal went from Twitter and transferred over to Facebook with millions of views," said Shireen Mitchell, a member of the group and founder of Stop Online Violence Against Women, in a statement. "It was labeled as inaccurate but it was still spread. It's the perfect example of digital voter suppression."

Beyond the impact labels have on the spread of those posts, there are even trickier questions to answer. Do the labels actually convince people not to believe the underlying message? Do they unintentionally create a sort of Streisand effect, driving people to the original posts purely because they have labels? And how much can a single misinformation label really accomplish now when, for four years, the president has been using social media to seed the idea that voter fraud is rampant in America, entirely without objection from Facebook or Twitter? By Election Day, was it already too late?

Getting to those answers would require more sophisticated polling of social media users, which, so far, doesn't exist.

On Facebook, at least, that could change. Earlier this year, Facebook announced it would be working with a 17-person independent team of researchers to study the platform's impact on the 2020 election. Among the areas of study was the role Facebook plays in the spread of political misinformation. But it's unclear whether the researchers will specifically examine whether people process the labels in ways that limit misinformation's spread or, at the very least, help deter people from believing the misinformation themselves. Neither Facebook nor the lead researchers on the project responded to Protocol's request for comment.

Of course, it's noteworthy that tech companies are making an effort on this front at all. It's more than they did in 2016, when misinformation went entirely unchecked on every social platform. And there are limits to what these companies alone can do. On Wednesday, even as Facebook and Twitter tried to correct the record on the president's claims about election stealing, his campaign was sending the same message to voters by email — where no one could say he's wrong.

Update: This story was updated at 4:52 p.m. PT to include statements from Twitter and YouTube.

Workplace

You need a healthy ‘debate culture’

From their first day, employees at Appian are encouraged to disagree with anyone at the company — including the CEO. Here’s how it works.

Appian co-founder and CEO Matt Calkins wants his employees to disagree with him.

Photo: Appian

Matt Calkins often hears that he’s polite, even deferential. But as CEO of Appian, he tells employees to challenge each other — especially their bosses — early and often.

“I love arguments. I love ideas clashing,” Calkins said. “I regard it as a personal compliment when someone respectfully dissents.”

Allison Levitsky
Allison Levitsky is a reporter at Protocol covering workplace issues in tech. She previously covered big tech companies and the tech workforce for the Silicon Valley Business Journal. Allison grew up in the Bay Area and graduated from UC Berkeley.

Some of the most astounding tech-enabled advances of the next decade, from cutting-edge medical research to urban traffic control and factory floor optimization, will be enabled by a device often smaller than a thumbnail: the memory chip.

While vast amounts of data are created, stored and processed every moment — by some estimates, 2.5 quintillion bytes daily — the insights in that code are unlocked by the memory chips that hold it and transfer it. “Memory will propel the next 10 years into the most transformative years in human history,” said Sanjay Mehrotra, president and CEO of Micron Technology.

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.

Gopuff says it will make it through the fast-delivery slump

Maria Renz on her new role, the state of fast delivery and Gopuff’s goals for the coming year.

Gopuff has raised $4 billion at a $15 billion valuation.

Photo: Gopuff

The fast-delivery boom sent startups soaring during the pandemic, only for them to come crashing down in recent months. But Maria Renz said Gopuff is prepared to get through the slump.

“Gopuff is really well-positioned to weather through those challenges that we expect in the next year or so,” Renz told Protocol. “We're first party, we control elements of our mix, like price, very directly. And again, we have nine years of experience.”

Sarah Roach

Sarah (Sarahroach_) writes for Source Code at Protocol. She's a recent graduate of The George Washington University, where she studied journalism and criminal justice. She served for two years as editor-in-chief of GW's independent newspaper, The GW Hatchet. Sarah is based in New York, and can be reached at sroach@protocol.com

Enterprise

AT&T CTO: Challenges of the cloud transition are interpersonal

Jeremy Legg sat down with Protocol to discuss the race to 5G, the challenges of the cloud transition and nabbing tech talent.

AT&T CTO Jeremy Legg spoke with Protocol about the company's cloud transition and more.

Photo: AT&T

Jeremy Legg is two months into his role as CTO of AT&T, and he has been tasked with a big mandate: transforming the company into a software-driven business, with 5G and fiber as core growth areas.

This isn’t Legg’s first CTO gig, just his biggest one. He’s an entertainment biz guy who’s now at the center of the much bigger, albeit less glamorous, telecom business. Prior to joining AT&T in 2020, Legg was the CTO of WarnerMedia, where he was the technical architect behind HBO Max.

Michelle Ma

Michelle Ma (@himichellema) is a reporter at Protocol, where she writes about management, leadership and workplace issues in tech. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.

Workplace

How Canva uses Canva

Design tips and tricks from the ultimate Canva pros: Canva employees themselves.

Employees use Canva to build the internal weekly “Canvazine,” product vision decks, team swag and more.

Illustration: Christopher T. Fong/Protocol

Ever wondered how the companies behind your favorite tech use their own products? We’ve told you how Spotify uses Spotify, How Slack uses Slack and how Meta uses its workplace tools. We talked to Canva employees about the creative ways they use the design tool.

The thing about Canva is that it's ridiculously easy to use. Anyone, regardless of skill level, can open up the app and produce a visually appealing presentation, infographic or video. The 10-year-old company has become synonymous with DIY design, serving as the preferred Instagram infographic app for the social justice “girlies.” Still, the app has plenty of overlooked features that Canvanauts (Canva’s word for its employees) use every day.

Lizzy Lawrence

Lizzy Lawrence (@LizzyLaw_) is a reporter at Protocol, covering tools and productivity in the workplace. She's a recent graduate of the University of Michigan, where she studied sociology and international studies. She served as editor in chief of The Michigan Daily, her school's independent newspaper. She's based in D.C., and can be reached at llawrence@protocol.com.
