Politics

Facebook’s new research project will show how it influenced the 2020 election — after it's over

To study its impact on the 2020 election, Facebook will ask some people to stop using Facebook

Mark Zuckerberg standing in front of his own face on a screen

Facebook CEO Mark Zuckerberg once argued it was a "pretty crazy idea" to think Facebook could influence an election

Photo: Chip Somodevilla/Getty Images

Facebook is teaming up with academics across the country to determine once and for all whether Facebook is in fact influencing the 2020 election. The only catch: They won't know the answer until well after it's over.

The new research project, which Facebook announced Monday, will study how the 2020 election is playing out on the world's largest social network, and how the platform affects things like political polarization, voter participation, trust in democracy and the spread of misinformation. A 17-person research team, which includes leading academics in the fields of media and politics, will work with some two dozen Facebook staffers to design the experiments.

Once users opt in to be part of the study, the research team will deidentify their data, split them into groups and begin tinkering with their News Feeds, switching up their ad experiences and, in some cases, even asking them to stop using Facebook temporarily, all while surveying participants to see how their experiences and viewpoints evolve and stack up against control groups. The findings, which Facebook will have no veto power over, will be published for free to the public beginning next summer.

In some ways, the undertaking demonstrates how far Facebook has come since 2016, when it eagerly courted political clients with the promise of influence, then, following President Trump's victory, just as eagerly denied that it had any influence at all. Mark Zuckerberg himself famously called it a "pretty crazy idea." But the fact that this sort of research is only now getting underway also demonstrates just how little we actually know four years later. So much of what regulators and the public have come to believe about how social media affects elections boils down to anecdotes and assumption, not data. This research could change that.

"We're in these really uncharted waters where you have two or three enormous companies that control the vast majority of the data we need to advance science," said Joshua Tucker, a professor of politics at New York University, who is leading the outside research team along with Talia Stroud, a professor of communications at the University of Texas at Austin. "This has been just an amazing opportunity to work with a platform, and so we have to try that."

Facebook has a spotty record when it comes to social science research, particularly as it pertains to elections. In 2010, the company performed a randomized controlled trial on 61 million users to see whether serving different iterations of its "I Voted" sticker to different groups of users would impact voter turnout. (It did, sparking outrage.) And of course there was the Cambridge Analytica scandal, where a Cambridge University professor used a Facebook app to scrape data from tens of millions of Facebook users, which he then sold to the now defunct and disgraced data analytics firm that powered the Trump campaign.

This time around, Facebook is being significantly more cautious. Facebook employees will be the only ones with access to the raw data, for one thing, and users will have to explicitly opt in to participate. That said, Stroud, Tucker and the 15 other researchers they've selected will have extensive input on the research questions and the experiment design. They will preregister their study plans so anyone interested can see exactly what the researchers are setting out to find before they even begin and can compare those goals to whatever they ultimately deliver. Facebook won't have any say over what the researchers do or don't publish, and the researchers won't be paid by Facebook, either.

"The academic team really thought about what the key questions might be, and then in collaboration with Facebook, we've been thinking through what the designs would be, with us bringing the expertise as far as methodology and the statistical techniques involved, and the Facebook team really bring a lot of expertise about what the platform is able to do," Stroud said.

The researchers plan to recruit about 200,000 to 400,000 people across the U.S. to participate. Those who opt in might see fewer political ads or fewer news stories about certain topics in their News Feeds. They might be asked to download apps that monitor their online behavior, though Facebook said researchers won't have access to private messages. They might even be asked to stop using Facebook altogether. Some who will act as a control group won't see any changes at all.

Both the researchers and the Facebook executives are confident the experiment group is small enough that these tweaks won't have a meaningful effect on the election's outcome. Of course, that might not stop political operatives from claiming that it will; some, like President Trump's digital director, Gary Coby, have already speculated without evidence that Facebook will intentionally register more Democrats than Republicans as part of its voter registration push.

But the potential insights may be well worth the reputational risks. Researchers and regulators have been pushing Facebook and other tech companies to be more transparent for years. In 2018, Facebook helped launch an organization called Social Science One, through which it has been working to open up data to third-party researchers. In February, as part of that work, Facebook released 38 million URLs that were shared on Facebook between 2017 and 2019. That tranche of data was the result of a painstaking process that took years to come to fruition, as Facebook and the researchers sparred over the best way to share data without sacrificing user privacy.

With this design, the researchers are taking a less intensive route, in order to ensure the research could actually get underway before November. "It's not the ideal system, but it is going to be unprecedented in the degree of research we will be able to accomplish," said Nate Persily, a professor at Stanford University School of Law and one of the co-chairs of Social Science One. "If we didn't do something related to the 2020 election, then that would have been a problem. This is the thing we can do."

Persily as well as Tucker and Stroud, who co-chair committees within Social Science One, hope that if this project is successful, it can serve as a model for other social media giants like YouTube and Twitter to open up their black boxes, too.

Of course, whatever the researchers do find about Facebook's impact on the election, it won't come in time for Facebook to act on it before November. But Facebook's head of research and transparency, Chaya Nayak, said such a huge undertaking wouldn't have been possible two years ago, because the company needed time to figure out how to make it work. "There's going to be other elections, and it's really important for us, as a company, to understand the impact of our platform on both the election coming up as well as elections moving forward," she said. "It's never too late."

Policy

We’ll be here again: How tech companies fail to prevent terrorism

Social media platforms are playing defense to stop mass shootings. Without cooperation and legislation, it’s not working.

The Buffalo attack showed that tech’s best defenses against online hate aren’t sophisticated enough to fight the algorithms designed by those same companies to promote content.

Photo: Kent Nishimura / Los Angeles Times via Getty Images

Tech platforms' patchwork approach to content moderation has made them a hotbed for hate speech that can turn deadly, as it did this weekend in Buffalo. The alleged shooter, who killed 10 people in a historically Black neighborhood, used Discord to plan his rampage for months and livestreamed it on Twitch.

The move mirrors what happened in Christchurch, New Zealand, when a white supremacist murdered 51 people in a mosque in 2019. He viewed the killings as a meme. To disseminate that meme, he turned to the same place more than 1 billion other users do: Facebook. This pattern is destined to repeat itself as long as tech companies continue to play defense instead of offense against online hate and fail to work together.

Sarah Roach

Sarah Roach is a news writer at Protocol (@sarahroach_) and contributes to Source Code. She is a recent graduate of George Washington University, where she studied journalism and mass communication and criminal justice. She previously worked for two years as editor in chief of her school's independent newspaper, The GW Hatchet.

Enterprise

SAP’s leadership vacuum on display with Hasso Plattner’s last stand

Conflict of interest questions, blowback to the Ukraine response and a sinking stock price hang in the backdrop of Plattner’s last election to the SAP supervisory board.

Plattner will run for a final two-year transition term atop SAP’s supervisory board.

Photo: Soeren Stache/picture alliance via Getty Images

Just one man has been with SAP over its entire 50-year history: co-founder Hasso Plattner. Now, the 78-year-old software visionary is making his last stand.

On Wednesday, Plattner will run for a final two-year transition term atop SAP’s supervisory board, an entity mandated by law in Germany that basically oversees the executive team. Leaders at SAP, for example, report to the supervisory board, not the CEO.

Joe Williams

Joe Williams is a writer-at-large at Protocol. He previously covered enterprise software for Protocol, Bloomberg and Business Insider. Joe can be reached at JoeWilliams@Protocol.com. To share information confidentially, he can also be contacted on a non-work device via Signal (+1-309-265-6120) or JPW53189@protonmail.com.

Enterprise

Why Google Cloud is providing security for AWS and Azure users too

“To just focus on Google Cloud, we wouldn't be serving our customers,” Google Cloud security chief Phil Venables told Protocol.

Google Cloud announced the newest addition to its menu of security offerings.

Photo: G/Unsplash

In August, Google Cloud pledged to invest $10 billion over five years in cybersecurity — a target that looks like it will be easily achieved, thanks to the $5.4 billion deal to acquire Mandiant and reported $500 million acquisition of Siemplify in the first few months of 2022 alone.

But the moves raise questions about Google Cloud’s main goal for its security operation. Does Google want to offer the most secure cloud platform in order to inspire more businesses to run on it — or build a major enterprise cybersecurity products and services business, in whatever environment it’s chosen?

Kyle Alspach

Kyle Alspach (@KyleAlspach) is a senior reporter at Protocol, focused on cybersecurity. He has covered the tech industry since 2010 for outlets including VentureBeat, CRN and the Boston Globe. He lives in Portland, Oregon, and can be reached at kalspach@protocol.com.

Workplace

The tools that make you pay for not getting stuff done

Some tools let you put your money on the line for productivity. Should you bite?

Commitment contracts are popular in a niche corner of the internet, and the tools have built up loyal followings of people who find the extra motivation effective.

Photoillustration: Anna Shvets/Pexels; Protocol

Danny Reeves, CEO and co-founder of Beeminder, is used to defending his product.

“When people first hear about it, they’re kind of appalled,” Reeves said. “Making money off of people’s failure is how they view it.”

Lizzy Lawrence

Lizzy Lawrence (@LizzyLaw_) is a reporter at Protocol, covering tools and productivity in the workplace. She's a recent graduate of the University of Michigan, where she studied sociology and international studies. She served as editor in chief of The Michigan Daily, her school's independent newspaper. She's based in D.C., and can be reached at llawrence@protocol.com.
