On a September afternoon in 2015, Adam Sharp and a clutch of his fellow Twitter employees led Donald Trump on a walking tour of the company's New York City headquarters. The unlikely frontrunner for the Republican presidential nomination had come to the office for a live Q&A with Twitter users.
As they walked the halls of Twitter's newly renovated office, Sharp, who was head of news, government and elections at the time, expected canned questions from Trump about STEM education or high-skilled immigration, which politicians always asked when they visited. But Trump wasn't particularly interested in talking tech.
Instead, Sharp said, Trump's first question as he surveyed the space was: "Who did your concrete?"
The banter that day was mostly chummy, Sharp said, as Trump talked about the success he'd had on Twitter both as a candidate and as a reality show host. Before he left, he posted a picture of himself beaming and giving a thumbs up with the Twitter crew. "I had a great time at @TwitterNYC," the tweet read. Some of the Twitter staffers even wore Make America Great Again hats for the photo op. Five years later, Sharp is relieved he did not. "I've never been happier to follow the Dukakis rule," he said. "Don't put on the hat."
To say the president's relationship with tech companies has soured since then would be almost laughably understated. Today, he routinely accuses Twitter, Facebook and Google of censorship, and just last week, in a video also published on Twitter, he told his followers that Big Tech "has to be stopped" because "they're taking away your rights."
But Trump's outlook on tech hasn't changed in a vacuum. It's changed because of all of the decisions these companies have made — whether it's labeling election misinformation or suppressing the spread of false news — to make up for their missteps in 2016. Ironically, it was the election of a candidate who visited Twitter five years ago with no tech talking points to speak of that finally forced the country to question tech platforms' power and their dramatic influence on elections.
Facebook, Google and Twitter have spent the last four years retracing their steps leading up to the 2016 election, setting up guardrails and hoping not to suffer another public stumble in 2020. So, now that Election Day is finally here, how have they done?
The good news is the big three tech companies are finally prepared to fend off the kind of foreign threats they faced in 2016. The bad news? It's 2020, and the questions this time around — such as what to do if the sitting president declares victory prematurely — are a lot harder to answer.
"There's a pretty reasonably broad consensus that Russian actors should not pretend to be Americans. There's a bit less consensus that Russian actors shouldn't be able to leak stolen emails. There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States," said Alex Stamos, who served as Facebook's chief security officer during the 2016 election. "In the places where there's no democratic consensus, where there's no law to guide them, that's where they have the least preparation."
A failure of imagination
Before the 2016 election, Facebook, Twitter and Google were all eager to play a part in electing the next president. They co-hosted debates with major news outlets, set up elaborate media centers at both conventions and, in Facebook's case, even offered to embed staff inside the presidential campaigns (Trump's campaign famously took them up on it).
All the while, Stamos said, the biggest threats Facebook was watching for were direct cybersecurity attacks from nation-states and any content that intentionally misled people about how and where to vote. Stamos' team did detect and shut down a number of fake accounts linked to the Russian military in the lead-up to Election Day, which it reported to the FBI. But otherwise, the idea that foreign propaganda was swirling around the platform wasn't on anyone's radar.
The same was true on Twitter, where the company proactively removed dozens of tweets that encouraged people to vote by text for Hillary Clinton before Election Day. Those messages violated the company's policy against voter suppression, but at that point, Twitter also had no idea how deeply foreign influence operations had penetrated the public conversation on social media. "There was, perhaps, a failure of imagination at that point," Sharp said.
At the same time, tech companies shared a rather cozy relationship with the presidential candidates. While the Trump campaign embedded Facebook staffers and toured Twitter, Hillary Clinton recruited heavily from Google, contracted a tech firm backed by Eric Schmidt and even considered Bill Gates and Tim Cook as potential running mates. There were brief dustups that hinted at what was to come, like when Twitter CEO Jack Dorsey personally forbade Trump's campaign from buying a custom Crooked Hillary emoji, sparking outrage on the right.
But for the most part, the companies seemed blissfully unaware of what was happening under their noses. Shortly after the election, Mark Zuckerberg went so far as to say the idea that fake news had influenced the election was "a pretty crazy idea."
Troll hunting
After 2016, everything changed. By September of the following year, Stamos' team had detected what he later described as a "web of fake personae that we could confidently tie to Russia," and the intelligence community concluded Russia had used social media to facilitate a widespread influence campaign, kickstarting years of investigations into foreign interference and compelling tech companies to change the way they did business.
For one thing, they started talking to each other and to law enforcement. Before that time, said Nu Wexler, who worked on Twitter's policy communications team during the 2016 election, "There was a lot of distrust between the companies and some of the government agencies that are working in this space." That distrust, Wexler said, largely stemmed from Edward Snowden's disclosures about government surveillance a few years earlier.
But by the beginning of 2018, Stamos began chairing regular meetings with the Department of Homeland Security, the FBI and the three big tech companies, which by then had all built teams focused on foreign influence operations. Tips from the feds slowly began to trickle in. "In the beginning of 2018, there started to be a flow there, and now that flow looks like a torrent," Stamos said, pointing to the 50 foreign influence operations Facebook has taken down in the last year alone.
That cooperation continues to this day and now includes even more companies. Stamos views that as a significant improvement from 2016, because as foreign threats have evolved, the paper trails required to catch them have also grown more complex.
"The FBI can say: 'We found these people who are being paid,' and then the job of Facebook and Twitter is to take those people, and then find all of their related assets, using information they have that the government does not have," Stamos said.
Testing, testing
The use of political ads to juice foreign influence campaigns also meant that, post-2016, each of the platforms had to rethink its approach to political ads and microtargeting. "Ads really never had been a big focus of the public policy and government affairs teams," said Ross LaJeunesse, who served as Google's global head of international relations until he left in 2020. "So it felt to me like, 'Wow, we really should have been paying more attention to that.'"
Facebook forced advertisers to verify they were based in the U.S. and, along with Google, launched a library of political ads that provides hyper-detailed information on what each ad says, how much it cost and which demographics and regions it targeted. Google followed by restricting election ad targeting categories to age, gender and postal code, dramatically limiting the kind of microtargeting campaigns could deploy compared to 2016. Twitter, meanwhile, opted to ban political ads altogether.
"The product wasn't robust enough to combat misuse, and frankly, the revenue wasn't enough to justify the investments it would take to make it robust enough," Sharp, who left Twitter in 2016 and is now CEO of the National Academy of Television Arts and Sciences, said of the decision. "If it's going to be too expensive to put seatbelts and airbags in the car, you're just going to discontinue that model."
The platforms also spruced up their content moderation policies, broadening their definitions of categories like hate speech and voter suppression. In 2018, in what appeared to be a direct response to the flood of stolen emails WikiLeaks disseminated in 2016, Twitter created its "hacked materials" policy, which prohibited sharing such content on the platform.
As the years went on, the global platforms stress-tested their expanding policies during hundreds of elections around the world. "U.S. elections are still the Super Bowl, but the other elections are a good opportunity to test-drive programs beforehand," Wexler, who left Twitter for Facebook in 2017, said.
Ahead of Germany's election in 2017, Facebook took down tens of thousands of fake accounts, and for perhaps the first time, touted that as a good thing. That same year, LaJeunesse said his team at Google became focused on protecting a series of elections in Latin America. "When we realized the extent of what happened in the U.S. in 2016, I told the [Latin America] team to throw out their normal policy plans because we had only one job for 2017 and 2018: not allow a repeat of 2016 to happen," LaJeunesse said.
But few countries have as complex an election system as that in the U.S., which has a different election commission for every state. That made the 2018 U.S. midterms the best test of the platforms' growth yet. "It gave us the chance to see how that coordination amongst 50 different jurisdictions, the Department of Homeland Security, the FBI's foreign interference task force and the social media companies were coordinated," said Colin Crowell, who served as Twitter's vice president of global public policy between 2011 and 2019.
The call is coming from inside the White House
Tech giants emerged from the midterms with a relatively clean record in terms of foreign interference. But by then, experts were already warning that the most concerning disinformation was coming from domestic rather than foreign actors.
Fast-forward to 2020, and it's clear those 2018 predictions are coming true. Research suggests some of the biggest spreaders of election misinformation are domestic, including President Trump and many of his most fervent followers. The platforms have been far more reluctant to take aggressive action against real people in the U.S., citing free expression and a belief that the platforms shouldn't act as the "arbiters of truth." But all those years of preparing to fend off foreign interference didn't prepare these companies for a president who flirts with the idea of rejecting the results of the election or uses their platforms to undermine the validity of mail-in voting during a pandemic.
Now, tech platforms are rushing to confront the possibility that perhaps the biggest threat against the integrity of this election is the president himself. "In general today, the work that the companies are doing with respect to foreign state-sponsored actions is vastly improved from 2016," Crowell said. "However, the biggest difference between 2016 and 2020 is that in 2016, foreign state actors were attempting to sow dissent and convey disinformation in the U.S. election contest, and today the source of increasing concern is domestic sources of disinformation."
In recent months, Facebook and Twitter have announced policies for labeling posts in which a candidate declares victory prematurely (Google's policy is less explicit). They've all set up sections of their sites filled with authoritative election information, which they've been aggressively directing users toward. Twitter even rolled out a set of "pre-bunks," warning users that they're likely to encounter misinformation.
"There's an acceptance there is going to be misinformation that goes viral and goes wide before the company can put on the brakes," Sharp said. "These are things that at some level go completely against the decade and a half of engineering optimization at all the social media companies that, from the start, have been focused on engagement."
Just a few years ago, the sorts of questions tech companies are now asking "weren't on the table," said William Fitzgerald, a former Google policy communications manager. "The window has changed so much."
That's forced these companies to write the rules for these scenarios in real time. Every few weeks for the last few months, Facebook, Google and Twitter have announced new policies in direct response to whatever's happening in the news. After QAnon candidates rose to prominence during the Republican primaries, Facebook and YouTube announced a series of new restrictions on the conspiracy theory group. After President Trump called on the Proud Boys to "stand by" during a recent debate, which the white supremacist group took as a call to arms, Facebook dashed off a new, niche policy prohibiting calls for poll watching that use militarized language or are intended to intimidate voters.
Bracing for election night
All of that has made for messy enforcement. Twitter, for one, appears to be reassessing its hacked materials policy by the day, after it initially blocked a viral New York Post story, then reversed course following an outcry from conservatives and the president. Facebook's attempt to slow the spread of the same story before it was fact-checked was also widely condemned. Last week, its plan to prohibit new political ads the week of the election led to chaos, as campaigns complained that the company also shut off loads of ads that were already approved and running. And the company has repeatedly found ways around its own rules to accommodate posts by prominent conservatives, like Donald Trump Jr. YouTube, meanwhile, has mostly avoided such public backlash, in part because, as Stamos noted, it has some of the least robust policies of the three social media giants.
"The overwhelming disinformation and election integrity threats that we face are domestic now," said Jesse Lehrich, Clinton's former foreign policy spokesman and co-founder of tech advocacy nonprofit Accountable Tech. "I don't think the platforms have been successful in coping with that, and I don't think they're prepared for what's to come."
Stamos, who has helped analyze how the companies' policies on a range of threats have evolved over the past year, sees things differently. He believes the platforms' willingness to change is a good thing. "The idea that you could write rules in February that were going to be applicable in October is just silly," he said. "You want to see them engaged and changing the rules to fit the kind of situations that they're actually seeing."
He acknowledges, though, that these policies mean that tech companies are beginning to "butt up against the limits of the consensus of what [tech platforms'] position is in our society." Which is to say: How much control do Americans really want tech companies to have over what the president is allowed to say?
"We're now talking about them directly changing the discourse within a democracy of speech we legally protect in other circumstances," Stamos said. "We have to be really, really careful with what happens next."
Even if tech companies did exercise full control — even if they actually tried to censor the president as he so often claims they already do — Crowell says there are limits to what they alone could accomplish. "If the president of the United States declares victory 'prematurely,' standing at the portico of the White House and tweets out that message, Twitter can certainly address the tweet on Twitter," Crowell said. "But every cable news channel is going to carry the president's remarks live, and some of those cable news channels will air them over and over and over again."
"The reality is that if any national candidate says anything about the election process or results live on national television, you're going to have tens of millions of Americans seeing it and then sharing that news on Facebook, Instagram, Snap, TikTok and Twitter almost instantaneously," Crowell said.
The tech companies can try to act fast. But they can't save us from ourselves.
Trump has told advisers that he is planning to prematurely declare victory on Nov. 3, capping off a yearslong effort to sow disinformation about widespread voter fraud. Election night will provide the ultimate test of what the companies have learned over the past four years — and how far they're willing to go.