Politics

Big Tech is finally ready for Election 2016. Too bad it’s 2020.

Senior staff who worked at Facebook, Google and Twitter in 2016 reflect on whether those companies have done enough to cope today.

Remember when Mark Zuckerberg said the idea that fake news had influenced the 2016 election was "a pretty crazy idea"?

Image: Kisspng and Protocol

On a September afternoon in 2015, Adam Sharp and a clutch of his fellow Twitter employees led Donald Trump on a walking tour of the company's New York City headquarters. The unlikely frontrunner for the Republican presidential nomination had come to the office for a live Q&A with Twitter users.

As they walked the halls of Twitter's newly renovated office, Sharp, who was head of news, government and elections at the time, expected canned questions from Trump about STEM education or high-skilled immigration, which politicians always asked when they visited. But Trump wasn't particularly interested in talking tech.

Instead, Sharp said, Trump's first question as he surveyed the space was: "Who did your concrete?"

The banter that day was mostly chummy, Sharp said, as Trump talked about the success he'd had on Twitter both as a candidate and as a reality show host. Before he left, he posted a picture of himself beaming and giving a thumbs up with the Twitter crew. "I had a great time at @TwitterNYC," the tweet read. Some of the Twitter staffers even wore Make America Great Again hats for the photo op. Five years later, Sharp is relieved he did not. "I've never been happier to follow the Dukakis rule," he said. "Don't put on the hat."

To say the president's relationship with tech companies has soured since then would be almost laughably understated. Today, he routinely accuses Twitter, Facebook and Google of censorship, and just last week, in a video also published on Twitter, he told his followers that Big Tech "has to be stopped" because "they're taking away your rights."

But Trump's outlook on tech hasn't changed in a vacuum. It's changed because of all of the decisions these companies have made — whether it's labeling election misinformation or suppressing the spread of false news — to make up for their missteps in 2016. Ironically, it was the election of a candidate who visited Twitter five years ago with no tech talking points to speak of that finally forced the country to question tech platforms' power and their dramatic influence on elections.

There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States.

Facebook, Google and Twitter have spent the last four years retracing their steps leading up to the 2016 election, setting up guardrails and hoping not to suffer another public stumble in 2020. So, now that Election Day is finally here, how have they done?

The good news is the big three tech companies are finally prepared to fend off the kind of foreign threats they faced in 2016. The bad news? It's 2020, and the questions this time around — such as what to do if the sitting president declares victory prematurely — are a lot harder to answer.

"There's a pretty reasonably broad consensus that Russian actors should not pretend to be Americans. There's a bit less consensus that Russian actors shouldn't be able to leak stolen emails. There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States," said Alex Stamos, who served as Facebook's chief security officer during the 2016 election. "In the places where there's no democratic consensus, where there's no law to guide them, that's where they have the least preparation."

A failure of imagination

Before the 2016 election, Facebook, Twitter and Google were all eager to play a part in electing the next president. They co-hosted debates with major news outlets, set up elaborate media centers at both conventions and, in Facebook's case, even offered to embed staff inside the presidential campaigns (Trump's campaign famously took them up on it).

All the while, Stamos said, the biggest threats Facebook was watching for were direct cybersecurity attacks from nation-states and any content that intentionally misled people about how and where to vote. Stamos' team did detect and shut down a number of fake accounts linked to the Russian military leading up to Election Day, which it reported to the FBI. But otherwise, the idea that foreign propaganda was swirling around the platform wasn't on anyone's radar.

The same was true on Twitter, where the company proactively removed dozens of tweets that encouraged people to vote by text for Hillary Clinton before Election Day. Those messages violated the company's policy against voter fraud, but at that point, Twitter also had no idea how deeply foreign influence operations had penetrated the public conversation on social media. "There was, perhaps, a failure of imagination at that point," Sharp said.

At the same time, tech companies shared a rather cozy relationship with the presidential candidates. While the Trump campaign embedded Facebook staffers and toured Twitter, Hillary Clinton recruited heavily from Google, contracted a tech firm backed by Eric Schmidt and even considered Bill Gates and Tim Cook as potential running mates. There were brief dustups that hinted at what was to come, like when Twitter CEO Jack Dorsey personally forbade Trump's campaign from buying a custom Crooked Hillary emoji, sparking outrage on the right.

But for the most part, the companies seemed blissfully unaware of what was happening under their noses. Shortly after the election, Mark Zuckerberg went so far as to say the idea that fake news had influenced the election was "a pretty crazy idea."

Troll hunting

After 2016, everything changed. By September of the following year, Stamos' team had detected what he later described as a "web of fake personae that we could confidently tie to Russia," and the intelligence community concluded Russia had used social media to facilitate a widespread influence campaign, kickstarting years of investigations into foreign interference and compelling tech companies to change the way they did business.

For one thing, they started talking to each other and to law enforcement. Before then, said Nu Wexler, who worked on Twitter's policy communications team during the 2016 election, "There was a lot of distrust between the companies and some of the government agencies that are working in this space." That distrust, Wexler said, largely stemmed from Edward Snowden's disclosures about government surveillance a few years earlier.

But by the beginning of 2018, Stamos began chairing regular meetings with the Department of Homeland Security, the FBI and the three big tech companies, which by then had all built teams focused on foreign influence operations. Tips from the feds to the tech companies began trickling in. "In the beginning of 2018, there started to be a flow there, and now that flow looks like a torrent," Stamos said, pointing to the 50 foreign influence operations Facebook has taken down in the last year alone.

That cooperation continues to this day and now includes even more companies. Stamos views that as a significant improvement from 2016, because as foreign threats have evolved, the paper trails required to catch them have also grown more complex.

"The FBI can say: 'We found these people who are being paid,' and then the job of Facebook and Twitter is to take those people, and then find all of their related assets, using information they have that the government does not have," Stamos said.

Testing, testing

The use of political ads to juice foreign influence campaigns also meant that, post-2016, each platform had to rethink its approach to political advertising and microtargeting. "Ads really never had been a big focus of the public policy and government affairs teams," said Ross LaJeunesse, who served as Google's global head of international relations until he left in 2020. "So it felt to me like, 'Wow, we really should have been paying more attention to that.'"

Facebook began requiring political advertisers to verify they were based in the U.S. and, along with Google, launched a library of political ads that provides hyper-detailed information on the content of ads, how much they cost and which demographics and regions they reached. Google also limited election ad targeting categories to age, gender and postal code, dramatically narrowing the kind of microtargeting campaigns could deploy compared to 2016. Twitter, meanwhile, opted to ban political ads altogether.
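
Those ad libraries are publicly queryable. As a rough illustration, here is a minimal sketch of pulling U.S. political ads from Facebook's Ad Library API; the endpoint and field names follow the public documentation circa 2020 and may have changed since, and the access token is a placeholder that requires Facebook's identity-verification process to obtain.

```python
# Minimal sketch of querying Facebook's Ad Library API for U.S. political
# ads. Endpoint and field names follow the public docs circa 2020; API
# versions and available fields change, so treat this as illustrative.
import requests

ACCESS_TOKEN = "YOUR_TOKEN_HERE"  # placeholder; requires verified access

resp = requests.get(
    "https://graph.facebook.com/v9.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "search_terms": "election",
        # Spend plus the demographics and regions an ad reached: the
        # "hyper-detailed information" the libraries expose.
        "fields": "page_name,spend,demographic_distribution,region_distribution",
        "limit": 25,
    },
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```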

"The product wasn't robust enough to combat misuse, and frankly, the revenue wasn't enough to justify the investments it would take to make it robust enough," Sharp, who left Twitter in 2016 and is now CEO of the National Academy of Television Arts and Sciences, said of the decision. "If it's going to be too expensive to put seatbelts and airbags in the car, you're just going to discontinue that model."

U.S. elections are still the Super Bowl.

The platforms also spruced up their content moderation policies, broadening their definitions of categories like hate speech and voter suppression. In 2018, in what appeared to be a direct response to the flood of stolen emails WikiLeaks disseminated in 2016, Twitter created its "hacked materials" policy, prohibiting the distribution of hacked materials on the platform.

As the years went on, the global platforms stress-tested their expanding policies during hundreds of elections around the world. "U.S. elections are still the Super Bowl, but the other elections are a good opportunity to test-drive programs beforehand," Wexler, who left Twitter for Facebook in 2017, said.

Ahead of Germany's election in 2017, Facebook took down tens of thousands of fake accounts, and for perhaps the first time, touted that as a good thing. That same year, LaJeunesse said his team at Google became focused on protecting a series of elections in Latin America. "When we realized the extent of what happened in the U.S. in 2016, I told the [Latin America] team to throw out their normal policy plans because we had only one job for 2017 and 2018: not allow a repeat of 2016 to happen," LaJeunesse said.

But few countries have an election system as complex as the U.S.'s, where each state has its own election commission. That made the 2018 U.S. midterms the best test yet of the platforms' growth. "It gave us the chance to see how 50 different jurisdictions, the Department of Homeland Security, the FBI's foreign interference task force and the social media companies coordinated," said Colin Crowell, who served as Twitter's vice president of global public policy between 2011 and 2019.

The call is coming from inside the White House

Tech giants emerged from the midterms with a relatively clean record in terms of foreign interference. But by then, experts were already warning that the most concerning disinformation was coming from domestic rather than foreign actors.

Fast-forward to 2020, and it's clear those 2018 predictions are coming true. Research suggests some of the biggest spreaders of election misinformation are domestic actors, including President Trump and many of his most fervent followers. The platforms have been far more reluctant to take aggressive action against real people in the U.S., citing free expression and a belief that they shouldn't act as "arbiters of truth." But all those years of preparing to fend off foreign interference didn't prepare these companies for a president who flirts with rejecting the results of the election or uses their platforms to undermine the validity of mail-in voting during a pandemic.

Now, tech platforms are rushing to confront the possibility that perhaps the biggest threat against the integrity of this election is the president himself. "In general today, the work that the companies are doing with respect to foreign state-sponsored actions is vastly improved from 2016," Crowell said. "However, the biggest difference between 2016 and 2020 is that in 2016, foreign state actors were attempting to sow dissent and convey disinformation in the U.S. election contest, and today the source of increasing concern is domestic sources of disinformation."

There's an acceptance there is going to be misinformation that goes viral and goes wide.

In recent months, Facebook and Twitter have announced policies for labeling posts in which a candidate declares victory prematurely (Google's policy is less explicit). All three have set up sections of their sites filled with authoritative information about the election's outcome, toward which they've been aggressively directing users. Twitter even rolled out a set of "pre-bunks," warning users that they're likely to encounter misinformation.
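
None of the companies has published its enforcement logic, so purely to illustrate the shape of such a policy, here is a hypothetical sketch of a premature-victory labeling rule; every name, field and label string in it is invented.

```python
# Purely hypothetical sketch of a "premature victory claim" labeling rule
# of the kind Facebook and Twitter announced. Neither company has published
# its enforcement logic; all names here are invented for illustration.
from dataclasses import dataclass
from typing import Optional, Set

# Races that authoritative sources (state officials, major wire services)
# have actually called.
CALLED_RACES: Set[str] = set()

@dataclass
class Post:
    author_is_candidate: bool
    claims_victory: bool  # assumed output of upstream classifiers or review
    race: str             # e.g., "us-president-2020"

def label_for(post: Post) -> Optional[str]:
    """Return warning-label text, or None if the post needs no label."""
    if (post.author_is_candidate
            and post.claims_victory
            and post.race not in CALLED_RACES):
        return ("Votes are still being counted. See the election "
                "information center for authoritative results.")
    return None
```

In practice the hard part is the upstream step this sketch assumes away: deciding, at speed and scale, whether a post actually constitutes a victory claim.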

"There's an acceptance there is going to be misinformation that goes viral and goes wide before the company can put on the brakes," Sharp said. "These are things that at some level go completely against the decade and a half of engineering optimization at all the social media companies that, from the start, have been focused on engagement."

Just a few years ago, the sorts of questions tech companies are now asking "weren't on the table," said William Fitzgerald, a former Google policy communications manager. "The window has changed so much."

That's forced these companies to write the rules for these scenarios in real time. Every few weeks over the last few months, Facebook, Google and Twitter have announced new policies in direct response to whatever's happening in the news. After QAnon candidates rose to prominence during the Republican primaries, Facebook and YouTube announced a series of new restrictions on the conspiracy movement. After President Trump called on the Proud Boys to "stand by" during a recent debate, which the white supremacist group took as a call to arms, Facebook dashed off a new, niche policy prohibiting calls for poll watching that use militarized language or are intended to intimidate voters.

Bracing for election night

All of that has made for messy enforcement. Twitter, for one, appears to be reassessing its hacked materials policy by the day: It initially blocked a viral New York Post story, then reversed course following an outcry from conservatives and the president. Facebook's attempt to slow the spread of the same story before it was fact-checked was also widely condemned. Last week, its plan to prohibit new political ads the week of the election led to chaos, as campaigns complained that the company also shut off loads of ads that were already approved and running. And the company has repeatedly found ways around its own rules to accommodate posts by prominent conservatives, like Donald Trump Jr. YouTube, meanwhile, has mostly avoided such public backlash, in part because, as Stamos noted, it has some of the least robust policies of the three social media giants.

"The overwhelming disinformation and election integrity threats that we face are domestic now," said Jesse Lehrich, Clinton's former foreign policy spokesman and co-founder of tech advocacy nonprofit Accountable Tech. "I don't think the platforms have been successful in coping with that, and I don't think they're prepared for what's to come."

Stamos, who has helped analyze how the companies' policies on a range of threats have evolved over the years, sees things differently. He believes the platforms' willingness to change is a good thing. "The idea that you could write rules in February that were going to be applicable in October is just silly," he said. "You want to see them engaged and changing the rules to fit the kind of situations that they're actually seeing."

The idea that you could write rules in February that were going to be applicable in October is just silly.

He acknowledges, though, that these policies mean that tech companies are beginning to "butt up against the limits of the consensus of what [tech platforms'] position is in our society." Which is to say: How much control do Americans really want tech companies to have over what the president is allowed to say?

"We're now talking about them directly changing the discourse within a democracy of speech we legally protect in other circumstances," Stamos said. "We have to be really, really careful with what happens next."

Even if tech companies did exercise full control — even if they actually tried to censor the president as he so often claims they already do — Crowell says there are limits to what they alone could accomplish. "If the president of the United States declares victory 'prematurely,' standing at the portico of the White House and tweets out that message, Twitter can certainly address the tweet on Twitter," Crowell said. "But every cable news channel is going to carry the president's remarks live, and some of those cable news channels will air them over and over and over again."

"The reality is that if any national candidate says anything about the election process or results live on national television, you're going to have tens of millions of Americans seeing it and then sharing that news on Facebook, Instagram, Snap, TikTok and Twitter almost instantaneously," Crowell said.

The tech companies can try to act fast. But they can't save us from ourselves.

Trump has told advisers that he is planning to prematurely declare victory on Nov. 3, capping off a yearslong effort to sow disinformation about widespread voter fraud. Election night will provide the ultimate test of what the companies have learned over the past four years — and how far they're willing to go.