Politics

Big Tech is finally ready for Election 2016. Too bad it’s 2020.

Senior staff who worked at Facebook, Google and Twitter in 2016 reflect on whether those companies have done enough to cope today.

Remember when Mark Zuckerberg said the idea that fake news had influenced the 2016 election was "a pretty crazy idea"?

Image: Kisspng and Protocol

On a September afternoon in 2015, Adam Sharp and a clutch of his fellow Twitter employees led Donald Trump on a walking tour of the company's New York City headquarters. The unlikely frontrunner for the Republican presidential nomination had come to the office for a live Q&A with Twitter users.

As they walked the halls of Twitter's newly renovated office, Sharp, who was head of news, government and elections at the time, expected canned questions from Trump about STEM education or high-skilled immigration, the kind politicians always asked when they visited. But Trump wasn't particularly interested in talking tech.

Instead, Sharp said, Trump's first question as he surveyed the space was: "Who did your concrete?"

The banter that day was mostly chummy, Sharp said, as Trump talked about the success he'd had on Twitter both as a candidate and as a reality show host. Before he left, he posted a picture of himself beaming and giving a thumbs up with the Twitter crew. "I had a great time at @TwitterNYC," the tweet read. Some of the Twitter staffers even wore Make America Great Again hats for the photo op. Five years later, Sharp is relieved he did not. "I've never been happier to follow the Dukakis rule," he said. "Don't put on the hat."

To say the president's relationship with tech companies has soured since then would be an almost laughable understatement. Today, he routinely accuses Twitter, Facebook and Google of censorship, and just last week, in a video also published on Twitter, he told his followers that Big Tech "has to be stopped" because "they're taking away your rights."

But Trump's outlook on tech hasn't changed in a vacuum. It's changed because of all of the decisions these companies have made — whether it's labeling election misinformation or suppressing the spread of false news — to make up for their missteps in 2016. Ironically, it was the election of a candidate who visited Twitter five years ago with no tech talking points to speak of that finally forced the country to question tech platforms' power and their dramatic influence on elections.

There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States.

Facebook, Google and Twitter have spent the last four years retracing their steps leading up to the 2016 election, setting up guardrails and hoping not to suffer another public stumble in 2020. So, now that Election Day is finally here, how have they done?

The good news is the big three tech companies are finally prepared to fend off the kind of foreign threats they faced in 2016. The bad news? It's 2020, and the questions this time around — such as what to do if the sitting president declares victory prematurely — are a lot harder to answer.

"There's a pretty reasonably broad consensus that Russian actors should not pretend to be Americans. There's a bit less consensus that Russian actors shouldn't be able to leak stolen emails. There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States," said Alex Stamos, who served as Facebook's chief security officer during the 2016 election. "In the places where there's no democratic consensus, where there's no law to guide them, that's where they have the least preparation."

A failure of imagination

Before the 2016 election, Facebook, Twitter and Google were all eager to play a part in electing the next president. They co-hosted debates with major news outlets, set up elaborate media centers at both conventions and, in Facebook's case, even offered to embed staff inside the presidential campaigns (Trump's campaign famously took them up on it).

All the while, Stamos said, the biggest threats Facebook was watching for were direct cybersecurity attacks from nation-states and any content that intentionally misled people about how and where to vote. Stamos' team did detect and shut down a number of fake accounts linked to the Russian military in the lead-up to Election Day, which it reported to the FBI. But otherwise, the idea that foreign propaganda was swirling around the platform wasn't on anyone's radar.

The same was true on Twitter, where the company proactively removed dozens of tweets that encouraged people to vote by text for Hillary Clinton before Election Day. Those messages violated the company's policy against voter fraud, but at that point, Twitter also had no idea how deeply foreign influence operations had penetrated the public conversation on social media. "There was, perhaps, a failure of imagination at that point," Sharp said.

At the same time, tech companies shared a rather cozy relationship with the presidential candidates. While the Trump campaign embedded Facebook staffers and toured Twitter's offices, Hillary Clinton recruited heavily from Google, contracted a tech firm backed by Eric Schmidt and even considered Bill Gates and Tim Cook as potential running mates. There were brief dustups that hinted at what was to come, like when Twitter CEO Jack Dorsey personally forbade Trump's campaign from buying a custom Crooked Hillary emoji, sparking outrage on the right.

But for the most part, the companies seemed blissfully unaware of what was happening under their noses. Shortly after the election, Mark Zuckerberg went so far as to say the idea that fake news had influenced the election was "a pretty crazy idea."

Troll hunting

After 2016, everything changed. By September of the following year, Stamos' team had detected what he later described as a "web of fake personae that we could confidently tie to Russia," and the intelligence community concluded Russia had used social media to facilitate a widespread influence campaign, kickstarting years of investigations into foreign interference and compelling tech companies to change the way they did business.

For one thing, they started talking to each other and to law enforcement. Before that time, said Nu Wexler, who worked on Twitter's policy communications team during the 2016 election, "There was a lot of distrust between the companies and some of the government agencies that are working in this space." That distrust, Wexler said, largely stemmed from the disclosures about government surveillance that Edward Snowden had made a few years earlier.

But by the beginning of 2018, Stamos was chairing regular meetings with the Department of Homeland Security, the FBI and the three big tech companies, all of which had by then built teams focused on foreign influence operations. Tips from the feds to the tech companies slowly began to trickle in. "In the beginning of 2018, there started to be a flow there, and now that flow looks like a torrent," Stamos said, pointing to the 50 foreign influence operations Facebook has taken down in the last year alone.

That cooperation continues to this day and now includes even more companies. Stamos views that as a significant improvement from 2016, because as foreign threats have evolved, the paper trails required to catch them have also grown more complex.

"The FBI can say: 'We found these people who are being paid,' and then the job of Facebook and Twitter is to take those people, and then find all of their related assets, using information they have that the government does not have," Stamos said.

Testing, testing

The use of political ads to juice foreign influence campaigns also meant that, post-2016, each of the platforms had to rethink its approach to political advertising and microtargeting. "Ads really never had been a big focus of the public policy and government affairs teams," said Ross LaJeunesse, who served as Google's global head of international relations until he left in 2020. "So it felt to me like, 'Wow, we really should have been paying more attention to that.'"

Facebook forced political advertisers to verify they were based in the U.S. and, along with Google, launched a library of political ads that provides hyper-detailed information on the content of ads, how much they cost and which demographics and regions they target. Google followed by limiting election ad targeting categories to age, gender and postal code, dramatically narrowing the type of microtargeting that campaigns could deploy compared to 2016. Twitter, meanwhile, opted to ban political ads altogether.

"The product wasn't robust enough to combat misuse, and frankly, the revenue wasn't enough to justify the investments it would take to make it robust enough," Sharp, who left Twitter in 2016 and is now CEO of the National Academy of Television Arts and Sciences, said of the decision. "If it's going to be too expensive to put seatbelts and airbags in the car, you're just going to discontinue that model."

U.S. elections are still the Super Bowl.

The platforms also spruced up their content moderation policies, broadening their definitions of categories like hate speech and voter suppression. In 2018, in what appeared to be a direct response to the flood of stolen emails WikiLeaks disseminated in 2016, Twitter created its "hacked materials" policy, prohibiting the distribution of hacked content on the platform.

As the years went on, the global platforms stress-tested their expanding policies during hundreds of elections around the world. "U.S. elections are still the Super Bowl, but the other elections are a good opportunity to test-drive programs beforehand," Wexler, who left Twitter for Facebook in 2017, said.

Ahead of Germany's election in 2017, Facebook took down tens of thousands of fake accounts and, for perhaps the first time, touted that as a good thing. That same year, LaJeunesse said his team at Google became focused on protecting a series of elections in Latin America. "When we realized the extent of what happened in the U.S. in 2016, I told the [Latin America] team to throw out their normal policy plans because we had only one job for 2017 and 2018: not allow a repeat of 2016 to happen," LaJeunesse said.

But few countries have as complex an election system as that in the U.S., which has a different election commission for every state. That made the 2018 U.S. midterms the best test of the platforms' growth yet. "It gave us the chance to see how that coordination amongst 50 different jurisdictions, the Department of Homeland Security, the FBI's foreign interference task force and the social media companies were coordinated," said Colin Crowell, who served as Twitter's vice president of global public policy between 2011 and 2019.

The call is coming from inside the White House

Tech giants emerged from the midterms with a relatively clean record in terms of foreign interference. But by then, experts were already warning that the most concerning disinformation was coming from domestic rather than foreign actors.

Fast-forward to 2020, and it's clear those 2018 predictions are all coming true. Research suggests some of the biggest spreaders of election misinformation are domestic, including President Trump and many of his most fervent followers. The platforms have been far more reluctant to take aggressive action against real people in the U.S., citing free expression and a belief that the platforms shouldn't act as the "arbiters of truth." But all those years of guarding against foreign interference didn't prepare these companies for a president who flirts with rejecting the results of the election or uses their platforms to undermine the validity of mail-in voting in a pandemic.

Now, tech platforms are rushing to confront the possibility that perhaps the biggest threat against the integrity of this election is the president himself. "In general today, the work that the companies are doing with respect to foreign state-sponsored actions is vastly improved from 2016," Crowell said. "However, the biggest difference between 2016 and 2020 is that in 2016, foreign state actors were attempting to sow dissent and convey disinformation in the U.S. election contest, and today the source of increasing concern is domestic sources of disinformation."

There's an acceptance there is going to be misinformation that goes viral and goes wide.

In recent months, Facebook and Twitter have announced policies around labeling posts in which one candidate declares victory prematurely (Google's policy is less explicit). They've all set up sections of their sites filled with authoritative information about the outcome, which they've been aggressively directing users toward. Twitter even rolled out a set of "pre-bunks," warning users that they're likely to encounter misinformation.

"There's an acceptance there is going to be misinformation that goes viral and goes wide before the company can put on the brakes," Sharp said. "These are things that at some level go completely against the decade and a half of engineering optimization at all the social media companies that, from the start, have been focused on engagement."

Just a few years ago, the sorts of questions tech companies are now asking "weren't on the table," said William Fitzgerald, a former Google policy communications manager. "The window has changed so much."

That's forced these companies to write the rules around these scenarios in real time. Every few weeks for the last few months, Facebook, Google and Twitter have announced new policies in direct response to whatever's happening in the news. After QAnon candidates rose to prominence during the Republican primaries, Facebook and YouTube announced a series of new restrictions on the conspiracy theory group. After President Trump called on the Proud Boys to "stand by" during a recent debate, which the white supremacist group took as a call to arms, Facebook dashed off a new, niche policy prohibiting calls for poll watching that use militarized language or are intended to intimidate voters.

Bracing for election night

All of that has made for messy enforcement. Twitter, for one, appears to be reassessing its hacked materials policy by the day, after it initially blocked a viral story in The New York Post, then reversed course following an outcry from conservatives and the president. Facebook's attempt to slow the spread of the same story before it was fact-checked was also widely condemned. Last week, its plan to prohibit new political ads the week of the election led to chaos, as campaigns complained that the company also shut off loads of ads that were already approved and running. And the company has repeatedly found ways around its own rules to accommodate posts by prominent conservatives, like Donald Trump Jr. YouTube, meanwhile, has mostly avoided such public backlash, in part because, as Stamos noted, it has some of the least robust policies of the three top social media giants.

"The overwhelming disinformation and election integrity threats that we face are domestic now," said Jesse Lehrich, Clinton's former foreign policy spokesman and co-founder of tech advocacy nonprofit Accountable Tech. "I don't think the platforms have been successful in coping with that, and I don't think they're prepared for what's to come."

Stamos, who has helped analyze how the companies' policies on a range of threats have evolved over the past year, sees things differently. He believes the platforms' willingness to change is a good thing. "The idea that you could write rules in February that were going to be applicable in October is just silly," he said. "You want to see them engaged and changing the rules to fit the kind of situations that they're actually seeing."

The idea that you could write rules in February that were going to be applicable in October is just silly.

He acknowledges, though, that these policies mean that tech companies are beginning to "butt up against the limits of the consensus of what [tech platforms'] position is in our society." Which is to say: How much control do Americans really want tech companies to have over what the president is allowed to say?

"We're now talking about them directly changing the discourse within a democracy of speech we legally protect in other circumstances," Stamos said. "We have to be really, really careful with what happens next."

Even if tech companies did exercise full control — even if they actually tried to censor the president as he so often claims they already do — Crowell says there are limits to what they alone could accomplish. "If the president of the United States declares victory 'prematurely,' standing at the portico of the White House and tweets out that message, Twitter can certainly address the tweet on Twitter," Crowell said. "But every cable news channel is going to carry the president's remarks live, and some of those cable news channels will air them over and over and over again."

"The reality is that if any national candidate says anything about the election process or results live on national television, you're going to have tens of millions of Americans seeing it and then sharing that news on Facebook, Instagram, Snap, TikTok and Twitter almost instantaneously," Crowell said.

The tech companies can try to act fast. But they can't save us from ourselves.

Trump has told advisers that he is planning to prematurely declare victory on Nov. 3, capping off a yearslong effort to sow disinformation about widespread voter fraud. Election night will provide the ultimate test of what the companies have learned over the past four years — and how far they're willing to go.