Politics

Big Tech is finally ready for Election 2016. Too bad it’s 2020.

Senior staff who worked at Facebook, Google and Twitter in 2016 reflect on whether those companies have done enough to cope today.


Remember when Mark Zuckerberg said the idea that fake news had influenced the 2016 election was "a pretty crazy idea"?

Image: Kisspng and Protocol

On a September afternoon in 2015, Adam Sharp and a clutch of his fellow Twitter employees led Donald Trump on a walking tour of the company's New York City headquarters. The unlikely frontrunner for the Republican presidential nomination had come to the office for a live Q&A with Twitter users.

As they walked the halls of Twitter's newly renovated office, Sharp, who was head of news, government and elections at the time, expected canned questions from Trump about STEM education or high-skilled immigration, which politicians always asked when they visited. But Trump wasn't particularly interested in talking tech.

Instead, Sharp said, Trump's first question as he surveyed the space was: "Who did your concrete?"

The banter that day was mostly chummy, Sharp said, as Trump talked about the success he'd had on Twitter both as a candidate and as a reality show host. Before he left, he posted a picture of himself beaming and giving a thumbs up with the Twitter crew. "I had a great time at @TwitterNYC," the tweet read. Some of the Twitter staffers even wore Make America Great Again hats for the photo op. Five years later, Sharp is relieved he did not. "I've never been happier to follow the Dukakis rule," he said. "Don't put on the hat."

To say the president's relationship with tech companies has soured since then would be a laughable understatement. Today, he routinely accuses Twitter, Facebook and Google of censorship, and just last week, in a video also published on Twitter, he told his followers that Big Tech "has to be stopped" because "they're taking away your rights."

But Trump's outlook on tech hasn't changed in a vacuum. It's changed because of all of the decisions these companies have made — whether it's labeling election misinformation or suppressing the spread of false news — to make up for their missteps in 2016. Ironically, it was the election of a candidate who visited Twitter five years ago with no tech talking points to speak of that finally forced the country to question tech platforms' power and their dramatic influence on elections.


Facebook, Google and Twitter have spent the last four years retracing their steps leading up to the 2016 election, setting up guardrails and hoping not to suffer another public stumble in 2020. So, now that Election Day is finally here, how have they done?

The good news is the big three tech companies are finally prepared to fend off the kind of foreign threats they faced in 2016. The bad news? It's 2020, and the questions this time around — such as what to do if the sitting president declares victory prematurely — are a lot harder to answer.

"There's a pretty reasonably broad consensus that Russian actors should not pretend to be Americans. There's a bit less consensus that Russian actors shouldn't be able to leak stolen emails. There is no consensus at all about the platforms downranking or labeling the speech of the president of the United States," said Alex Stamos, who served as Facebook's chief security officer during the 2016 election. "In the places where there's no democratic consensus, where there's no law to guide them, that's where they have the least preparation."

A failure of imagination

Before the 2016 election, Facebook, Twitter and Google were all eager to play a part in electing the next president. They co-hosted debates with major news outlets, set up elaborate media centers at both conventions and, in Facebook's case, even offered to embed staff inside the presidential campaigns (Trump's campaign famously took them up on it).

All the while, Stamos said, the biggest threats Facebook was watching for were direct cybersecurity attacks from nation-states and any content that intentionally misled people about how and where to vote. Stamos' team did detect and shut down a number of fake accounts linked to the Russian military leading up to Election Day, which it reported to the FBI. But otherwise, the idea that foreign propaganda was swirling around the platform wasn't on anyone's radar.

The same was true on Twitter, where the company proactively removed dozens of tweets that encouraged people to vote by text for Hillary Clinton before Election Day. Those messages violated the company's policy against voter fraud, but at that point, Twitter also had no idea how deeply foreign influence operations had penetrated the public conversation on social media. "There was, perhaps, a failure of imagination at that point," Sharp said.

At the same time, tech companies shared a rather cozy relationship with the presidential candidates. While the Trump campaign welcomed embedded Facebook staffers and toured Twitter's offices, Hillary Clinton recruited heavily from Google, contracted a tech firm backed by Eric Schmidt and even considered Bill Gates and Tim Cook as potential running mates. There were brief dustups that hinted at what was to come, like when Twitter CEO Jack Dorsey personally forbade Trump's campaign from buying a custom Crooked Hillary emoji, sparking outrage on the right.

But for the most part, the companies seemed blissfully unaware of what was happening under their noses. Shortly after the election, Mark Zuckerberg went so far as to say the idea that fake news had influenced the election was "a pretty crazy idea."

Troll hunting

After 2016, everything changed. By September of the following year, Stamos' team had detected what he later described as a "web of fake personae that we could confidently tie to Russia," and the intelligence community concluded Russia had used social media to facilitate a widespread influence campaign, kickstarting years of investigations into foreign interference and compelling tech companies to change the way they did business.

For one thing, they started talking to each other and to law enforcement. Before that time, said Nu Wexler, who worked on Twitter's policy communications team during the 2016 election, "There was a lot of distrust between the companies and some of the government agencies that are working in this space." That distrust, Wexler said, largely stemmed from the disclosures about government surveillance that Edward Snowden had made a few years earlier.

But by the beginning of 2018, Stamos began chairing regular meetings with the Department of Homeland Security, the FBI and the three big tech companies, which had all built teams by then focused on foreign influence operations. The tips from the feds to the tech companies came slowly trickling in. "In the beginning of 2018, there started to be a flow there, and now that flow looks like a torrent," Stamos said, pointing to the 50 foreign influence operations Facebook has taken down in the last year alone.

That cooperation continues to this day and now includes even more companies. Stamos views that as a significant improvement from 2016, because as foreign threats have evolved, the paper trails required to catch them have also grown more complex.

"The FBI can say: 'We found these people who are being paid,' and then the job of Facebook and Twitter is to take those people, and then find all of their related assets, using information they have that the government does not have," Stamos said.

Testing, testing

The use of political ads to juice foreign influence campaigns also meant that, post-2016, each of the platforms had to rethink its approach to political ads and microtargeting. "Ads really never had been a big focus of the public policy and government affairs teams," said Ross LaJeunesse, who served as Google's global head of international relations until he left in 2020. "So it felt to me like, 'Wow, we really should have been paying more attention to that.'"

Facebook forced advertisers to verify they were based in the U.S. and, along with Google, launched a library of political ads that provides hyper-detailed information on the content of each ad, how much it cost and which demographics and regions it targeted. Google followed by restricting election ad targeting to age, gender and postal code, dramatically limiting the kind of microtargeting campaigns could deploy compared to 2016. Twitter, meanwhile, opted to ban political ads altogether.

"The product wasn't robust enough to combat misuse, and frankly, the revenue wasn't enough to justify the investments it would take to make it robust enough," Sharp, who left Twitter in 2016 and is now CEO of the National Academy of Television Arts and Sciences, said of the decision. "If it's going to be too expensive to put seatbelts and airbags in the car, you're just going to discontinue that model."


The platforms also spruced up their content moderation policies, broadening their definitions of categories like hate speech and voter suppression. In 2018, in what appeared to be a direct response to the flood of stolen emails Wikileaks disseminated in 2016, Twitter created its "hacked materials" policy, prohibiting the distribution of hacked content on the platform.

As the years went on, the global platforms stress-tested their expanding policies during hundreds of elections around the world. "U.S. elections are still the Super Bowl, but the other elections are a good opportunity to test-drive programs beforehand," Wexler, who left Twitter for Facebook in 2017, said.

Ahead of Germany's election in 2017, Facebook took down tens of thousands of fake accounts, and for perhaps the first time, touted that as a good thing. That same year, LaJeunesse said his team at Google became focused on protecting a series of elections in Latin America. "When we realized the extent of what happened in the U.S. in 2016, I told the [Latin America] team to throw out their normal policy plans because we had only one job for 2017 and 2018: not allow a repeat of 2016 to happen," LaJeunesse said.

But few countries have as complex an election system as that in the U.S., which has a different election commission for every state. That made the 2018 U.S. midterms the best test of the platforms' growth yet. "It gave us the chance to see how that coordination amongst 50 different jurisdictions, the Department of Homeland Security, the FBI's foreign interference task force and the social media companies were coordinated," said Colin Crowell, who served as Twitter's vice president of global public policy between 2011 and 2019.

The call is coming from inside the White House

Tech giants emerged from the midterms with a relatively clean record in terms of foreign interference. But by then, experts were already warning that the most concerning disinformation was coming from domestic rather than foreign actors.

Fast-forward to 2020, and it's clear those 2018 predictions are all coming true. Research suggests some of the biggest spreaders of misinformation regarding elections are domestic ones, including President Trump and many of his most fervent followers. The platforms have been far more reluctant to take aggressive action against real people in the U.S., citing free expression and a belief that the platforms shouldn't act as the "arbiters of truth." But all those years of preparing to fend off foreign interference didn't prepare these companies for a president who flirts with the idea of rejecting the results of the election or uses their platforms to undermine the validity of mail-in voting in a pandemic.

Now, tech platforms are rushing to confront the possibility that perhaps the biggest threat against the integrity of this election is the president himself. "In general today, the work that the companies are doing with respect to foreign state-sponsored actions is vastly improved from 2016," Crowell said. "However, the biggest difference between 2016 and 2020 is that in 2016, foreign state actors were attempting to sow dissent and convey disinformation in the U.S. election contest, and today the source of increasing concern is domestic sources of disinformation."


In recent months, Facebook and Twitter have announced policies around labeling posts in which one candidate declares victory prematurely (Google's policy is less explicit). They've all set up sections of their sites filled with authoritative information about the outcome, which they've been aggressively directing users toward. Twitter even rolled out a set of "pre-bunks," warning users that they're likely to encounter misinformation.

"There's an acceptance there is going to be misinformation that goes viral and goes wide before the company can put on the brakes," Sharp said. "These are things that at some level go completely against the decade and a half of engineering optimization at all the social media companies that, from the start, have been focused on engagement."

The sorts of questions tech companies are now asking "weren't on the table" just a few years ago, said William Fitzgerald, a former Google policy communications manager. "The window has changed so much."

That's forced these companies to write the rules around these scenarios in real time. Every few weeks for the last few months, Facebook, Google and Twitter have announced new policies in direct response to whatever's happening in the news. After QAnon candidates rose to prominence during the Republican primaries, Facebook and YouTube announced a series of new restrictions on the conspiracy theory group. After President Trump called on the Proud Boys to "stand by" during a recent debate, which the white supremacist group took as a call to arms, Facebook dashed off a new, niche policy prohibiting calls for poll watching that use militarized language or are intended to intimidate voters.

Bracing for election night

All of that has made for messy enforcement. Twitter, for one, appears to be reassessing its hacked materials policy by the day, after it initially blocked a viral story in The New York Post, then reversed course following an outcry from conservatives and the president. Facebook's attempt to slow the spread of the same story before it was fact-checked was also widely condemned. Last week, its plan to prohibit new political ads the week of the election led to chaos, as campaigns complained that the company also shut off loads of ads that were already approved and running. And the company has repeatedly found ways around its own rules to accommodate posts by prominent conservatives, like Donald Trump Jr. YouTube, meanwhile, has mostly avoided such public backlash, in part, because, as Stamos noted, it has some of the least robust policies of the three top social media giants.

"The overwhelming disinformation and election integrity threats that we face are domestic now," said Jesse Lehrich, Clinton's former foreign policy spokesman and co-founder of tech advocacy nonprofit Accountable Tech. "I don't think the platforms have been successful in coping with that, and I don't think they're prepared for what's to come."

Stamos, who has helped analyze how the companies' policies on a range of threats have evolved over the past year, sees things differently. He believes platforms' willingness to change is a good thing. "The idea that you could write rules in February that were going to be applicable in October is just silly," he said. "You want to see them engaged and changing the rules to fit the kind of situations that they're actually seeing."


He acknowledges, though, that these policies mean that tech companies are beginning to "butt up against the limits of the consensus of what [tech platforms'] position is in our society." Which is to say: How much control do Americans really want tech companies to have over what the president is allowed to say?

"We're now talking about them directly changing the discourse within a democracy of speech we legally protect in other circumstances," Stamos said. "We have to be really, really careful with what happens next."

Even if tech companies did exercise full control — even if they actually tried to censor the president as he so often claims they already do — Crowell says there are limits to what they alone could accomplish. "If the president of the United States declares victory 'prematurely,' standing at the portico of the White House and tweets out that message, Twitter can certainly address the tweet on Twitter," Crowell said. "But every cable news channel is going to carry the president's remarks live, and some of those cable news channels will air them over and over and over again."

"The reality is that if any national candidate says anything about the election process or results live on national television, you're going to have tens of millions of Americans seeing it and then sharing that news on Facebook, Instagram, Snap, TikTok and Twitter almost instantaneously," Crowell said.

The tech companies can try to act fast. But they can't save us from ourselves.

Trump has told advisers that he is planning to prematurely declare victory on Nov. 3, capping off a yearslong effort to sow disinformation about widespread voter fraud. Election night will provide the ultimate test of what the companies have learned over the past four years — and how far they're willing to go.
Protocol | China

Everything you need to know about the Zhihu IPO

The Beijing-based question-and-answer site just filed for an IPO.

The Zhihu homepage.

David Wertime/Protocol

Investors eager to buy a slice of China's urban elite internet will soon have the chance. Zhihu, a Beijing-based question-and-answer site similar to the U.S.-based Quora, has just filed for an IPO to sell American Depositary Shares on the New York Stock Exchange.

What does Zhihu do?

Zhihu is China's largest online Q&A platform — the name comes from the expression "Do you know?" in classical Chinese. It was founded 10 years ago by Yuan Zhou (周源), a former journalist, and spent two years as an invite-only online platform. It quickly built a reputation as a source for quality answers and has drawn a community of elite professionals, including ZhenFund managing partner Bob Xu and venture capitalist Kai-Fu Lee, also an early investor.

Over time, the Chinese-language Zhihu has become more mainstream, and now says it hosts 315.3 million questions and answers contributed by 43.1 million "creators." (Quora, about one year older than Zhihu, had almost 61 million questions and 108 million answers by the end of 2019). The website has grown into a content platform where people also keep diaries, write fiction and blog as social media influencers.

Zhihu users do not look like China as a whole. More than half are men, most live in "Tier 1" cities and more than three-quarters are under 30 years old.

Zhihu continues to emphasize the quality of its content. "Zhihu is also recognized as the most trustworthy online content community and widely regarded as offering the highest-quality content in China," its prospectus says.

Zhihu's financials

Zhihu registered for its IPO via the Jumpstart Our Business Startups Act, a.k.a. the JOBS Act, which has reduced disclosure requirements for companies with less than $1.07 billion in annual revenue. Zhihu's revenue doubled from 2019 to 2020, but still only reached $207.2 million, and the company is short of profitability with a 2020 net loss of $79.3 million. The company says it's "still in an early stage of monetization" with "significant runway for growth across multiple new monetization channels."

Trend lines are good. Zhihu has managed to double revenue while keeping expenses largely constant, with selling and marketing aimed at growing Zhihu's user base as the biggest single expense.

The company is trying to diversify its revenue streams. In 2019, 86.1% of revenue came from advertising. In 2020, advertising accounted for 62.4%, while "content-commerce," meaning native advertising, took in 10%. The rest was mostly paid memberships.

What's next for Zhihu

After years of evincing a relaxed attitude toward monetization, Zhihu is putting itself in the hot seat to do just that. Zhihu is betting that monetizing Chinese web users will get easier over time. The prospectus describes "significant growth potential" in China's "online content community market" and says average revenue per user in China is expected to more than triple from about $55 in 2019 to about $199 in 2025, with revenue in the overall market reaching a projected $200 billion in 2025.

The company looks like it will basically try everything to monetize, and see what sticks. It plans to "ramp up our online education service" and to "continue to explore other innovative monetization channels, such as content e-commerce and IP-based monetization."

The prospectus also mentions AI frequently, touting Zhihu's AI content moderation tool wali as well as a "question routing system" and "feed recommendation and search systems." However, the depth and quality of content remain far more important to Zhihu's success. Users have joked on Zhihu about the poor quality of its wali filter.

What could go wrong?

Zhihu could fail to turn a profit. Like most content platforms, Zhihu has found it hard to monetize its traffic and the vast amount of free content at its core. The platform was built on the premise that anyone can acquire professional knowledge easily, which means users are not inclined to pay.

Since 2016, Zhihu has tried many monetization models: paid physical/virtual events, online courses taught by its top creators, premium memberships and paid consulting services. None have been a hit. Zhihu Live, the paid virtual event product, attracted a lot of public attention in 2016 and 2017, but since then its popularity has waned. According to the prospectus, Zhihu currently has 2.4 million paying members, or only 3.4% of its monthly active users.

Zhihu also faces intense competition. Defined narrowly, it has no rivals, with would-be contenders like Baidu Zhidao and Wukong, owned by ByteDance, falling by the wayside. But Zhihu has positioned itself as something more: a community for diverse content. In this regard, it's competing with big public-facing social media platforms such as the Twitter-like Weibo and the video site Bilibili. While Zhihu's base of 68.5 million monthly active users is growing fast, Weibo has over 500 million and Bilibili over 200 million. Zhihu differentiates itself with the quality and depth of its content, but maintaining that creates inevitable tension with the business imperative to expand.

Like every content platform in China, Zhihu is subject to rigid state censorship and faces harsh penalties for failing to police speech itself. Politically sensitive questions are nowhere to be found on the platform, and other topics, including transgender rights, have been censored in the past. Even so, in March 2018, Zhihu was taken off every mobile app store for seven days at the request of Beijing's municipal Cyberspace Administration. Authorities did not specify why, but the suspension probably related to subtle criticisms of Xi Jinping on the platform; Zhihu promised to "make adjustments."

Zhihu's prospectus is largely mum on the censorship question, perhaps because the company feels it has gotten good enough at it. Zhihu says it has a "comprehensive community governance system" that combines "AI-powered content assessment algorithms" with the ability of users to report one another, as well as "proprietary know-how." These resemble the tools most big Chinese social media platforms use to censor content and stay in Beijing's good graces.

Who gets rich?

Here's what we know:

  • Founder, CEO and Chairman Yuan Zhou currently owns 8.2% of Zhihu, plus options worth another 8% that he can exercise within 60 days of the IPO and that are held in a separate holding company controlled by a trust of which he is the beneficiary. Once exercised, those options will give Zhou the vast majority of aggregate voting power.
  • Innovation Works, beneficially owned by Peter Liu and Kai-Fu Lee, owns 13.1% of Zhihu. According to corporate database Qichacha, Innovation Works invested about $153,000 in an angel round in January 2011, then made follow-on investments in the C and D rounds.
  • Tencent owns 12.3%.
  • Qiming Entities owns 11.3%. According to corporate database Qichacha, Qiming invested $1 million in Zhihu's series A, then made follow-on investments in the B, C and D rounds.

Kuaishou, Baidu and Sogou also own stakes, as does SAIF IV Mobile Apps Limited.

Innovation Works, backed by Kai-Fu Lee and Peter Liu, and Qiming Ventures, both of which invested early and often, look like the biggest winners besides founder Zhou.

What people are saying

"Zhihu, if it ever wants to be a truly massive platform, will need to go out of the hardcore knowledge-sharing space, and become more mainstream, more entertaining, and yes, even less intellectual. But to capture that market, who better to partner with than Kuaishou, who built its business on exactly those characteristics?" —Ying-Ying Lu, co-host of Tech Buzz China.

"After separating video content into its own feed, Zhihu is now in competition with Bilibili and [ByteDance-owned] Xigua Video. Education-themed videos used to be one of the important growth drivers for the latter two apps. Now [Zhihu], the app that specialized in educational content, has joined the game." —Lan Xi (pen name), independent tech writer.

David Wertime

David Wertime is Protocol's executive director. David is a widely cited China expert with twenty years' experience who has served as a Peace Corps Volunteer in China, founded and sold a media company, and worked in senior positions within multiple newsrooms. He also hosts POLITICO's China Watcher newsletter. After four years working on international deals for top law firms in New York and Hong Kong, David co-founded Tea Leaf Nation, a website that tracked Chinese social media, later selling it to the Washington Post Company. David then served as Senior Editor for China at Foreign Policy magazine, where he launched the first Chinese-language articles in the publication's history. Thereafter, he was Entrepreneur in Residence at the Lenfest Institute for Journalism, which owns the Philadelphia Inquirer. In 2019, David joined Protocol's parent company and in 2020, launched POLITICO's widely-read China Watcher. David is a Senior Fellow at the Foreign Policy Research Institute, a Research Associate at the University of Pennsylvania's Center for the Study of Contemporary China, a Member of the National Committee on U.S.-China Relations, and a Truman National Security fellow. He lives in San Francisco with his wife Diane and his puppy, Luna.

People

Google’s trying to build a more inclusive, less chaotic future of work

Javier Soltero, the VP of Workspace at Google, said time management is everything.

With everyone working in new places, Google believes time management is everything.

Image: Google

Javier Soltero was still pretty new to the G Suite team when the pandemic hit. Pretty quickly, everything about Google's hugely popular suite of work tools seemed to change. (It's not even called G Suite anymore, but rather Workspace.) And Soltero had to both guide his team through a new way of working and help them build the tools to guide billions of Workspace users.

This week, Soltero and his team announced a number of new Workspace features designed to help people manage their time, collaborate and get stuff done more effectively. The update includes new tools for frontline workers to communicate better, more hardware for hybrid meetings, lots of Assistant and Calendar features to make planning easier and a picture-in-picture mode so people can be on Meet calls without really having to pay attention.

David Pierce

David Pierce ( @pierce) is Protocol's editor at large. Prior to joining Protocol, he was a columnist at The Wall Street Journal, a senior writer with Wired, and deputy editor at The Verge. He owns all the phones.

Transforming 2021

Blockchain, QR codes and your phone: the race to build vaccine passports

Digital verification systems could give people the freedom to work and travel. Here's how they could actually happen.

One day, you might not need to carry that physical passport around, either.

Photo: CommonPass

There will come a time, hopefully in the near future, when you'll feel comfortable getting on a plane again. You might even stop at the lounge at the airport, head to the regional office when you land and maybe even see a concert that evening. This seemingly distant reality will depend upon vaccine rollouts continuing on schedule, an open-source digital verification system and, amazingly, the blockchain.

Several countries around the world have begun to prepare for what comes after vaccinations. Swaths of the population will be vaccinated before others, but that hasn't stopped industries decimated by the pandemic from pioneering ways to get some people back to work and play. One of the most promising efforts is the idea of a "vaccine passport," which would allow individuals to show proof that they've been vaccinated against COVID-19 in a way that could be verified by businesses to allow them to travel, work or relax in public without a great fear of spreading the virus.

Mike Murphy

Mike Murphy ( @mcwm) is the director of special projects at Protocol, focusing on the industries being rapidly upended by technology and the companies disrupting incumbents. Previously, Mike was the technology editor at Quartz, where he frequently wrote on robotics, artificial intelligence, and consumer electronics.

Protocol | Policy

Far-right misinformation: Facebook's most engaging news

A new study shows that before and after the election, far-right misinformation pages drew more engagement than all other partisan news.

A new study finds that far-right misinformation pulls in more engagement on Facebook than other types of partisan news.

Photo: Brett Jordan/Unsplash

In the months before and after the 2020 election, far-right pages that are known to spread misinformation consistently garnered more engagement on Facebook than any other partisan news, according to a New York University study published Wednesday.

The study looked at Facebook engagement for news sources across the political spectrum between Aug. 10, 2020 and Jan. 11, 2021, and found that on average, far-right pages that regularly trade in misinformation raked in 65% more engagement per follower than other far-right pages that aren't known for spreading misinformation.

Issie Lapowsky
Issie Lapowsky (@issielapowsky) is a senior reporter at Protocol, covering the intersection of technology, politics, and national affairs. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University’s Center for Publishing on how tech giants have affected publishing. Email Issie.