Hayden Field
People

Coronavirus is AI moderation’s big test. Don’t expect flying colors.

The pandemic forced some social networks to send home content moderators and instead rely on AI. What happens now?

Facebook is moving its content-moderation efforts to AI because of coronavirus, and the results will be "a little less effective in the near term," CEO Mark Zuckerberg said Thursday. (Photo: Drew Angerer/Getty Images)

Tech CEOs have said for years that artificial intelligence is the future of content moderation. They never expected that the future would come so soon.

As the coronavirus outbreak sweeps the planet, Facebook, YouTube and Twitter have sent their legions of global content moderators home. That leaves the task of identifying and removing hate speech, child sexual abuse imagery, terrorist propaganda, violent imagery and many other types of offensive content largely to the algorithms these companies have been steadily building over the last few years, even as they built their content moderation armies.




What this AI-moderated future will look like is unclear. But for Facebook, moderation will be "a little less effective in the near term" as a result of the changes, Mark Zuckerberg said Thursday on a call with reporters. If the world learns anything from this unavoidable test of automated moderation, it will be that artificially intelligent systems still have a lot to learn.

Facebook has committed to sending all of its contract workers home, but will continue to pay them for the foreseeable future. That's not a decision the company would make lightly: Over the last few years, Facebook has invested massively in contracting content moderators around the world. But watching beheading videos and suicide attempts on a screen all day is dangerous enough for moderators' mental health when they're working in an office; imagine doing it at home, potentially with family around. "Working from home on those types of things, that will be very challenging to enforce that people are getting the mental health support that they needed," Zuckerberg said. And that's to say nothing of the privacy concerns of doing the work from home.

The decision is no less challenging for YouTube, which is sending thousands of global moderators home as well and leaning into algorithmic moderation. "We will temporarily start relying more on technology to help with some of the work normally done by reviewers," the company announced Monday in a blog post. "This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content."

Twitter, meanwhile, will institute a "triage system" to prioritize the most harmful content as its moderation team goes home. Twitter hasn't yet shared additional details on how, exactly, that will work.

For as much time and money as these companies have spent on developing AI moderation systems, they know — as Zuckerberg conceded — that they're still a long way from perfect.

"We want to be clear: While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring," echoed Twitter employees Vijaya Gadde and Matt Derella in a blog post.

YouTube's past efforts to automatically crack down on banned content have led to massive takedowns of videos that researchers and educators relied on to document, for example, the war in Syria.

Tech giants' transparency reports show that these machine learning systems already have varying levels of success in identifying different kinds of problematic content. For instance, Facebook is fairly successful at spotting terrorist content. In the second and third quarters of last year, it removed 98% of terrorist propaganda automatically, before a single user reported it. It's less successful when it comes to content that's more open to interpretation. Over the same period, Facebook proactively removed just 80% of hate speech-related content before any users reported it.

"One of the things that's really, really hard — and has always been hard — is when people post bad content that's removable, but they post it in protest or to raise awareness," said Kate Klonick, an assistant professor of law at St. John's University. "Generally, the biggest threat is going to be over-censorship rather than under-censorship."

This period will undoubtedly see some overzealous removal of content and longer wait times for users appealing the platforms' decisions. For that reason, the tech giants are also tinkering with the rules they've clung to in the past.

YouTube has said that during this period, it won't issue strikes on content except for cases in which the platform has "high confidence that it's violative." Typically, a user receives one warning, then up to three strikes before a channel is permanently removed from YouTube. This switch, of course, opens YouTube up to the type of accusations of bias that its strike system has sought to avoid in the past. The company will also take extra precautions with livestreams, preventing some unreviewed content from being searchable or available via the homepage or recommendations feature.
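The escalation YouTube describes is simple enough to express as a tiny state machine. The sketch below is a hypothetical rendering of the published policy, not YouTube's actual enforcement code:

```python
# Hypothetical illustration of YouTube's published escalation policy:
# one warning, then up to three strikes before the channel is removed.
def next_enforcement_action(warnings: int, strikes: int) -> str:
    if warnings == 0:
        return "warning"            # a first violation draws a warning only
    if strikes < 2:
        return "strike"             # later violations accrue strikes
    return "channel_terminated"     # the third strike removes the channel
```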

Twitter, meanwhile, has expanded its definition of "harm" to include content related to COVID-19 "that goes directly against guidance from authoritative sources of global and local public health information." And, like YouTube, it's also adjusting its rules so that it will no longer suspend accounts permanently based on automated enforcement.

Despite the inevitable challenges, there are those, like Marco Iansiti, professor at Harvard Business School and author of "Competing in the Age of AI," who believe this episode will "drive even more innovation on the AI front," referencing the advances in manufacturing and air travel that followed World War II.

Some of those advancements were already underway. Last fall, for instance, Facebook explained a new capability it's deploying known as Whole Post Integrity Embeddings, which allows its AI systems to simultaneously analyze the entirety of a post — the images and the text — for signs of a violation. This can be especially helpful for posts where context is key, like, say, illegal drug sales. That particular innovation seems to be making a difference: Facebook reported deleting about 4.4 million pieces of drug sale content in the third quarter of 2019, 97.6% of which was proactively detected. That's compared with just 841,000 pieces of the same type of content in the first quarter of that year, of which 84% was flagged by automation.
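Facebook hasn't published the internals of Whole Post Integrity Embeddings, but the core idea it describes, projecting a post's text and image features into one joint representation that a single classifier scores, can be sketched in a few lines. Everything below is an assumption for illustration: the feature dimensions, the fusion scheme and the policy labels are placeholders, not Facebook's actual architecture.

```python
# Hypothetical sketch of a fused text-plus-image post classifier.
# Not Facebook's implementation; all dimensions are illustrative.
import torch
import torch.nn as nn

class FusedPostClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, joint_dim=256, n_policies=4):
        super().__init__()
        # Project each modality into a shared space.
        self.text_proj = nn.Linear(text_dim, joint_dim)
        self.image_proj = nn.Linear(image_dim, joint_dim)
        # Score the combined representation against each policy category.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * joint_dim, n_policies),
        )

    def forward(self, text_feats, image_feats):
        # The model sees the whole post at once: caption and image together,
        # so context like sale-implying text plus a photo of pills can add up
        # to a violation that neither modality signals on its own.
        joint = torch.cat(
            [self.text_proj(text_feats), self.image_proj(image_feats)], dim=-1
        )
        return self.classifier(joint)

# Stand-in features; a real system would feed in the outputs of pretrained
# text and image encoders run over the actual post.
model = FusedPostClassifier()
scores = model(torch.randn(1, 768), torch.randn(1, 2048))
```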

These developments will be critical to Facebook and other tech giants as they try to combat a globe's worth of problematic content without their usual first line of defense. Yet, there are some categories of forbidden content that are simply too risky to leave up to technology, meaning all of these companies will continue to have at least some human beings screening the most harmful posts.

Twitter said it's working on providing wellness resources to moderators so they can continue working from home. Going forward, Facebook will task some of its full-time employees with moderating the most imminently dangerous content, like posts related to suicide and self-injury, while it doesn't have its army of contractors on hand. "We basically shifted our work, so that it's more full-time work force, and we're surging some of that, which inherently is going to create a trade-off against some other types of content that may not have as imminent physical risks for people," Zuckerberg said.

That may mean that some of Facebook's full-time employees will continue to work on these issues in-person, Zuckerberg said, "in the same way that first responders and health organizations or the police need to act to work on different threats." Facebook says that these full-time employees will all have the option to work from home, and those who do will have weekly one-on-one check-ins with licensed therapists. They'll also have weekly virtual group therapy sessions.




Even as tech companies try to ensure the safety of their workers, they have to balance the safety of their billions of users, too. And machine learning alone isn't yet up to that task.
