
By Hayden Field

Coronavirus is AI moderation’s big test. Don’t expect flying colors.

The pandemic forced some social networks to send home content moderators and instead rely on AI. What happens now?


Facebook is moving its content-moderation efforts to AI because of coronavirus, and moderation will be "a little less effective in the near term," CEO Mark Zuckerberg said Thursday.

Photo: Drew Angerer/Getty Images

Tech CEOs have said for years that artificial intelligence is the future of content moderation. They never expected that the future would come so soon.

As the coronavirus outbreak sweeps the planet, Facebook, YouTube and Twitter have sent their legions of global content moderators home. That leaves the task of identifying and removing hate speech, child sexual abuse imagery, terrorist propaganda, violent imagery and many other types of offensive content largely to the algorithms these companies have been steadily building over the last few years, even as they built their content moderation armies.




What this AI-moderated future will look like is unclear. But for Facebook, moderation will be "a little less effective in the near term" as a result of the changes, Mark Zuckerberg said Thursday on a call with reporters. If the world learns anything from this unavoidable test of automated moderation, it will be that artificially intelligent systems still have a lot to learn.

Facebook has committed to sending all of its contract workers home, but will continue to pay them for the foreseeable future. That's not a decision the company would make lightly: Over the last few years, Facebook has invested massively in contracting content moderators around the world. But watching beheading videos and suicide attempts on a screen all day is dangerous enough for moderators' mental health when they're working in an office; imagine doing it at home, potentially with family around. "Working from home on those types of things, that will be very challenging to enforce that people are getting the mental health support that they needed," Zuckerberg said. And that's to say nothing of the privacy concerns of doing the work from home.

The decision is no less challenging for YouTube, which is sending thousands of global moderators home as well and leaning into algorithmic moderation. "We will temporarily start relying more on technology to help with some of the work normally done by reviewers," the company announced Monday in a blog post. "This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content."

Twitter, meanwhile, will institute a "triage system" to prioritize the most harmful content as its moderation team goes home. Twitter hasn't yet shared additional details on how, exactly, that will work.
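Twitter hasn't said how that triage will work, but the general pattern is a severity-ordered queue: every flagged post gets a priority, and reviewers or automated actions always pull the most harmful item first. A minimal sketch in Python (the severity tiers and field names are illustrative assumptions, not Twitter's actual system):

```python
import heapq

# Hypothetical severity tiers; lower number = reviewed sooner.
SEVERITY = {"imminent_harm": 0, "covid_misinfo": 1, "spam": 2}

class TriageQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal-severity reports keep arrival order

    def add(self, report_id: str, category: str) -> None:
        heapq.heappush(self._heap, (SEVERITY[category], self._counter, report_id))
        self._counter += 1

    def next_report(self) -> str:
        # Always surfaces the most harmful pending report first.
        return heapq.heappop(self._heap)[2]

queue = TriageQueue()
queue.add("r1", "spam")
queue.add("r2", "imminent_harm")
print(queue.next_report())  # "r2": the higher-severity report jumps the line
```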

For as much time and money as these companies have spent on developing AI moderation systems, they know — as Zuckerberg conceded — that they're still a long way from perfect.

"We want to be clear: While we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring," echoed Twitter employees Vijaya Gadde and Matt Derella in a blog post.

YouTube's past efforts to automatically crack down on banned content have led to massive takedowns of videos that researchers and educators relied on to document, for example, the war in Syria.

Tech giants' transparency reports show that these machine learning systems already have varying levels of success in identifying different kinds of problematic content. For instance, Facebook is fairly successful at spotting terrorist content. In the second and third quarters of last year, it removed 98% of terrorist propaganda automatically, before a single user reported it. It's less successful when it comes to content that's more open to interpretation. During the same period, Facebook proactively removed just 80% of hate speech-related content before any users reported it.
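Those percentages come from a simple ratio: of all the content a platform eventually removed in a category, how much its automated systems flagged before any user reported it. A quick illustration (the function is ours, for explanation only):

```python
def proactive_rate(flagged_before_report: int, total_removed: int) -> float:
    """Share of removed content that automation caught before any user report."""
    return flagged_before_report / total_removed

# Illustrative per-100 figures from the transparency reports cited above:
print(f"{proactive_rate(98, 100):.0%}")  # terrorist propaganda: 98%
print(f"{proactive_rate(80, 100):.0%}")  # hate speech: 80%
```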

"One of the things that's really, really hard — and has always been hard — is when people post bad content that's removable, but they post it in protest or to raise awareness," said Kate Klonick, an assistant professor of law at St. John's University. "Generally, the biggest threat is going to be over-censorship rather than under-censorship."

This period will undoubtedly see some overzealous removal of content and longer wait times for users appealing the platforms' decisions. For that reason, the tech giants are also tinkering with the rules they've clung to in the past.

YouTube has said that during this period, it won't issue strikes on content except for cases in which the platform has "high confidence that it's violative." Typically, a user receives one warning, then up to three strikes before a channel is permanently removed from YouTube. This switch, of course, opens YouTube up to the type of accusations of bias that its strike system has sought to avoid in the past. The company will also take extra precautions with livestreams, preventing some unreviewed content from being searchable or available via the homepage or recommendations feature.

Twitter, meanwhile, has expanded its definition of "harm" to include content related to COVID-19 "that goes directly against guidance from authoritative sources of global and local public health information." And, like YouTube, it's also adjusting its rules so that it will no longer suspend accounts permanently based on automated enforcement.

Despite the inevitable challenges, there are those, like Marco Iansiti, professor at Harvard Business School and author of "Competing in the Age of AI," who believe this episode will "drive even more innovation on the AI front," referencing the advances in manufacturing and air travel that followed World War II.

Some of those advancements were already underway. Last fall, for instance, Facebook detailed a new capability it's deploying, known as Whole Post Integrity Embeddings, which allows its AI systems to simultaneously analyze the entirety of a post — the images and the text — for signs of a violation. This can be especially helpful for posts where context is key, like, say, illegal drug sales. That particular innovation seems to be making a difference: Facebook reported deleting about 4.4 million pieces of drug sale content in the third quarter of 2019, 97.6% of which was proactively detected. That's compared with just 841,000 pieces of the same type of content in the first quarter of that year, of which 84% was flagged by automation.
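The idea behind whole-post analysis is to fuse image and text signals into one representation before classifying, rather than judging each modality alone. A minimal sketch in PyTorch, assuming precomputed embeddings from off-the-shelf encoders; the class name and dimensions are hypothetical, not Facebook's production architecture:

```python
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    """Toy multimodal classifier: concatenates image and text embeddings."""

    def __init__(self, image_dim=512, text_dim=768, hidden_dim=256, num_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),  # e.g., violating vs. benign
        )

    def forward(self, image_emb, text_emb):
        # Concatenate so the classifier judges the post as a whole;
        # a benign photo plus certain text can jointly signal a drug sale.
        joint = torch.cat([image_emb, text_emb], dim=-1)
        return self.fusion(joint)

# Example: one post with precomputed 512-d image and 768-d text embeddings.
model = WholePostClassifier()
logits = model(torch.randn(1, 512), torch.randn(1, 768))
```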

These developments will be critical to Facebook and other tech giants as they try to combat a globe's worth of problematic content without their usual first line of defense. Yet, there are some categories of forbidden content that are simply too risky to leave up to technology, meaning all of these companies will continue to have at least some human beings screening the most harmful posts.

Twitter said it's working on providing wellness resources to moderators to continue to allow them to work from home. Going forward, Facebook will task some of its full-time employees with moderating the most imminently dangerous content, like posts related to suicide and self-injury, while it doesn't have its army of contractors on hand. "We basically shifted our work, so that it's more full-time work force, and we're surging some of that, which inherently is going to create a trade-off against some other types of content that may not have as imminent physical risks for people," Zuckerberg said.

That may mean that some of Facebook's full-time employees will continue to work on these issues in-person, Zuckerberg said, "in the same way that first responders and health organizations or the police need to act to work on different threats." Facebook says that these full-time employees will all have the option to work from home, and those who do will have weekly one-on-one check-ins with licensed therapists. They'll also have weekly virtual group therapy sessions.




Even as tech companies try to ensure the safety of their workers, they need to balance the safety of their billions of users, too. And machine learning alone isn't yet up to that task.
