Facebook's decision to send content moderators home in March had devastating consequences for the company's ability to catch and remove posts containing the most harmful content. Now, as some moderators have returned to the office in recent months, removals of that content have rebounded, underlining the importance of having humans in the loop.
That's according to Facebook's third-quarter transparency report, published Thursday. It shows, for instance, that in the third quarter of this year, Instagram removed nearly twice as much child sexual abuse material and nearly five times as much suicide-related content as it did in the second quarter.
This dramatic shift underscores just how crucial this global army of moderators is to the way the world's largest social media platform operates.
"People are an important part of the equation for content enforcement," Guy Rosen, Facebook's vice president of integrity, said on a call with reporters Thursday. "These are incredibly important workers who do an incredibly important part of this job ... The reason we're bringing some workers back into offices is exactly to ensure that we can have that balance of both people and AI working on these areas."
Facebook's report comes just one day after dozens of moderators signed an open letter to Facebook CEO Mark Zuckerberg and others, criticizing recent orders to return to the office despite a surge in COVID-19 cases and demanding that they be made full-time employees. "By outsourcing our jobs, Facebook implies that the 35,000 of us who work in moderation are somehow peripheral to social media," the letter read. "Yet we are so integral to Facebook's viability that we must risk our lives to come into work."
Rosen stressed that the majority of content moderators are still working from home, but said that those who have gone back to the office are doing so in spaces with reduced capacity, physical distancing, mandatory temperature checks and other safety precautions "to ensure that we're providing a safe workspace for them to do this incredibly important work to keep our community safe as well."
The moderators argued that's not enough, and are pushing Facebook to guarantee them things like "real healthcare," hazard pay and the ability to continue working from home if they live with at-risk individuals.
Facebook's executives credited many of the gains made this quarter to the company's investment in automated systems, which now proactively detect 95% of the hate speech Facebook removes before users report it. When Facebook first began reporting this stat in 2017, just 23.6% of hate speech was caught proactively. For the first time, Facebook also reported the prevalence of hate speech on the platform: the percentage of content views in which people actually see hate speech. The company found a prevalence of 0.10% to 0.11%, meaning that for every 10,000 views on Facebook, about 10 or 11 include hate speech.
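For readers who want to sanity-check how that prevalence range translates into the "per 10,000 views" framing, here is a minimal sketch; the 0.10% to 0.11% range is taken directly from the report figures above, and the helper function is purely illustrative:

```python
# Back-of-the-envelope conversion of Facebook's reported hate speech prevalence
# (0.10% to 0.11% of content views) into views per 10,000.

def views_per_10k(prevalence_pct: float) -> float:
    """Convert a prevalence percentage into affected views per 10,000 views."""
    return prevalence_pct / 100 * 10_000

low, high = 0.10, 0.11
print(f"{views_per_10k(low):.0f} to {views_per_10k(high):.0f} of every 10,000 views")
# Prints: 10 to 11 of every 10,000 views
```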
Despite these advances, Facebook's chief technology officer Mike Schroepfer acknowledged that automated filters will never replace the work of human moderators. "I don't see any short-term reduction or long-term reduction in the human involvement in this," he said on the call. "We get faster, more accurate, more powerful and then we can use our amazing staff we have to work on the more nuanced problems we have that really require human review."