For the last six months, Facebook engineers have been seeing intermittent spikes in misinformation and other harmful content on News Feed: posts that the company's algorithms would normally demote were instead boosted by as much as 30%. The cause, according to reporting by The Verge, was a bug that one internal report described as a “massive ranking failure.”
The bug dates back to 2019, but its impact went unnoticed until October 2021. The company said it was resolved on March 11. “We traced the root cause to a software bug and applied needed fixes,” Meta spokesperson Joe Osborne told The Verge.
The bug allowed posts that had been flagged by fact-checkers, along with nudity, violence and content from Russian state media, to slip through the company's usual down-ranking filters, according to an internal report obtained by The Verge.
Meta and other tech giants have leaned on down-ranking as a more palatable approach to content moderation than removing content altogether. Scholars like Stanford's Renée DiResta have also called on tech giants to embrace this approach and realize that "free speech is not the same as free reach."
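To make the mechanism concrete, here is a minimal sketch of how a demotion step in a feed-ranking pipeline might work. This is purely illustrative and is not Meta's actual system; the Post structure, the 0.5 demotion factor and every name below are assumptions, not details from The Verge's reporting.

```python
# Illustrative sketch only, not Meta's code. Assumes a pipeline where each
# post carries a base relevance score and flagged posts are multiplied by a
# demotion factor before the feed is sorted, rather than being removed.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float
    flagged: bool  # e.g., marked by fact-checkers or integrity systems

DEMOTION_FACTOR = 0.5  # hypothetical value: halve the score of flagged posts

def apply_demotion(post: Post) -> float:
    """Down-rank flagged posts instead of deleting them."""
    if post.flagged:
        return post.base_score * DEMOTION_FACTOR
    return post.base_score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort by demoted score, highest first; flagged posts sink in the feed.
    return sorted(posts, key=apply_demotion, reverse=True)
```

Under a design like this, a bug that mis-applied the multiplier (for instance, using a value greater than 1 on flagged posts) would boost that content rather than suppress it, which would be consistent with the roughly 30% boost the internal report described. The reporting does not specify the bug's actual mechanism.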
In this case, those ranking systems appear to have failed. But Osborne told The Verge the bug “has not had any meaningful, long-term impact on our metrics.”
It will be difficult for those outside of Meta to vet those metrics. Meta has blocked new users from accessing CrowdTangle, one of the core tools researchers and journalists have used to track trends in what's popular on Facebook, and has dismantled the team leading it. And while the company does release quarterly reports on the prevalence of certain kinds of policy violations, those reports offer little indication of what's behind the numbers. Even if those reports showed an uptick in, say, violence on Facebook, it would be impossible to know whether that was due to this bug, to Russia's invasion of Ukraine or to some other global atrocity.
In a statement to Protocol, the company said:
"The Verge vastly overstated what this bug was because ultimately it had no meaningful, long-term impact on problematic content. Only a very small number of views of content in Feed were ever impacted because the overwhelming majority of posts in Feed are not eligible to be down-ranked in the first place. After detecting inconsistencies we found the root cause and quickly applied fixes. Even without the fixes, the multitude of other mechanisms we have to keep people from seeing harmful content — including other demotions, fact-checking labels and violating content removals — remained in place.”
But it's still unclear which posts were boosted due to the bug or how many views they received.
This story was updated on March 31 with a statement from Meta.