Facebook will now allow group administrators to automatically block posts flagged as false by third-party fact-checkers before they're shared to the group. Administrators can also mute or suspend repeat offenders from posting in group chats and automatically approve or deny member requests based on predesignated criteria.
Today's new features are rolling out as Meta continues to prioritize content moderation. In a welcome move, and one that happened later than critics say it should have, Facebook started demoting Russian disinformation about the war in Ukraine last month. This, in turn, led Russia to block access to the platform.
Groups have perpetuated many of the misinformation movements on the platform. Parenting and natural medicine groups have at times become hubs of vaccine misinformation, for example, while much of the momentum behind the Jan. 6 insurrection came from “Stop the Steal” groups. Facebook has publicly attempted to slow the spread of misinformation in groups with various moderator and administrator tools since last year.
Zuckerberg has said he doesn't want to be the "arbiter of truth," but allowing moderators to block content flagged as false by third-party fact-checkers seems like a hands-off workaround. This might allow Facebook to avoid messy situations in which misinformation foments violence in private areas of the platform. The question, of course, is why these features took so long to roll out, given that content shared in Facebook groups has been a known issue since 2018, when Zuckerberg announced that he intended to help people build "meaningful relationships" on the platform.
Now, Facebook is putting (at least, marginally) more moderation power in the hands of individual users.