
How a screenshot started a fight that took over Reddit

Users and mods have always fought on Reddit. But when a group of "PowerMods" was accused of having too much control, the ensuing brawl hit every corner of the platform.


Evan Hamilton, who runs Reddit's community team, said the goal is to "allow people to really build and curate the experience they want to have on the platform, and have some ownership, right?"

Illustration: Reddit

As with anything on Reddit, it's hard to know exactly how it all started. But the fight that has consumed the platform in recent weeks definitely started well before it went viral. The first version I found was from March 16, posted to a subreddit called r/WatchRedditDie (users pronounce subreddit names as "r slash WhateverTheNameIs," and write them with a slash in between). It came from a user named Steve_Cuckman1312.

The post was simple: a screenshot of a table, listing popular subreddits in one column and moderators in another. It was titled "92 of top 500 subreddits are controlled by just 4 people." There were actually five Redditors in the table. The name Siouxsie_siousv2 appeared 14 times; Merari01 20 times; Gallowboob 23 times; Awkwardtheturtle 24 times; and Cyxie a whopping 45 times. The list was at best deeply misleading; those subreddits often have dozens of moderators, and all Steve_Cuckman1312 had done was cherry-pick names. But that fact paled next to the post's ominous subtext: These are the people who run Reddit. And they have way too much power.

Over the next several weeks, the list rocketed around Reddit. It hit other Reddit-hating subreddits (which are surprisingly common), like r/subredditcancer and r/DeclineIntoCensorship. It hit conspiracy-minded ones, like r/conspiracy_commons, r/conspiracies and r/topconspiracy. It went to weird places, like subreddits devoted to Philip DeFranco and Lil Uzi Vert.

The list hit the big time when a Redditor named rootin-tootin_putin posted it to r/ThatsInsane, r/mildlyinfuriating and r/interestingasfuck. "I saw a link to it somewhere," rootin-tootin_putin told me, "which caught my attention due to negative run-ins with mods before." Those three subreddits have almost 9 million subscribers among them. The post promptly went viral — at one point it was among the most popular posts on Reddit.

Rootin-tootin_putin's post was quickly removed, without much explanation, and they got a notice they'd been banned from a subreddit. But rootin-tootin_putin wasn't banned from the places they'd posted. (Yet.) They were banned from r/comedyheaven, a subreddit "which I hadn't posted in or referenced in months." One of the sub's moderators? Cyxie. Soon after, rootin-tootin_putin faced other bans and was eventually suspended from Reddit altogether.

That was May 12, which was approximately when things went haywire. A pattern took hold: The list gets posted and then deleted — sometimes because it doesn't follow subreddit rules, other times because it causes uncivil conversations, or for no stated reason at all — and then gets posted somewhere else. The dispute, both about the post itself and the way the post has been handled all over Reddit, has turned into a brawl between the platform's users and its moderators.

One of the most popular versions of the PowerMods list that's been passed around Reddit in recent weeks. Screenshot: David Pierce

At its core, what's happening on Reddit feels evocative of this moment on the internet — and society — as a whole: a deep mistrust of authority yields a relentless and potentially destabilizing search for the secretly powerful hand keeping people down. In this case, some users say they've identified a cabal of "PowerMods" who control everything that happens on Reddit and manipulate the platform to their advantage. Moderators say they're receiving death threats because of a misleading list and for simply trying to do their part to make Reddit better. When Reddit's corporate team steps in, it only seems to make things worse.

Reddit's approach to content moderation has always been both unusual and central to how it builds community. It gives users the right to set their own rules and the tools to enforce them. This kind of drama is hardly new to the platform, but something about this instance feels different. It certainly did to Cyxie: The massively prolific poster and moderator, who had been on Reddit since 2011 and was helping oversee more than 200 subreddits, abruptly deleted his account in the midst of it all. And more than one person I spoke to believes the ordeal has proven that something about Reddit is fundamentally broken.

The guardians of the homepage

Most social platforms have an established set of rules and a three-pronged approach to enforcing them. There are the automated tools, designed to catch most bad content before anyone sees it. There are the reporting tools, meant to make it easy for users to report rule-breaking. And there are the teams of contractors, reviewing everything and making decisions. They decide what stays, what goes, what gets buried.

Reddit isn't like that. Reddit is less a single platform and more a loose confederation of platforms, each with its own user-created norms. Evan Hamilton, who runs Reddit's community team, described it as similar to the United States. "There are rules that everyone has to abide by," he said, "to ensure safety and consistency." Those are the platform rules — which Reddit does have. Beyond that? Hamilton said Reddit's goal is to "allow people to really build and curate the experience they want to have on the platform, and have some ownership, right?"

Practically every subreddit, once it hits a certain size, develops its own rulebook. No two are alike: You can have a "Game of Thrones" subreddit that doesn't allow memes (serious discussion only) and a competing one where memes flow like Dornish reds. Some are ruthless about formatting and style; others couldn't care less.

The users responsible for enforcing these rules and getting the best out of their subreddit are the moderators, or mods. By default, the creator of a subreddit becomes its moderator, and from there it's easy to add and remove new mods and control their permissions. Moderators can have widely varying capabilities, from total authority over the subreddit to something like a backstage pass to watch others perform. Some subreddits have one or two, others have dozens.

The largest mod team I've seen belongs to r/worldnews, with 103 moderators. That sounds like a lot, except r/worldnews also has 24.1 million subscribers, with tens of thousands online and posting every minute of the day.
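To make that permission model concrete, here's a minimal sketch of how adding and scoping mods looks through Reddit's public API, via the third-party PRAW library. The subreddit and usernames are hypothetical; this illustrates the model, not any particular mod team's workflow.

```python
# A minimal sketch of Reddit's moderator-permission model, using the
# third-party PRAW library. Subreddit and usernames are hypothetical;
# the authenticated account must itself be a mod with full permissions
# for these calls to succeed.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="top_mod_account",
    password="YOUR_PASSWORD",
    user_agent="mod-permissions-sketch/0.1",
)

sub = reddit.subreddit("example_community")

# Invite a new mod with limited permissions: they can manage posts and
# flair but can't touch subreddit settings or the mod list itself.
sub.moderator.invite("new_mod", permissions=["posts", "flair"])

# Passing permissions=None grants full permissions.
sub.moderator.update("trusted_mod", permissions=None)

# The "backstage pass": an empty permissions list lets a mod see
# mod-only views without being able to act on anything.
sub.moderator.invite("observer_mod", permissions=[])

# Remove a mod entirely.
sub.moderator.remove("former_mod")
```

One wrinkle the sketch can't show: Reddit only lets moderators manage mods added to the list after them, which is why being the "top mod" at the head of the list carries so much weight.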

Everything in moderation, including moderation

Rob Allam, better known as Gallowboob on Reddit, helps oversee a number of popular subreddits; the biggest, r/tifu (Today I Fucked Up), has 15.6 million subscribers and 28 moderators. The first thing you need to understand about moderating, he said, is that nobody does it alone. "If you show me on one sub and there's 50 people on the mod team," he said, "I don't have a say in that sub." He said he's not a "top mod" of any popular subreddit, meaning he can't do much of anything unilaterally.

Allam's been on Reddit since 2014, when he became obsessed with r/photoshopbattles while supposedly at work as a landscape architect. "I'd do it during work, when no one's behind my screen," he said. Pretty quickly, Allam started joining more communities, posting more stuff, and discovered he had a knack for knowing what people might like on Reddit. "My discovery was that, oh shit, you can actually post stuff there and it ripples everywhere," he said. He started seeing things he posted make it into news stories and onto TV shows.

Meanwhile, Reddit started to consume his life. "I was one of the fastest-growing users on the platform," he said. "I was so active." According to one list, Allam has more karma — Reddit's term for upvotes and a general measure of approval on the platform — than any other user. You could call him the most popular person on Reddit.

Even before he started modding, Allam saw firsthand how steeped in suspicion Reddit can be. He'd join subreddits, he said, and moderators would instinctively throw him out: He was posting so much they assumed he was a bot or a corporation masquerading as a single person. After a time, though, he got to know some of the moderators personally, and they brought him on board. "I think some of them offered me a mod position just because I was on the site 24/7," he said. He started in smaller communities, eventually building to bigger and bigger ones. At his peak, Allam guessed, he was moderating about 100 communities.

What does it mean to moderate a community? It depends. Some moderators are active, taking down posts, enforcing the rules, guiding the community. Others are more hands off. "In many cases, these folks who are veteran moderators are brought into moderation teams to provide advice," Reddit's Hamilton said, "and bring their experience to bear." He offered r/coronavirus as an example: Before the pandemic, it was a small subreddit run largely by a group of epidemiologists, but when it exploded in size and activity, they recruited experienced mods to help them cope.

For the most part, Allam said, modding is thankless and often horrific. He said he's talked with suicidal users, woken up to an inbox full of child pornography. And it's all done on a volunteer basis. "Moderators on Facebook are paid, and they have moral support," he told me. "Because you actually develop PTSD by being a janitor online and scraping the shit that no one else has to see." Reddit works with some mental health organizations, he said, but doesn't offer enough resources. He's not always sure why he keeps coming back.

Much of the work of moderating a subreddit doesn't actually happen on Reddit. It happens in email and Discord but mostly in Slack, where the moderators can discuss policies and specific decisions. Sometimes a subreddit will get its own Slack workspace, but more recently mods have been joining a single space for all moderators and creating private channels for each community. In most cases, even the Slack is run by mods. The mods do have frequent contact with Hamilton's staffers at Reddit, who are known as "admins" and function sort of as the grown-ups in a kids' show: They don't show up often, but when they do, you know someone's in trouble.

Pay no attention to the man behind the curtain

Knowing all this, consider the implication of a list that says five moderators essentially control Reddit. These five people are surely running the show in Slack, telling others how to run their communities, making everyone play by their rules and adhere to their values. One not-unpopular theory held that there's no way one person could be this active — some of these mods must be run by corporations or governments. Maybe from Russia or China. "I have no idea what goes on behind the curtains of Reddit, and there's a high probability that I never will," user sqwatish wrote on a post about the list, "however, I can confidently wonder with the information given to me."

In the same thread, a user named notevengonnatryffs neatly summed up a broad feeling on Reddit right now. "People are becoming increasingly wary of this and get massively hyped up by everything that smells like censorship." Actually, Reddit has always been thus: wary of authority, protective of the autonomy of both the platform and its users. Any wizard behind the curtain must be dragged out into the open.

This, maybe more than anything, is what differentiates Reddit from so many other social platforms. All have similar moderation issues — just this week, YouTube was criticized for automatically censoring comments deemed anti-China, as was Twitter for leaving up tweets by President Trump about Joe Scarborough that seemingly violated its rules. But in most cases, there's no one to rage at other than a faceless corporation or an unreachable CEO. On Reddit, the boogeyman has a name and an inbox.

Users, mods and admins have been arguing since Reddit's earliest days, of course. As Gallowboob, Allam has been accused of deleting and reposting other users' content, just for the karma. (He denies doing so.) Once, Allam said, he posted an animation of a new Netflix logo he thought was cool, and instantly the community assumed he was a paid shill for the company. The response got so bad that Allam emailed Netflix, begging the company to acknowledge he hadn't been paid. There have been cases in which prominent users were being compensated, of course — and Reddit never forgets.

Getting the banned back together

The PowerMods list first crossed Allam's radar when a longtime Gallowboob troll posted it. "It's just anger and spite and venom," Allam said, "and he's projecting everywhere, and he was fixated on me." It kept getting posted and deleted, posted and deleted. Then it began to show up on other subreddits, Discords and 4chan boards, where users would encourage others to post it themselves. They figured eventually moderators wouldn't be able to keep up. And with every deleted post or suspended user, the vitriol got worse.

Then, Allam said, his friend Cyxie made a crucial mistake. (Cyxie didn't respond to multiple requests for comment.) He used one of Reddit's automatic moderation bots, a tool designed to combat spam — people selling T-shirts or posting the same link over and over — that can be used to quickly ban someone from all of a mod's communities. Cyxie happened to moderate a lot of communities. So he mass-banned rootin-tootin_putin, who had posted the list in the subreddits that made it go truly viral. Which only made things worse.
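The reporting here doesn't describe the bot's internals, but mechanically a mass ban like this is simple. Below is a minimal sketch of the general technique, assuming the same hypothetical PRAW setup as before: walk every community the authenticated account moderates and issue a ban in each. It shows how such a tool could work, not how Cyxie's actually did.

```python
# A minimal sketch of a mass-ban tool like the anti-spam bot described
# above, using the third-party PRAW library. Illustrative only; this is
# not Cyxie's actual bot.
import praw

reddit = praw.Reddit("modbot")  # hypothetical credentials in praw.ini

TARGET = "some_username"  # hypothetical target account

# Walk every subreddit the authenticated account moderates and ban the
# target in each one. Bans require the "access" mod permission, so subs
# where the account lacks it are simply skipped.
for subreddit in reddit.user.me().moderated():
    try:
        subreddit.banned.add(
            TARGET,
            ban_reason="spam",
            note="issued by mass-ban sketch",
        )
        print(f"banned u/{TARGET} from r/{subreddit.display_name}")
    except Exception as exc:
        print(f"skipped r/{subreddit.display_name}: {exc}")
```

A loop like that is how one tool and one click can lock a user out of dozens of communities at once — including ones, like r/comedyheaven, where they'd done nothing at all.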

"Every post was another 10 or so subs I was banned from," rootin-tootin_putin said, "every ban a direct violation of Reddit's moderator guidelines. I believe it was this ardent rule-breaking, coupled with Reddit's ignorance of it, which drew people to my cause, right up to my baseless suspension."

For a while, Reddit's community team didn't think much of the drama. "Criticism of Reddit is perfectly fine," Hamilton said. "We're happy to have those conversations and let people have a space to talk about them." Things hit a breaking point, though, when a number of the so-called PowerMods started receiving death threats. Mods were sending new posts containing the list — and the harassment the posts were causing — to admins in huge volume. That's apparently what led Cyxie to delete his account entirely.

Eventually, a Reddit admin named Sodypop weighed in on the PowerMods issue. Screenshot: David Pierce

On May 15, Reddit's admins removed versions of the list (though nowhere near all of them), and sodypop, a Reddit employee, explained the interventions in r/therewasanattempt. "Regardless of how you feel about certain people on Reddit," sodypop wrote, "it is 100% against our policies to threaten them. We expect our users and moderators to abide by our site-wide rules and will continue to take action against anyone breaking these rules."

It wasn't enough for Allam. "Didn't change a single thing," he said. "It maybe added oil to the fire, more than anything." He said the admins will just sweep it under the rug, say it was a learning experience, and forget about it. Meanwhile, the post continues to spread, its implications more powerful every time it gets removed.

While Allam didn't delete his account, he did take an extended break from Reddit. He's posted only once in the last three weeks, a cute cartoon with the title "Hardcore mental health check for all." He's commenting and moderating, but at nothing like his normal volume. Even so, after it all, he's still on Reddit — something about the platform, and the drama, is irresistible.

And he's trying something interesting: Every time he's tasked with deciding whether to take down a post, Allam polls the subreddit itself. Upvote if you want the post to stay, downvote if you want it gone. In a new way, Reddit is being allowed to moderate itself. Allam isn't confident this latest experiment in gatekeeping will work, but he's giving users what they always said they wanted. Now they'll see what that looks like.
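The mechanics of that poll are easy to sketch. The version below, assuming the same hypothetical PRAW setup as the earlier examples, pins a vote comment on a contested post and lets its score decide; Allam's actual process is manual, not scripted.

```python
# A minimal sketch of poll-driven moderation as described above: pin a
# vote comment on a contested post and let its score decide the outcome.
# Allam's actual process is manual; this just shows the mechanics.
import time

import praw

reddit = praw.Reddit("modbot")  # hypothetical credentials in praw.ini

def poll_removal(submission_id: str, wait_seconds: int = 24 * 3600) -> None:
    submission = reddit.submission(submission_id)

    # Pin a distinguished comment asking the community to vote.
    ballot = submission.reply(
        "Mod poll: upvote this comment to keep the post, downvote to remove it."
    )
    ballot.mod.distinguish(sticky=True)

    time.sleep(wait_seconds)  # let votes accumulate

    ballot.refresh()
    if ballot.score < 1:  # comments start at 1 (the poster's own vote)
        submission.mod.remove()  # the community voted it out
    else:
        submission.mod.approve()  # the community voted to keep it

poll_removal("abc123")  # hypothetical submission ID
```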
