How Russia’s troll army spread on YouTube and Instagram

A new study shows how a pro-Kremlin group used Telegram to coordinate talking points for real Russians, helping propaganda about the war in Ukraine spread on Instagram and YouTube.

Photo: Fabian Sommer/picture alliance via Getty Images

A self-described group of Russian “cyber soldiers” have embraced new tactics in the information front of the country’s war in Ukraine, according to research viewed by Protocol, and the approach is helping pro-Moscow influence operations get around the policies of social platforms such as Instagram.

The prominent Moscow-allied group, known as Cyber Front Z, is increasingly organizing on Telegram, where it tells supporters how to deploy harassment of pro-Ukraine voices and disseminate pro-Russia talking points about the conflict on major social media sites. That’s according to a report from researchers at Reset, which focuses on studies that examine “the interplay between democracy and technology.”

The research suggests that by using Telegram as a staging ground, the Russian group can instruct “forces” made up of real people while avoiding much of the monitoring and “coordinated inauthentic behavior” policies of bigger services like Instagram, YouTube and Facebook, where the posts and comments ultimately ended up.

The findings raise questions about whether the platforms can keep up with Vladimir Putin’s administration and its evolving attempts to support its war in Ukraine. And given Russia’s habit of going global with information warfare tactics it first tries out in Eastern Europe, the report also suggests the same techniques could become a feature of potential Russian efforts to sway the upcoming midterm election in the U.S.

Previously, Russia had largely fought in cyberspace through government-allied trolls posing as everyday citizens of target countries, as well as spam bots posting the same message over and over — both tactics that prompted major platforms to crack down on what’s often called coordinated inauthentic behavior. Reset’s research revealed the pro-Russian effort has instead pivoted to mobilizing and guiding real people to post their own versions of messages revolving around the same coordinated theme — a form of influence that has proven potent online from Gamergate to the Big Lie.

The conclusions from Reset build on reports of everyday Russians turning to Telegram — an encrypted app founded in Russia that has developed a following among far-right groups and purveyors of disinformation — after more mainstream platforms either pulled out of the country or were blocked there in response to the war. But the app also functions as a digital battlefront where those loyal to Putin attempt to spin the conflict, and the Kremlin has increasingly turned to real people to post on TikTok, reportedly paying existing influencers to talk up the war.

The researchers from Reset focused on the Telegram channel for Kremlin-allied group Cyber Front Z — a variation on a troll factory that the U.K. government said “has suspected links to … the founder of the most infamous and wide-ranging bot-farm the Internet Research Agency.” The study looked at Cyber Front Z’s activities during roughly the opening two months of Russia’s invasion and found more than 1,100 posts, most containing instructions to followers. The researchers then traced the resulting posts and comments back to major platforms.

Cyber Front Z first posted on March 12 with a call for supporters of Russia to help, according to the study. The channel has since grown to more than 100,000 followers, although its most active participants likely number only around 4,000.

Many of the group’s posts encourage followers to disseminate patriotic images to boost the Russian war effort, according to Reset. It also routinely instructs users to target “traitors” — Russians deemed insufficiently enthusiastic about the war, as well as international pro-Ukraine voices — with harassing comments, reports, downvotes and more. Such posts include explicit “links to the target’s social media platforms, and usually a clear instruction of which actions should be performed,” the researchers wrote.

At times, the study found, Cyber Front Z communications go for subtle menace, inviting supporters to “visit” targets or “explain” things to them. The group also employs pseudo-military terminology, referring to mobilizing and conducting “blitzkrieg,” or else suggesting followers “spam” or “drown” particular people or posts.

The study found clear instances when Cyber Front Z instructions immediately preceded a spike in activity on the major social media platforms, including comments in Russian on content where the discussion had previously been mainly in other languages. The research also suggested the Russian effort was focused primarily on Instagram, followed by YouTube — precisely those services that prior studies have blasted for their relative lack of enforcement aimed at curbing bot-driven, pro-Moscow disinformation and harassment.
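To illustrate the kind of timing analysis behind that finding, here is a minimal sketch. All data, the six-hour window, and the function names are hypothetical and not drawn from Reset’s actual methodology; the idea is simply to compare comment volume just after a Telegram instruction post with the equivalent window just before it.

```python
from datetime import datetime, timedelta

def comments_in_window(start, comment_times, window_hours=6):
    """Count platform comments posted within `window_hours` after `start` --
    a crude proxy for a burst of activity."""
    end = start + timedelta(hours=window_hours)
    return sum(1 for t in comment_times if start <= t <= end)

def spike_ratio(instruction_time, comment_times, window_hours=6):
    """Compare activity just after an instruction post with the
    same-length window just before it; a high ratio is consistent
    with (but does not prove) a coordinated response."""
    before = comments_in_window(
        instruction_time - timedelta(hours=window_hours),
        comment_times, window_hours)
    after = comments_in_window(instruction_time, comment_times, window_hours)
    return after / max(before, 1)  # avoid division by zero

# Hypothetical example: one instruction at noon, three comments nearby.
instruction = datetime(2022, 4, 1, 12, 0)
comments = [
    datetime(2022, 4, 1, 11, 0),   # before the instruction
    datetime(2022, 4, 1, 12, 30),  # after
    datetime(2022, 4, 1, 13, 0),   # after
]
print(spike_ratio(instruction, comments))  # 2.0: twice the prior activity
```

A real analysis would also need language detection (to catch the shift to Russian-language comments the study describes) and a baseline over many days, not a single pair of windows.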

A recent report from the NATO Strategic Communications Centre of Excellence — a NATO-accredited international group that trains military and government personnel — found, for instance, that Instagram was the cheapest major service for bots to manipulate, with comparatively poor removal of abuse, even though sister service Facebook scored better among social media platforms. The group also found that YouTube has improved somewhat at removing bot-posted content, but is being outpaced by the increasing speed of manipulation.

In response to the Ukraine war, social media platforms have generally doubled down on their policies against coordinated inauthentic behavior and, occasionally, coordinated mass bullying. Meta said during the early days of the war that it had taken down “a relatively small network” that targeted Ukraine and coordinated off-platform, including on Telegram. But Cyber Front Z’s strategy involves real humans posting distinct messages, with activity that’s coordinated only on Telegram — out of view of the social media companies themselves.

Spokesperson Ivy Choi told Protocol YouTube is “aware of the group Cyber Front Z,” adding, “since the war in Ukraine began, our Threat Analysis Group (TAG) has worked to quickly identify and terminate a number of YouTube channels as part of our investigations into coordinated influence operations linked to Russia.” YouTube has removed more than 9,000 channels and more than 70,000 videos related to the war for violating its policies, Choi said.

Reset suggested this practice creates a need for a “content-based approach” to finding users posting at the behest of Cyber Front Z — for instance, identifying them by certain keywords. Focusing on content rather than behavior, however, might also sweep in everyday users who post similar messages without instructions from a Telegram channel.
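A toy sketch of what such keyword matching might look like — the watchlist below is invented for illustration (real lists would be built from the campaign’s own instructions), and the false-positive problem the article notes is visible in the approach itself:

```python
# Hypothetical watchlist: slogans or hashtags a campaign pushes.
CAMPAIGN_KEYWORDS = {"cyber front z", "#examplecampaigntag"}

def flag_post(text: str) -> bool:
    """Flag a post whose text contains any campaign keyword.

    Content-based matching is deliberately blunt: it will also catch
    ordinary users who quote, report on, or argue against the same
    phrases -- the over-inclusiveness risk described above."""
    lowered = text.lower()
    return any(kw in lowered for kw in CAMPAIGN_KEYWORDS)

print(flag_post("Join Cyber Front Z today!"))      # True
print(flag_post("A journalist covering the war"))  # False
```

This is why platforms have historically favored behavioral signals (shared infrastructure, timing, account networks) over content alone — signals that off-platform coordination on Telegram largely removes from their view.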

The question of the platforms’ ability and willingness to tackle Russian information operations comes as the U.S. midterm election season gets into full swing. Russia’s online activities came under intense scrutiny in the U.S. following revelations of Moscow’s attempts to boost Donald Trump’s presidential campaign in 2016. That effort used bots, as well as accounts and groups purporting to be American, to sow division and affect turnout among particular groups — coordinated inauthentic tactics Russia had previously deployed in other European countries, including Ukraine.

The Russian endeavors continued with similar fake personas in 2020, although the major platforms’ revised policies ended up focusing on genuine U.S. groups and politicians spreading misinformation, particularly about the integrity of the vote. If the techniques now being tested in the Ukraine war prove successful and Russia does aim operations at the 2022 U.S. elections, its troll army could keep boosting propaganda and harassment campaigns on the major platforms by coordinating them off-platform.

Reset, which includes major scholars on its advisory board and is overseen day to day by the former head of the technology policy advisory group to Hillary Clinton’s 2016 presidential campaign, provided its findings to Protocol but is not otherwise publicizing them.

