How Russia’s troll army spread on YouTube and Instagram

A new study shows how a pro-Kremlin group used Telegram to coordinate talking points for real Russians, helping propaganda about the war in Ukraine spread on Instagram and YouTube.



Photo: Fabian Sommer/picture alliance via Getty Images

A self-described group of Russian “cyber soldiers” has embraced new tactics on the information front of the country’s war in Ukraine, according to research viewed by Protocol, and the approach is helping pro-Moscow influence operations get around the policies of social platforms such as Instagram.

The prominent Moscow-allied group, known as Cyber Front Z, is increasingly organizing on Telegram, where it tells supporters how to harass pro-Ukraine voices and disseminate pro-Russia talking points about the conflict on major social media sites. That’s according to a report from researchers at Reset, a group focused on studies that examine “the interplay between democracy and technology.”

The research suggests that by using Telegram as a staging ground, the Russian group can instruct “forces” made up of real people while sidestepping much of the monitoring, and the “coordinated inauthentic behavior” policies, of bigger services like Instagram, YouTube and Facebook, where the posts and comments ultimately end up.

The findings raise questions about whether the platforms can keep up with Vladimir Putin’s administration and its evolving attempts to support its war in Ukraine. And given Russia’s habit of going global with information warfare tactics it first tries out in Eastern Europe, the report also suggests the same techniques could become a feature of potential Russian efforts to sway the upcoming midterm election in the U.S.


Previously, Russia fought in cyberspace largely through government-allied trolls posing as everyday citizens of target countries and spam bots posting the same message over and over again — tactics that prompted major platforms to crack down on what’s often called coordinated inauthentic behavior. Reset’s research revealed the pro-Russian effort has pivoted instead to mobilizing and guiding real people to post their own versions of messages revolving around the same coordinated theme — a form of influence that has proven potent online, from Gamergate to the Big Lie.

The conclusions from Reset build on reports of everyday Russians turning to Telegram — an encrypted messaging app founded in Russia that has developed a following among far-right groups and purveyors of disinformation — after more mainstream platforms shut down in the country, by choice or by force, in response to the war. The app also functions as a digital battlefront where those loyal to Putin attempt to spin the conflict, and the Kremlin has increasingly turned to real people to post on TikTok, reportedly paying existing influencers to talk up the war.

The researchers from Reset focused on the Telegram channel of the Kremlin-allied group Cyber Front Z — a variation on a troll factory that the U.K. government said “has suspected links to … the founder of the most infamous and wide-ranging bot-farm the Internet Research Agency.” The study examined Cyber Front Z’s activities during roughly the first two months of Russia’s invasion, cataloging more than 1,100 posts, most containing instructions to followers. The researchers then traced the resulting posts and comments back to major platforms.

Cyber Front Z first posted on March 12 with a call for Russia’s supporters to help, according to the study. The channel has since grown to more than 100,000 followers, although its most active participants likely number only around 4,000.

Many of the group’s posts encourage followers to disseminate patriotic images to boost the Russian war, according to Reset. The group also routinely instructs users to target “traitors” — Russians deemed insufficiently enthusiastic about the war, as well as international pro-Ukraine voices — with harassing comments, reports, downvotes and more. Such posts include explicit “links to the target’s social media platforms, and usually a clear instruction of which actions should be performed,” according to Reset.

At times, the study found, Cyber Front Z communications go for subtle menace, inviting supporters to “visit” targets or “explain” things to them. The group also employs pseudo-military terminology, referring to mobilizing and conducting “blitzkrieg,” or else suggesting followers “spam” or “drown” particular people or posts.

The study found clear instances when Cyber Front Z instructions immediately preceded a spike in activity on the major social media platforms, including comments in Russian on content where the discussion had previously been mainly in other languages. The research also suggested the Russian effort was focused primarily on Instagram, followed by YouTube — precisely those services that prior studies have blasted for their relative lack of enforcement aimed at curbing bot-driven, pro-Moscow disinformation and harassment.
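A pattern like the one described above — a burst of comments immediately after a channel instruction — can be checked with a simple before/after volume comparison. The sketch below is purely illustrative; the window size, threshold, and timestamps are hypothetical assumptions, not Reset’s actual methodology.

```python
# Illustrative sketch: testing for a post-instruction activity spike by
# comparing comment volume before and after a channel post. The window,
# ratio threshold, and sample timestamps are all hypothetical.
from datetime import datetime, timedelta

def spike_after_instruction(comment_times, instruction_time,
                            window=timedelta(hours=2), ratio=3.0):
    """Return True if comment volume in the window after an instruction is
    at least `ratio` times the volume in the equal window before it."""
    before = sum(1 for t in comment_times
                 if instruction_time - window <= t < instruction_time)
    after = sum(1 for t in comment_times
                if instruction_time <= t < instruction_time + window)
    if before == 0:
        # No baseline activity: require a minimal burst instead of
        # dividing by zero.
        return after >= 5
    return after / before >= ratio

# A quiet thread that suddenly fills up after a (hypothetical) channel post.
t0 = datetime(2022, 3, 20, 12, 0)
quiet = [t0 - timedelta(minutes=90)]
burst = [t0 + timedelta(minutes=m) for m in range(0, 60, 10)]
print(spike_after_instruction(quiet + burst, t0))  # -> True
```

Real detection would also need to account for language shifts, like the Russian-language comments appearing on previously non-Russian threads that the study noted.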


A recent report from the NATO Strategic Communications Centre of Excellence, a NATO-accredited international group that educates military and government personnel, found that Instagram was the cheapest major service for bots to manipulate, with comparatively poor removal of abusive content, even as Facebook, also owned by Meta, scored better among social media services. The group also found that YouTube has improved somewhat at removing bot-posted content, but is being outpaced by the increasing speed of manipulation.

In response to the Ukraine war, social media platforms have generally doubled down on their policies against coordinated inauthentic behavior and, occasionally, coordinated mass bullying. Meta said during the early days of the war that it had taken down “a relatively small network” targeting Ukraine whose members coordinated off its platforms, including on Telegram. Cyber Front Z’s strategy, by contrast, involves real humans posting distinct messages, with the coordination happening only on Telegram, out of view of the social media companies themselves.

Spokesperson Ivy Choi told Protocol YouTube is “aware of the group Cyber Front Z,” adding, “since the war in Ukraine began, our Threat Analysis Group (TAG) has worked to quickly identify and terminate a number of YouTube channels as part of our investigations into coordinated influence operations linked to Russia.” YouTube has removed more than 9,000 channels and more than 70,000 videos related to the war for violating its policies, Choi said.

Reset suggested this practice creates a need for a “content-based approach” to finding users who post at the behest of Cyber Front Z, proposing that certain keywords could identify them. Focusing on content rather than behavior, however, risks sweeping in everyday users who post without instructions from any Telegram channel.
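A minimal sketch of such a keyword-based, content-level filter makes both its mechanics and its weakness concrete. The watchlist terms, sample comments, and usernames below are hypothetical, invented only for illustration.

```python
# Illustrative sketch of a naive "content-based" filter of the kind Reset
# describes. The watchlist keywords and sample comments are hypothetical.
from dataclasses import dataclass

# Hypothetical talking-point keywords a moderation team might track.
WATCHLIST = {"denazification", "biolabs", "special operation"}

@dataclass
class Comment:
    author: str
    text: str

def flag_comments(comments, watchlist=WATCHLIST):
    """Return comments containing any watchlist keyword (case-insensitive).

    This illustrates the caveat above: an ordinary user who happens to use
    one of these words is flagged just the same as a coordinated poster.
    """
    flagged = []
    for c in comments:
        lowered = c.text.lower()
        if any(kw.lower() in lowered for kw in watchlist):
            flagged.append(c)
    return flagged

sample = [
    Comment("user_a", "Why is no one covering the biolabs story?"),
    Comment("user_b", "Lovely photo of Kyiv in spring."),
]
print([c.author for c in flag_comments(sample)])  # -> ['user_a']
```

The false-positive problem is built in: keyword matching cannot distinguish a coordinated poster from a journalist, researcher, or critic quoting the same talking point.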

The question of the platforms’ ability and willingness to tackle Russian information operations on social media comes as the U.S. midterm election season is getting into full swing. The topic of Russia’s online activities came under intense scrutiny in the U.S. following revelations of Moscow’s attempts to boost Donald Trump’s presidential campaign in 2016. To try to sow division and affect turnout among particular groups, the effort used bots as well as accounts and groups that purported to be Americans. The tactics relied on coordinated inauthentic activity that Russia had previously used in other countries in Europe, including Ukraine.

Yet the Russian endeavors continued, with similar use of fake personas in 2020, although the work of genuine U.S. groups and politicians to spread misinformation, particularly about the integrity of the vote, ended up being the focus of many of the major platforms’ revised policies. If current cyber techniques being used in the Ukraine war prove successful and Russia does indeed aim operations at the 2022 elections here in the U.S., however, its troll army could continue to boost propaganda and harassment campaigns on the major platforms by coordinating them off-platform.

Reset, which includes major scholars on its advisory board and is overseen day to day by the former head of the technology policy advisory group to Hillary Clinton’s 2016 presidential campaign, provided its findings to Protocol but is not otherwise publicizing them.

