Policy

Google is launching an experiment to curb anti-Ukraine misinformation in Europe. It just might work.

New research finds that YouTube ads actually can thwart misinformation. Now Google's Jigsaw is preparing to test the theory in Europe.

Refugees from Ukraine at the Polish/Ukrainian border crossing in Medyka, Poland, on April 7, 2022. By then, more than 4.3 million Ukrainians had fled across the border since the Russian invasion began on Feb. 24, according to UNHCR, the U.N. refugee agency.

Photo: Wojtek Radwanski/AFP via Getty Images

Three months after Russia launched its brutal assault on Ukraine, sending millions of refugees over the border into neighboring countries, Yasmin Green took a trip to Poland to study how disinformation about migrants was spreading there.

But instead of finding the kind of simmering resentment she’d seen directed at so many other migrant groups, Green sensed what she called a “real euphoria” among Polish citizens about the humanitarian role they were playing. “The country of 38 million people, Poland, was taking in 4 million refugees from Ukraine with open arms,” Green said. “The whole country had become an NGO.”

She also knew it couldn’t last. As the CEO of Jigsaw, a sort of anti-extremism research group within Google, Green has seen and studied the ways viral lies about Syrian refugees have been weaponized in recent years. “I didn't know what the individual claims were going to be,” Green said, “but seeing Russia use migrant flows to disrupt countries in the context of Syria and Syrian refugees in the past in Europe, it's clear that's going to repeat itself.”

Now Green and her team are trying to test whether they can fend off those hateful ideas using a tool that too often fuels them: YouTube.

Jigsaw CEO Yasmin Green, pictured speaking at the WIRED Business Conference in New York City in 2017. Her team's project examines the efficacy of using video to “inoculate” people against misinformation on social media.

Photo: Brian Ach/Getty Images for Wired

In the coming weeks, Jigsaw and YouTube will blanket Poland, Slovakia and Czechia with a series of video ads designed to help people identify and refute derogatory tropes about migrants. The campaign, which will run for a month across several social media platforms, including YouTube, is expected to garner at least 55 million impressions — roughly equal to the population of those three countries combined.
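The arithmetic behind that comparison checks out, as the quick sketch below shows. The population figures are approximate 2022 estimates, not numbers from the campaign itself:

```python
# Sanity check on the 55 million impressions target, using rough 2022
# population estimates (illustrative figures, not from the article).
poland = 38_000_000
slovakia = 5_400_000
czechia = 10_700_000

total = poland + slovakia + czechia
print(f"{total:,}")  # ~54,100,000 -- roughly the 55M impressions goal
```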

But the videos are more than just a marketing push to burnish YouTube’s reputation. They’re part of a years-long research project at Jigsaw on the efficacy of using video to “inoculate” people against misinformation on social media. The idea is that there will never be enough fact-checkers in the world to correct all of the lies online. But there may be a way to build people’s defenses against those lies. If the group’s latest research paper is to be believed, the experiment in Eastern Europe just might work.

The paper, published Wednesday in the journal Science Advances, was authored by Jigsaw’s head of research Beth Goldberg, as well as a team of researchers at Cambridge University, the University of Bristol and the University of Western Australia. In a series of experiments — one of which was conducted as a live YouTube ad campaign — the researchers tested whether showing people videos about different manipulation techniques might actually help them spot instances of misinformation later on and limit their likelihood of sharing them.

The researchers controlled for different variables — participants’ age, gender, political ideology and even their “bullshit receptivity” — and found that across the board, the group’s videos made people more likely to recognize manipulation attempts after the fact and less likely to share manipulative messages than people who hadn’t seen the videos.

“It worked for conservatives and liberals. It worked for people who are already conspiratorial or not,” said Goldberg. “It even worked for people who are very receptive to bullshit, which was particularly exciting.”
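For a concrete picture of what “controlling for” those variables can look like in practice, here is a minimal sketch of a covariate-adjusted analysis. It uses synthetic data, and the column names, effect sizes and model are illustrative assumptions, not the study's actual variables or methods:

```python
# A hedged sketch of covariate-adjusted analysis: does seeing an inoculation
# video predict correctly flagging a manipulative post, holding age,
# ideology and "bullshit receptivity" fixed? All data here is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),        # 1 = saw an inoculation video
    "age": rng.integers(18, 75, n),
    "ideology": rng.normal(0, 1, n),         # liberal <-> conservative scale
    "bs_receptivity": rng.normal(0, 1, n),   # "bullshit receptivity" score
})

# Simulate an inoculation effect that holds regardless of the covariates.
logit = -0.2 + 0.8 * df["treated"] + 0.05 * df["bs_receptivity"]
df["identified"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A positive, significant coefficient on `treated`, with the demographic
# terms held fixed, mirrors the "works across the board" finding.
model = smf.logit("identified ~ treated + age + ideology + bs_receptivity",
                  data=df).fit(disp=False)
print(model.summary())
```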

The videos don’t refute any specific piece of misinformation, which Goldberg said is a key difference between this technique and the kind of fact-checking that most social platforms currently rely on. “A lot of other approaches to misinformation are both reactive and really localized,” she said. “They're trying to address a very particular claim, or they're trying to address one particular topic, so you lack the ability to scale and to move quickly.”

Instead, the videos in this experiment used apolitical animations and pop culture references from “Star Wars” or “The Simpsons” to explain how common manipulation techniques work. One video describing “emotionally manipulative rhetoric” aimed at luring people to watch something they otherwise would skip opens with stock footage of a little girl sadly hugging her teddy bear. “You might think about skipping this ad — don’t. What happens next will make you tear up,” a somber voice-over says, before flipping the script. “Kidding! You just got tricked.”

The researchers also developed videos on scapegoating, false dichotomies, ad hominem attacks and the use of mutually exclusive arguments. Each one built on decades of research into “inoculation theory,” which suggests people can build up resistance to persuasion attempts in much the same way vaccines provoke an immune response against a virus. To be effective, an inoculation message must first warn that some kind of manipulation is coming, then expose people to the manipulative message, and finally refute it emphatically.

Goldberg was inspired by prior research, which found that inoculation messaging helped build people’s resistance to extremist ideologies in lab experiments. But her team wanted to know whether the interventions would work just as well in video form.

In the lab experiments, paid participants watched the videos, then were asked to decide whether a series of social media posts were either manipulative or neutral. The researchers found that for every video, the experiment group was significantly more likely to correctly identify the posts than a control group. The videos also reduced the participants’ overall likelihood of sharing manipulative posts later.

But the real question was whether the same would be true when the researchers showed these videos not to paid participants diligently watching them in a lab, but to anyone browsing YouTube. For that experiment, the research team ran the videos as ads on YouTube, where they were served to some 5.4 million people in the U.S. — about a million of whom actually watched the ads for at least 30 seconds. A subset of the group that actually watched the videos was then served another ad a day later, this time with a survey question showing them a manipulative message and asking which manipulation technique was being used. Once again, the group that watched the inoculation ads outperformed the control group.
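The paper's exact figures aren't reproduced in the article, but the shape of that comparison is a standard two-proportion test. A minimal sketch, with made-up counts purely for illustration:

```python
# Compare correct-identification rates between the group served an
# inoculation ad and a control group. All counts are hypothetical; only
# the structure of the comparison comes from the article.
from statsmodels.stats.proportion import proportions_ztest

correct = [640, 540]     # correct survey answers: [treatment, control]
answered = [1000, 1000]  # survey respondents per group

stat, pvalue = proportions_ztest(correct, answered, alternative="larger")
print(f"z = {stat:.2f}, p = {pvalue:.4f}")  # small p: treatment beat control
```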

The encouraging results of these studies inspired Jigsaw to work with YouTube to go even bigger with their experiment in Eastern Europe. That ad campaign is expected to generate at least 10 times as many impressions as the initial experiment. “We wanted to reach as broad an audience as possible,” Goldberg said, noting that she and her team will conduct a similar study using survey questions to measure the impact of the videos.

But unlike the videos in the earlier experiments, these ads will more directly address the war in Ukraine and warn people against manipulative messaging they may see about migrants supposedly stealing jobs and other social services. Already, experts say, those narratives are beginning to move from the fringe to the mainstream in Europe. “My sense is that there is a significant amount of disinformation concerning Ukrainian refugees since the war outbreak, but that this is gradually becoming more visible, especially in Eastern Europe, due to the wider context,” said Alberto-Horst Neidhardt, a policy analyst with the European Policy Centre. Back in March, Neidhardt warned of the need to “start preparing today for the lies of tomorrow.”

Ultimately, Green’s goal is to push YouTube and all social platforms into less reactive positions. Over the last five years, tech companies have tortured themselves over the best way to fact-check misinformation, only to realize the limits of that kind of intervention. Not only is it impossible to do at scale, but it risks alienating people who have already digested the message. But the results of these experiments suggest a major new opportunity for tech platforms to move beyond debunking specific lies.

“If we can pan out a little bit from the immediate event — the arresting examples of misinformation that are going viral — we see that there are tropes and narratives that are recurring,” Green said, “and we can build resilience to those.”
