Three months after Russia launched its brutal assault on Ukraine, sending millions of refugees over the border into neighboring countries, Yasmin Green took a trip to Poland to study how disinformation about migrants was spreading there.
But instead of finding the kind of simmering resentment she’d seen directed at so many other migrant groups, Green sensed what she called a “real euphoria” among Polish citizens about the humanitarian role they were playing. “The country of 38 million people, Poland, was taking in 4 million refugees from Ukraine with open arms,” Green said. “The whole country had become an NGO.”
She also knew it couldn’t last. As the CEO of Jigsaw, a sort of anti-extremism research group within Google, Green has seen and studied the ways viral lies about Syrian refugees have been weaponized in recent years. “I didn’t know what the individual claims were going to be,” Green said, “but seeing Russia use migrant flows to disrupt countries in the context of Syria and Syrian refugees in the past in Europe, it’s clear that’s going to repeat itself.”
Now Green and her team are trying to test whether they can fend off those hateful ideas using a tool that too often fuels them: YouTube.
In the coming weeks, Jigsaw and YouTube will blanket Poland, Slovakia and Czechia with a series of video ads designed to help people identify and refute derogatory tropes about migrants. The campaign, which will run for a month across several social media platforms, including YouTube, is expected to garner at least 55 million impressions — roughly equal to the population of those three countries combined.
But the videos are more than just a marketing push to burnish YouTube’s reputation. They’re part of a years-long research project at Jigsaw on the efficacy of using video to “inoculate” people against misinformation on social media. The idea is that there will never be enough fact-checkers in the world to correct all of the lies online. But there may be a way to build people’s defenses against those lies. If the group’s latest research paper is to be believed, the experiment in Eastern Europe just might work.
The paper, published Wednesday in the journal Science Advances, was authored by Jigsaw’s head of research Beth Goldberg, as well as a team of researchers at Cambridge University, the University of Bristol and the University of Western Australia. In a series of experiments — one of which was conducted as a live YouTube ad campaign — the researchers tested whether showing people videos about different manipulation techniques might actually help them spot instances of misinformation later on and limit their likelihood of sharing them.
The researchers controlled for different variables — participants’ age, gender, political ideology and even their “bullshit receptivity” — and found that across the board, the group’s videos made people more likely to recognize manipulation attempts after the fact and less likely to share manipulative messages than people who hadn’t seen the videos.
“It worked for conservatives and liberals. It worked for people who are already conspiratorial or not,” said Goldberg. “It even worked for people who are very receptive to bullshit, which was particularly exciting.”
The videos don’t refute any specific piece of misinformation, which Goldberg said is a key difference between this technique and the kind of fact-checking that most social platforms currently rely on. “A lot of other approaches to misinformation are both reactive and really localized,” she said. “They're trying to address a very particular claim, or they're trying to address one particular topic, so you lack the ability to scale and to move quickly.”
Instead, the videos in this experiment used apolitical animations and pop culture references from “Star Wars” or “The Simpsons” to explain how common manipulation techniques work. One video describing “emotionally manipulative rhetoric” aimed at luring people to watch something they otherwise would skip opens with stock footage of a little girl sadly hugging her teddy bear. “You might think about skipping this ad — don’t. What happens next will make you tear up,” a somber voice-over says, before flipping the script. “Kidding! You just got tricked.”
The researchers also developed videos on scapegoating, false dichotomies, ad hominem attacks and the use of mutually exclusive arguments. Each one built on decades of research into “inoculation theory,” which suggests people can build up resistance to persuasion attempts much as vaccines prompt an immune response against a virus. To be effective, the message must first warn that some kind of manipulation is coming, then expose the viewer to a weakened dose of the manipulative message and emphatically refute it.
Goldberg was inspired by prior research, which found that inoculation messaging helped build people’s resistance to extremist ideologies in lab experiments. But her team wanted to know whether the interventions would work just as well in video form.
In the lab experiments, paid participants watched the videos, then were asked to decide whether a series of social media posts were either manipulative or neutral. The researchers found that for every video, the experiment group was significantly more likely to correctly identify the posts than a control group. The videos also reduced the participants’ overall likelihood of sharing manipulative posts later.
But the real question was whether the same would be true when the researchers showed these videos not to paid participants diligently watching them in a lab, but to anyone browsing YouTube. For that experiment, the research team ran the videos as ads on YouTube, which were served to some 5.4 million people in the U.S. — about a million of whom watched for at least 30 seconds. A subset of those viewers was then served another ad a day later, this time with a survey question: they were shown a manipulative message and asked which manipulation technique it used. Once again, the group that had watched the inoculation ads outperformed the control group.
The encouraging results of these studies inspired Jigsaw to work with YouTube to go even bigger with their experiment in Eastern Europe. That ad campaign is expected to generate at least 10 times as many impressions as the initial experiment. “We wanted to reach as broad an audience as possible,” Goldberg said, noting that she and her team will conduct a similar study using survey questions to measure the impact of the videos.
But unlike the videos in the earlier experiments, these ads will more directly address the war in Ukraine and warn people against manipulative messaging they may see about migrants supposedly stealing jobs and draining social services. Already, experts say, those narratives are beginning to move from the fringe to the mainstream in Europe. “My sense is that there is a significant amount of disinformation concerning Ukrainian refugees since the war outbreak, but that this is gradually becoming more visible, especially in Eastern Europe, due to the wider context,” said Alberto-Horst Neidhardt, a policy analyst with the European Policy Centre. Back in March, Neidhardt warned of the need to “start preparing today for the lies of tomorrow.”
Ultimately, Green’s goal is to push YouTube and all social platforms into less reactive positions. Over the last five years, tech companies have tortured themselves over the best way to fact-check misinformation, only to realize the limits of that kind of intervention. Not only is it impossible to do at scale, but it risks alienating people who have already digested the message. But the results of these experiments suggest a major new opportunity for tech platforms to move beyond debunking specific lies.
“If we can pan out a little bit from the immediate event — the arresting examples of misinformation that are going viral — we see that there are tropes and narratives that are recurring,” Green said, “and we can build resilience to those.”