Policy

Are algorithms to blame for extremism? Yes. But so are we.

In his new book, Duke Polarization Lab director Chris Bail explains why pointing the finger at Big Tech is a little too easy.

Rioters scale the U.S. Capitol walls during the insurrection.
Photo: Blink O'faneye/Flickr

At a Senate hearing on social media algorithms Tuesday, lawmakers on both sides of the aisle sat rapt as former Googler Tristan Harris explained why Facebook, Twitter and YouTube are "attention vampires" steering people into "different rabbit hole[s] of reality" in what he described as a "psychological deranging process."

A little purple, perhaps, but the committee ate it up. Because at a time when Democrats and Republicans can't agree on much, they share a common view on this: Social media platforms and their shadowy, unexplainable algorithms are driving Americans to extremes.

For Chris Bail, director of Duke University's Polarization Lab, that explanation is too easy and ignores all of the deeply human reasons why people behave the way they do online.

In his new book, "Breaking the Social Media Prism," Bail argues that the primary failing of social media is not the way it radicalizes or confines people to echo chambers, but the way it distorts political discourse as it actually exists, rewarding status-seeking extremists, muting moderate voices and giving us all the impression that people on the other side of the aisle are more intolerable, and more different, than they really are.

Through a series of experiments at the Lab, which he outlines in the book, Bail has studied what actually works to make people behave more civilly online — and just as importantly, what does not. Bail spoke with Protocol about those experiments, what people misunderstand about political polarization and how tech giants can and should evolve.

This interview has been edited and condensed from a longer virtual conversation hosted by the Mechanics' Institute in San Francisco.

What do you think are some of the biggest misconceptions people have about the way social media contributes to polarization?

A good place to start is the idea of the echo chamber. I think all of us have experienced how social media can allow us to surround ourselves with like-minded people and insulate ourselves from people who don't share our views. People are concerned that algorithms may reinforce that tendency, creating filter bubbles.

Four years ago, right after the election of Trump and right after the Brexit referendum in the U.K., these were shocking developments to a lot of people. I thought the echo chamber explained this: As someone who's liberal, whose social media feed leans left, I didn't see that Trump was generating excitement. So in 2017 we had the idea to try to break people out of their echo chambers and see what would happen. If we did that, would they become more moderate?

So let's talk about this experiment. You paid people to essentially follow bots that were on the opposite side of the political spectrum, after you had already gotten some sense of where their baseline was. And you wanted to study how following those bots for a month would influence them. So what were you expecting to find from that experiment?

One idea going into this was if you break people out of their echo chambers, they should be able to understand each other and empathize with each other. The key question, though, of course, is: Is social media the right place for that to happen? A rival idea would be that, actually, this might make things worse.

So when we went to do the experiment, we surveyed about 1,220 Republicans and Democrats. This was all on Twitter, all in 2017. And we asked them a bunch of questions about things like climate change, racial discrimination, government regulation, the economy, all sorts of stuff. Then a week later, we invited half of them to follow these bots that we created. We told people: Hey, we're gonna pay you up to $26 if you can correctly answer questions about an automated Twitter account that we'd like you to follow. They didn't know that the bots were going to tweet about politics; they were only told that the accounts would tweet 24 times a day.

At the end of the month, we sent them the same survey, and we looked at how their views changed. We really wanted to see that people would become more moderate, because that would dovetail with the story of the echo chamber, right? Unfortunately, nobody in our study became more moderate. And most people became either a little bit more polarized, or a lot more polarized, especially Republicans.

What do you think explains that phenomenon?

We want to think about social media as this place where we go to get information. But when you actually take the pulse of social media for a minute, you realize that it's really not about that at all. Most people use it for a more basic human purpose, which is to figure out our identities. Social scientists have known for a long time that every day, each one of us puts on a different presentation of ourselves. Neuroscientists call this the social brain hypothesis: We're constantly surveying our environment for cues about what's working with other people.


Chris Bail is director of Duke's Polarization Lab. Photo: Alex Boerner


We did this long before social media. But I think the really interesting question is: How does social media change this all-too-human thing that we all do? I think it happens in two ways. The first is that we have unprecedented flexibility to present different versions of ourselves. The second is that we have all these new tools to monitor what other people are thinking about us: things like follower counts and "like" buttons. They distort the social landscape. This strange feedback loop can fuel status-seeking extremists, but also make moderates just totally disengage.

Those are the twin problems that I think really explain why taking someone out of their echo chamber doesn't make them more moderate. It just pulls them into this identity war that we're all fighting constantly on social media. We're trying to figure out ways to make our side look good and the other side look bad. Exposing yourself more to the other side only brings you further into the war.

One of the stories you tell is about a woman named Patty, whom you describe as basically an unenthusiastic Democrat who went into the experiment without very forceful viewpoints. But after the month of following the bots, she became increasingly defensive and more likely to intervene. The hypothesis you offer is that the more she saw her side attacked, even though she wasn't necessarily a strong defender of it, the more she went on the attack. That made me wonder: Is it really exposing people to other viewpoints that doesn't work? Or is it exposing people to extreme viewpoints that doesn't work?

I think you put your finger on it. What we saw over the course of that month, and in talking to her after she followed the bot, is that she was never focusing on the moderate posts of, say, a David Brooks, a center-right Republican [and columnist for The New York Times], or a Steve Bullock, the centrist governor of Montana. She was focusing on [Sen.] Ted Cruz and the most polarizing parts of the continuum. Extremists are a huge part of the reason this exposure backfires, because what we're being exposed to gives us a dosage of identity, rather than a dosage of information.

In your experiment, you have people follow these bots for a month. But people's viewpoints are based on their entire lifetime of experiences. Do you think that we can really write off the potential that exposing people to differing viewpoints on social media would have a moderating effect over time? What further research can be done on that?

You're absolutely right. People are deeply complex, and they're encountering this information from different perspectives, and different media channels. One of the things that I wanted to do with this book was to really tell the story of the online versus the offline. So to do that, and to really figure out why people are having this counterintuitive reaction, we did interviews with about 80 Republicans and Democrats. The book presents the stories of people as they're stepping outside their echo chambers, and tries to really fill in how what you see online is such a small part of the story.

You interviewed internet trolls. What did you learn from those conversations?

One story that was shocking from the research in this book was the story of this guy, Ray. He claims to be a centrist Republican, and he's very polite, even deferential. He goes out of his way to say all these people on the internet are always getting in fights, and they're probably just losers who live with their mother, right? And then we go to look at his Twitter data, and what we discovered is that this guy was probably the biggest political troll I've ever seen in 10 years of studying political extremism online. The question is: Why does he do that? What's this Dr. Jekyll and Mr. Hyde thing that he's doing every night?

What we discovered is that the reason this guy Ray does this is that he's a social outcast in his real life. He's recently divorced. He actually lives with his mother; he was literally talking about himself in our interview. He's created two separate realities for himself. The micro-celebrity that he's been able to achieve from sharing this unspeakable stuff on Twitter is actually profoundly important to him. He checks nightly: How many followers do I have? How many likes did I get? Those numbers fulfill a real sense of purpose for him.

If so much of what people are seeking is the feedback, the likes and the engagement and the retweets, how much of an impact do you think it would have for all that stuff to disappear?

To clarify, I'm not suggesting that all of us have the potential to become this guy, Ray. In fact, he's the outlier. The far more common story is relatively moderate people who are just completely invisible on social media. The other story that really stuck with me from this research was the story of a woman I called Sarah. We begin all of our interviews by asking people to tell us about the last time they used social media. She said: I was up late the other night, and the NRA posted something about how it's Americans' right to own guns. She posted something like: My husband owns a gun and he's a responsible gun owner. Something fairly innocuous in the landscape of America's gun debate.

And then she said, within minutes, someone looked at her Twitter feed, saw that she had kids and posted: I hope your kids find your gun and shoot you. Unfortunately, this story is all too common. Experiences like this made her completely disengage. Unlike Ray, for whom all this feedback is vital to his sense of identity, for Sarah it's actually a liability.

This is the much more common story that we see among this huge, but largely invisible, more moderate majority. It's not that we're stuck in an echo chamber and seeing the same type of people. It's that the people we're seeing are very different online and off. And that has terrible consequences for the rest of us and makes us all feel more polarized than I think we really are.

Instagram has begun hiding likes — is there any evidence it's working?

In the case of Instagram, one of the key problems is that a lot of research happening inside social media companies just never gets shared publicly. But we can look at some traces. In one recent study, for example, some political scientists used web tracking data, where someone agrees to share with researchers every single digital footprint they make, and basically tracked how often people progress from a moderate view to a more extreme one. What they discovered was actually pretty surprising.

If you take the pulse of the public debate, you'd say, it's the algorithms that are the problem. Companies are profiting. This is a really neat, tidy explanation. But when we look at the data, this only seems to happen to probably well under 2% of people, and maybe as few as one in 100,000 people.

Now, these are preliminary studies; we need a lot more research, I think, and above all we need more transparency from the platforms to really get to the bottom of it. But thinking about the human drivers of polarization is something we haven't done enough of. I'm worried that if we focus too much on these popular ideas, which have very little evidence behind them, we might wind up not really moving the needle very much.


Sens. Ben Sasse (left) and Chris Coons speak at a Senate hearing on algorithms and social media. Photo: Al Drago/Getty Images


In the book, you also write about a social media platform you built at the Polarization Lab to test whether there was a way to design a new social network that would encourage civility. The main difference is it's anonymous. For me, that was shocking — I've seen what happens on Reddit. Why did anonymity work for this platform, which you called DiscussIt?

I was shocked too. The theory was that, OK, on the one hand, anonymity allows us to escape social norms, and maybe we'll be uncivil to each other because there's no consequences. On the other hand, escaping identity and public pressure to conform to our side could in theory allow us to entertain ideas that we wouldn't really feel comfortable talking about on a public forum like Twitter.

So we asked people a bunch of questions, all sorts of stuff about their political views. And then we asked a bunch of questions about two very controversial topics, immigration and gun control. After that survey, we waited a little while, and we invited half of them to test this new social media platform called DiscussIt. They were told they were going to chat anonymously with someone else. What they didn't know is that the invite code that we gave them to log on to our platform was pairing them with a member of the other party to talk about either immigration or gun control. We didn't tell them, "Please be nice."

What we found — which was really, really surprising — is that people who used the anonymous chat app to talk about either gun control or immigration depolarized much more than people who didn't. That effect was even stronger for Republicans.
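To make the matching mechanics concrete: The book doesn't publish DiscussIt's code, so everything in the sketch below (the field names, the invite-code format, the pairing function) is a hypothetical illustration of how cross-party matching like this could work, not the lab's actual implementation.

```python
import random

def pair_across_parties(participants, topics=("immigration", "gun control")):
    """Hypothetical sketch of DiscussIt-style matching: pair each Democrat
    with a Republican, assign the pair one hot-button topic, and mint a
    shared invite code that drops both users into the same anonymous chat."""
    democrats = [p for p in participants if p["party"] == "D"]
    republicans = [p for p in participants if p["party"] == "R"]
    random.shuffle(democrats)
    random.shuffle(republicans)

    return [
        {
            "invite_code": f"discussit-{i:05d}",  # same code sent to both users
            "topic": random.choice(topics),       # immigration or gun control
            "members": (dem["user_id"], rep["user_id"]),
        }
        for i, (dem, rep) in enumerate(zip(democrats, republicans))
    ]

# Example: four survey respondents become two cross-party chat rooms.
sample = [
    {"user_id": "u1", "party": "D"}, {"user_id": "u2", "party": "D"},
    {"user_id": "u3", "party": "R"}, {"user_id": "u4", "party": "R"},
]
print(pair_across_parties(sample))
```

The key design point is that the pairing is invisible to participants: They see only an invite code and a topic, never their partner's party affiliation.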

I struggle to see how that can be applied to existing large platforms. You add anonymity and you lose the other parts of social media that are already not toxic, like sharing baby and dog pictures. Can you really overlay this on any of the platforms that we are living with today?

I would never say, "Let's make Facebook anonymous," for example. I think what we're seeing is a splintering of social media into all kinds of platforms. Why should we have our political discussions in the same place that we have our cute kid pictures and cat videos? Maybe we need to recognize that most people don't want to talk about politics: Something less than 6% of Facebook posts are about politics. We shouldn't expect platforms that are really designed for sharing cute cat pictures to produce rational debate.

Instead, we need to create platforms where you're rewarded for producing content that lots of different types of people appreciate. I'm imagining something more like Stack Overflow, a site where software developers answer each other's questions, and when you answer someone's question correctly, you gain status. What I think we need is a platform for people who can and will engage in productive conversations about politics.
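One way to picture that kind of reward structure: score a post not by raw likes but by how evenly its approval is split across party lines. The rule below is purely illustrative, an assumption of ours rather than a formula Bail proposes in the interview or the book.

```python
def cross_partisan_score(upvotes_by_party):
    """Illustrative scoring rule (not from Bail's book): total approval,
    discounted by partisan imbalance. A post applauded by only one side
    scores zero; equal support from both sides keeps the full count."""
    d = upvotes_by_party.get("D", 0)
    r = upvotes_by_party.get("R", 0)
    total = d + r
    if total == 0:
        return 0.0
    balance = min(d, r) / max(d, r)  # 1.0 when support is perfectly even
    return total * balance

print(cross_partisan_score({"D": 50, "R": 50}))   # 100.0: broad appeal wins
print(cross_partisan_score({"D": 100, "R": 0}))   # 0.0: one-sided applause
```

Under a rule like this, the status-seeking strategy that works on Twitter (thrilling your own side while enraging the other) stops paying off.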

Does it really do us any good, as a society trying to become less polarized, if all the people who are open-minded about having discussions with the other side go off to have their conversations there, while the rest of us keep battling each other in the main arena?

I think if we move the conversation to another arena, the other platforms are going to be a much less entertaining place for trolls to play. You can think of it as quarantining some of the extremism. I'm OK with that as long as it eventually gives the people who are willing to have productive dialogues a place to play where trolls aren't incentivized to ruin it all.

I think one of the most surprising claims in the book, or any book about polarization written in 2021, is that a lot of the polarization we see is false polarization: When we look at social media, we think we're more polarized than we really are. Rates of partisanship are actually pretty stable.

When I read that, I thought: Is that really what we mean when we talk about polarization? We're not really talking about the partisan split. We're talking about antipathy toward the other party. And by a lot of measures, that seems to be growing. So let's end with that: Is polarization getting worse or not?

We make a distinction between issue-based polarization and affective polarization, which is what you think of the other side, independent of its ideas. Two months ago, I published a piece in the journal Science with a lot of other social scientists in which we identified that, for the first time ever, out-party animus has overtaken in-group love. So you're absolutely right that this is the trend, and the one that concerns a lot of us.

Now, the important question is: Is that because of social media or not? Unfortunately, we know that trend began before large-scale social media usage. So, tempting as it may be to say that social media explains it all, I don't think that's the case. I think the question we need to ask is not whether social media polarizes us, but how social media could be reconfigured to help counter polarization.
