People

Mozilla’s plan to fix social media (at least until Election Day)

Ashley Boyd, Mozilla's VP of advocacy, says we're long overdue for a more serious conversation about what we want from the internet. But first, she says, let's fix recommendations.

Mozilla open letter

Mozilla's open letter calls on Facebook and Twitter to make big changes to their platforms, at least until Election Day.

Photo: Mozilla

Mozilla loves a good open letter. At particularly pivotal moments for the tech industry, the company has a longstanding practice of loudly calling on important companies to do the right thing.

Most recently, ahead of the election, Mozilla published an open letter — on its website and as a full-page ad in the Washington Post — saying that Twitter and Facebook need to turn off some of their important recommendations until the election is over. "Facebook and Twitter still include features that could allow disinformation about voting and election results to go viral," said the letter, signed by Mozilla and 6,000 internet users. "This could escalate quickly and threaten the integrity of the U.S. election."

Ashley Boyd, the VP of advocacy at the Mozilla Foundation, spearheaded the effort, and oversees much of Mozilla's work trying to make the internet better, from all those open letters to the Unfck The Internet program. On this week's Source Code Podcast, she talked about why she's pushing Twitter and Facebook, how she picks Mozilla's battles, and the ways a bizarre 2020 could change the internet forever, even after life goes back to normal. If there's even such a thing.

Below are excerpts from the interview, edited for length and clarity.

I want to start with the open letter. I find the process behind that sort of fascinating, because you sit down, you say, "OK, we're about to have this very consequential election, how do we pick a couple of battles? What should it be about?" What is that decision-making process like?

It's great that you recognize that it's about picking battles, because that is really the art, I think, in this work. Making sure that what you're doing is additive, and not disruptive. And I will say that when we were thinking about what to really highlight as it relates to the upcoming election, we were really asking ourselves: What's going to be most impactful? What can be done responsibly in the weeks prior to the election?

I don't think it's helpful to introduce and call for big changes that we know, practically speaking, would take quite a lot of effort to actually implement. So we were looking for big changes that we thought were the missing ingredients in what the platforms had already instituted. We've seen the companies make a lot of changes, even up to these last weeks, and they're all incrementally important. But introducing a new idea that's detailed, and working at the edges, didn't feel right to us. So that's why we looked for a big action that would address the need for scalable solutions and reduce bias, and concerns about bias. And that's where we landed on these two asks — around Facebook groups and Twitter trending topics.

And the thing they have in common is that they're about recommendations, right? We get really hung up on specific policy decisions about specific kinds of content. Why do you see recommendations as the biggest version of this thing to talk about?

Well, recommendations are an instrumental way that the companies themselves are unintentionally recommending viral disinformation. So it's the place where they're active in this ecosystem. In the case of Twitter trends, it packages a set of content that then makes it a trend. Maybe it would have been a trend otherwise, but it definitely is the company setting forth this concept that this is A Thing. And we know that as information is coming at us quickly, even these moments where you sort of carve out what is real or what is a trend are very important to the larger conversation and ultimately, the election.

The Facebook group question is similar but slightly different. We've actually seen Facebook take action to stop health group recommendations. They said they did that because they thought it was critical that people get authoritative information about health concerns. And we feel like the election is basically exactly the same. Particularly in this critical time period, pre- and post-election, it's critical that people get authoritative information about the voting process, about the voting outcomes, and where things stand. And so that's a place where we think the intervention by the companies is in line with what they've done before and just truly impactful at this time period.

Does watching them do things like shut down health recommendations kind of make you tear your hair out? I've talked to people who see these small steps on the sides of these organizations and it's like, OK, you know the problem. You know that what you're recommending is a challenge, that not only is it about the actual content, it's about who's seeing it and when and how. You're just seeing it in this sort of very narrow, specific way.

Yes. It's both encouraging and infuriating. It's encouraging to know that they understand how this is working in that context and have taken action, so I take it as an incremental step in the right direction. It also gives me hope that they know structurally, on their end, how to do it quickly. So when we're putting out a call for pausing Facebook recommendations in the U.S. entirely, it gives me confidence that they've internally worked through the system about how to do that well and quickly. But it is frustrating to see a combination of small steps at a time when we think there need to be bold, very big steps taken. So I think it's frustrating from that standpoint.

It's also frustrating because of the lack of transparency about what's working, and what the impact is of these steps. It leaves us in the dark about what to push for more. Maybe it's a small set of things that actually, quite narrowly applied, really have a big impact. But we don't know that, because the companies aren't sharing freely and working with researchers to document those successes. Or those dead ends! Those are equally important to understanding what works and what doesn't.

As an advocate, I have no interest in proposing and pressing for a solution that's not effective. That's counter to my practice, and to what the world needs. But we really are operating in a black box unless we set up a different relationship with the companies around transparency and third-party research.

Was there a version of the thinking that you guys were doing that was obviously too big, where you sit down and you say, "Let's make them turn off all the algorithms!" That clearly isn't going to happen. So what's the too-big version of the solution here?

Yeah, I think the too-big version is "turn these off forevermore." I don't know if that's actually a good idea. I think there are good examples where trending topics have been helpful in some contexts. I'd like to work that out, and think about different use cases where it could be successfully shifted: You see Twitter already trying to create more contextualization around trends. I think we were interested in calling for a pause — I use that word on purpose — for a certain set of time, to help them get a better understanding of what might be fruitful and important changes to these features long term. But we weren't ready to say that the company should disable them forevermore.

I think, particularly as concern about the post-election period has grown, one of the things we really want to emphasize is that it's not too late to take action. Doing this after the election could also be very helpful. One thing that we talked with Facebook about is this notion of new groups that could spike after the election based on concerns about the results or the process. Their system relies on user reports and AI looking at the content, but if you have a new group that doesn't have a lot of content, it's difficult for their system to flag a potentially problematic group to not recommend. So it's a case very specific to this post-election period: it isn't an ongoing problem, but it is something we think deserves attention right now.

How do you balance those things? Because we're in the middle of so many sort of theoretically temporary, but very complicated things, whether it's the election or the pandemic. How much time do you spend thinking about what we do today or this week, and then turn off when things go back to ... whatever normal looks like? Versus what should we be doing now that sticks around forever, even as the world keeps changing?

This is a really good question. In one sense, it's the Before Times thinking that this is just a time period that's particularly problematic. That's false. That's definitely a game that I play with myself to try to feel some control over this time period.

There are so many elections globally next year, so this is all a run up. There's a lot of focus and speculation and attention on the U.S. election, but we have to think about these conversations as being relevant for next year and all the elections to come. And of course on COVID and all the misinformation about COVID, and the disinformation, this does show how perennial the problems are.

So my recommendation, and our thinking, is that after we get over this particularly challenging and high-focus time around the election, we really need to come back to some fundamentals. Things feel chaotic and a little bit patchwork. And actually, our tracker of the platforms' election policies really shows that there's just a bunch of different approaches. Different looks at how to think about political ads, different looks at how to deal with political figures. It's all a mishmash. It's fine for there to be variance, but it does feel a little chaotic and difficult to get a handle on.

So we're really looking at some fundamentals about limiting the spread of disinformation, what's most effective, providing advertising transparency across the board, not just on political ads. [And] empowering consumers: The companies have put some features in the hands of consumers to report disinformation and to better control their privacy, but we have to get more concrete about what that looks like, and what it looks like done well.

And then back to this issue about supporting third party researchers and looking at what works. We think that combination will really help provide a little bit of stability as we lurch from crisis to crisis. Those are the fundamental tentpoles of what we need to be doing to have a sane and effective approach to this work.

I feel like this is the second or third election in a row when I've thought, "this is going to be the time when the two candidates for president actually talk about tech in a really substantive way." And it just keeps not happening. And you would think it would be this year! There's the antitrust stuff, China stuff is out there, TikTok was the biggest story in the world for a while, and yet it has been kind of absent from the discussions of the politicians who are running for president. Why do you think that is?

This year, I think it's because of such huge and impactful issues around us. Around COVID, particularly, and all of the economic and personal toll there. But, while I don't hold myself personally responsible, I hold us collectively responsible for failing to create really compelling, crisp narratives about why this is important. We are not translating the kind of nerdy technical details and policy details into the sort of top-level storytelling about why people should care.

And ultimately, I think our political leaders are more like the general public. Not because they're not engaged and interested in the issues, but when I think about trying to reach policymakers and really get them excited, and feeling like tech issues are critical to their political leadership, I'm speaking to them more like a regular user. Because it's a complex and difficult issue to break down. So I think that that's a goal of mine, and many other people: to really get to that kind of core narrative-building challenge. We haven't gotten there yet.

All right, so 2024. That's going to be the election where we actually talk about this stuff.

Yeah, that's our goal! We're going to be back here, and we're going to say, "Isn't it amazing to hear political leaders talk with such nuance and passion about all these issues that we care about?"

Power

How the creators of Splitgate built gaming’s newest unicorn

1047 Games is now valued at $1.5 billion after three rounds of funding since May.

1047 Games' Splitgate amassed 13 million downloads when its beta launched in July.

Image: 1047 Games

The creators of Splitgate had a problem. Their new free-to-play video game, a take on the legendary arena shooter Halo with a teleportation twist borrowed from Valve's Portal, was gaining steam during its open beta period in July. But it was happening too quickly.

Splitgate was growing so quickly and unexpectedly that the entire game was starting to break, as the servers supporting it began to, figuratively speaking, melt down. The game went from fewer than 1,000 people playing at any given moment to suddenly having tens of thousands of concurrent players. Then it grew to hundreds of thousands of players, all trying to log in and play at once across PlayStation, Xbox and PC.

Nick Statt
Nick Statt is Protocol's video game reporter. Prior to joining Protocol, he was news editor at The Verge covering the gaming industry, mobile apps and antitrust out of San Francisco, in addition to managing coverage of Silicon Valley tech giants and startups. He now resides in Rochester, New York, home of the garbage plate and, completely coincidentally, the World Video Game Hall of Fame. He can be reached at nstatt@protocol.com.

While it's easy to get lost in the operational and technical side of a transaction, it's important to remember the third component of a payment. That is, the human behind the screen.

Over the last two years, many retailers have seen the benefit of investing in new, flexible payment options that reflect the changing lifestyles of younger spenders, who are increasingly holding onto their cash, despite reports to the contrary. This means it's more important than ever for merchants to take note of the latest payment innovations so they can tap into the savings of the COVID-19 generation.

Antoine Nougue, Checkout.com

Antoine Nougue is Head of Europe at Checkout.com. He works with ambitious enterprise businesses to help them scale and grow their operations through payment processing services. He is responsible for leading the European sales, customer success, engineering & implementation teams and is based out of London, U.K.

Protocol | Policy

Why Twitch’s 'hate raid' lawsuit isn’t just about Twitch

When is it OK for tech companies to unmask their anonymous users? And when should a violation of terms of service get someone sued?

The case Twitch is bringing against two hate raiders is hardly black and white.

Photo: Caspar Camille Rubin/Unsplash

It isn't hard to figure out who the bad guys are in Twitch's latest lawsuit against two of its users. On one side are two anonymous "hate raiders" who have been allegedly bombarding the gaming platform with abhorrent attacks on Black and LGBTQ+ users, using armies of bots to do it. On the other side is Twitch, a company that, for all the lumps it's taken for ignoring harassment on its platform, is finally standing up to protect its users against persistent violators whom it's been unable to stop any other way.

But the case Twitch is bringing against these hate raiders is hardly black and white. For starters, the plaintiff here isn't an aggrieved user suing another user for defamation on the platform. The plaintiff is the platform itself. Complicating matters more is the fact that, according to a spokesperson, at least part of Twitch's goal in the case is to "shed light on the identity of the individuals behind these attacks," raising complicated questions about when tech companies should be able to use the courts to unmask their own anonymous users and, just as critically, when they should be able to actually sue them for violating their speech policies.

Issie Lapowsky

Issie Lapowsky (@issielapowsky) is Protocol's chief correspondent, covering the intersection of technology, politics, and national affairs. She also oversees Protocol's fellowship program. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University's Center for Publishing on how tech giants have affected publishing.

Protocol | Workplace

Remote work is here to stay. Here are the cybersecurity risks.

Phishing and ransomware are on the rise. Is your remote workforce prepared?

Before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

Photo: Stefan Wermuth/Bloomberg via Getty Images

The delta variant continues to dash or delay return-to-work plans, but before your company institutes work-from-home-forever plans, you need to ensure that your workforce is prepared to face the cybersecurity implications of long-term remote work.

So far in 2021, CrowdStrike has already observed over 1,400 "big game hunting" ransomware incidents and $180 million in ransom demands averaging over $5 million each. That's due in part to the "expanded attack surface that work-from-home creates," according to CTO Michael Sentonas.

Michelle Ma
Michelle Ma (@himichellema) is a reporter at Protocol, where she writes about management, leadership and workplace issues in tech. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.
Protocol | Fintech

When COVID rocked the insurance market, this startup saw opportunity

Ethos has outraised and outmarketed the competition in selling life insurance directly online — but there's still an $887 billion industry to transform.

Life insurance has been slow to change.

Image: courtneyk/Getty Images

Peter Colis cited a striking statistic that he said led him to launch a life insurance startup: One in twenty children will lose a parent before they turn 15.

"No one ever thinks that will happen to them, but that's the statistics," the co-CEO and co-founder of Ethos told Protocol. "If it's a breadwinning parent, the majority of those families will go bankrupt immediately, within three months. Life insurance elegantly solves this problem."

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash and the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Signal at (510) 731-8429.
