Policy

Exit interview: Facebook’s former counterterrorism chief talks Meta’s moves in Russia

“They, like everyone else, don’t really know what a company's job is in this situation.”

Brian Fishman at a table with other social media company representatives

Brian Fishman (center) worked for five years as the counterterrorism chief at Facebook.

Photo: Vincenzo Rando/Kontrolab/LightRocket via Getty Images

Halfway through my interview with Brian Fishman, Russian regulators announced that they had fully blocked access to Facebook inside the country. Fishman wasn’t surprised. As Facebook’s former head of Counterterrorism Policy, he’d been watching his former employer and other tech giants try to “walk a tightrope” in Russia, attempting to keep lines of communication open for Russian people while also minimizing the propaganda and disinformation coming out of the Russian government.

“You're not going to be able to balance for long,” Fishman said.

Fishman knows more than almost anyone about how Facebook weighs the risks and rewards of silencing people — even very bad people — on the platform. In a company that’s more comfortable with wielding soft power through information labels and fact-checks, Fishman was the pointy end of the content moderation spear, the person responsible for maintaining the company’s list of “dangerous individuals and organizations” — the people and groups deemed too awful to be on Facebook or to even be praised by others on Facebook. That includes foreign terrorists like ISIS and its leaders, but also, in recent years, domestic extremists like the Proud Boys and violent conspiracy networks like QAnon.

Fishman announced in October that he was leaving the company after five years, smack in the middle of The Wall Street Journal’s reporting on whistleblower Frances Haugen’s disclosures — timing Fishman warned his Twitter followers not to read too much into.

The counterterrorism expert and current senior fellow at New America has kept pretty quiet about Meta’s moves ever since. But he spoke with Protocol about the decisions Meta’s making in Russia, how those decisions apply to other global conflicts and why he knew it was time to walk away from Facebook.

This interview has been lightly edited and condensed for clarity.

What do you make of the decisions Meta’s made related to Russia?

I think that Meta and other tech companies are wrestling with a tremendous geopolitical upheaval. What you've seen from the tech companies is a pattern that we've seen before, which is a relatively careful escalation of policies, in which one company will jump forward and then others will maybe jump a little bit further, and that has been a relatively gradual process.

The trick with Ukraine is that this really is a — not an unprecedented situation, but it is an extraordinary situation. Besides the fall of the Soviet Union, this is the biggest geopolitical event of my lifetime. It will have longer and broader impacts than 9/11. There is a fundamental reality that all sorts of actors are sort of feeling their way through this one really carefully, and the tech companies are no different.

One of the things we therefore have to be careful about saying is: Well, if [Meta] did something here in this Russian invasion of Ukraine, they must do it in other places. In terms of its global geopolitical impact, there just aren't a lot of direct comparisons.

A lot of folks wrongly said, well, human suffering here is different. That's not true. The human suffering in Syria was extraordinary. Some of the commentary has been pretty, frankly, racist. But it is true that the geopolitical difference is larger than these other kinds of circumstances, and it's going to impact Silicon Valley companies more than some of the other conflicts.

Are there systems that have been developed in other conflict regions that you can see Facebook relying on in this moment?

Facebook is way better at dealing with crises today than they were when I first got there. There are teams that deal with these things. They've got a much deeper bench of people that have dealt with this kind of problem in previous jobs. They haven’t been resting on their laurels.

But they, like everyone else, don’t really know what a company's job is in this situation. Every Silicon Valley company, their first instinct is: We’ve got to keep the lights on wherever people can talk. I tend to be skeptical of that view, generally. But I think it's right here. Keeping the lights on, keeping the information flowing in Russia in particular right now, is really important to letting people speak as repression increases. They've been trying to walk that tightrope. How do we limit our ability to be abused by an increasingly overtly authoritarian actor while trying to empower everyday people, who are not responsible for the crimes of the government, to speak to each other, to organize, to get information about what's happening?

You're not going to be able to balance for long. The Russian state is clearly on an authoritarian bent. It has the capacity to block reasonably effectively. And so I think that we're going to see that happen. They're going to force people to rely on systems that they think they can control more effectively.

At the same time, we have seen Facebook not operate in China. We've seen Google pull out of China. There are countries in the world where people would benefit from being able to, in an uncensored manner, communicate, and those companies have opted not to operate there. So I wonder: Do you think they're headed toward a similar moment with Russia?

I hope not, because I do think that information flowing freely is invaluable. But I think that if the Russian government puts conditions on their behavior in Russia, they have to consider the China option. If the Russian government says, “You must carry certain information in order to operate here,” or “You must give us access to certain information in order to operate here,” you can go down the list.

Isn't Russia kind of doing that with all these data localization and hostage-taking laws? [Editor’s note: After this interview, Russia also passed a law prohibiting the publication of “fake news” about the military, prompting TikTok to suspend most of its operations there.]

Pre-invasion, Russia was trying to set the legal table so that they would have those leverage points. The case with the Navalny app was an example of that. That's the danger of those laws more generally, around the world. They give governments a tool.

The U.S. government, our constitutional system, is clunky as hell sometimes, but it is built around the idea of making it difficult for governments to crack down on people. Many democracies are not built with those kinds of protections strongly embedded in the institutional structure. And that's an important thing to remember, because governments go bad sometimes, because they're made of people. And people make mistakes. They're prone to the dark side. That's why these laws can be dangerous.

I do think there's a difference between Russia trying to pass those laws in a pre-invasion situation and the sort of overtly authoritarian bent that we're seeing now. They're threatening protesters with five-year prison sentences, forcible conscription, these kinds of things. We are effectively moving into a place where their behavior is extrajudicial.

Going back to what you said about companies having systems in place to manage conflict: What are some of those systems?

There are improved processes for centralizing information from across the company and making sure it gets up to leadership. Companies didn't know that they were going to need that stuff. I would argue they should have known sooner than they did, but they didn't. But now a lot of those processes are there. And they exist. And they're staffed by people that know what they're talking about and have backgrounds doing that kind of work in government and elsewhere.

And so, you've got a lot of people — not just at a senior level — that have dealt with geopolitical crises in various forms now in government, and elsewhere. That doesn't mean that every decision winds up being a good one. But it does mean that they've got folks who have been through the wringer with these kinds of situations in the past.

The flip side of that is: Nobody who wasn't working age in 1989 to 1990 has been through something quite like this.

Do you think what Facebook went through in Myanmar is in any way instructive for this moment?

I think that when many people look at Facebook's performance in Myanmar, understandably, they focus on the failures. But there's also a series of actions that Facebook has taken in Myanmar that are more aggressive against the government than they've taken anywhere else in the world — and that are more aggressive, I think, than any other social media company has taken against the government in the world. And I think it's fair to ask if those kinds of actions can be taken in other extraordinary circumstances.

If you are going to do that, the bar needs to be really, really high. It’s untenable for companies to go around and smack the hands of governments all the time around the world. It needs to be a tool that they can pull out of their pocket in extraordinary circumstances. Advocates around the world have to understand that.

And what I worry about is companies being concerned about setting a precedent that they will then be asked to use all the time. What we need them to be able to do is set a bar that's really high, and all of us outside understand that that bar is really high, and that we're not necessarily going to have it used for the conflict that we think is particularly important.

Understanding that the bar should be really, really high — and I agree — do you think there was a missed opportunity along the way, post-2016, to do something more about Russian propaganda on the platform, given that that was a very targeted attempt to interfere in the election by a foreign government?

That was not the stuff that I was most directly involved in, and I really want to be careful coming out of a place like that. Especially when you're in a senior role, you're aware of a bunch of things, but you're not in every conversation if it's not the thing that you're working on.

So, I just saw that Russia did block Facebook.

It's not surprising. It's easy to criticize that iterative policy development that the companies do as incremental. But salami-slicing is a strategy in international relations.

One of the things we criticize the Russians for doing is they take a little bit, and they see what they can get away with. And they take a little bit more, then see what they can get away with. I don't know that that was a strategic choice by any of the companies. But I do think, as this entire world gets more mature, we're going to start thinking about those sorts of things as strategic choices in companies' geopolitical stances.

Sometimes, intentionally, you want to go all in. Other times, maybe you want to see what you can do before you elicit a response. My instinct is that this was not intentional by any of the companies in this case. But I do think, over time, as we get used to companies operating as geopolitical actors, these kinds of decisions may get a little bit more structured and more intentional.

Changing topics, you left the company in November. I'm interested in what led to that decision.

I was at Facebook for five and a half years, the longest job I've ever had. I'm not quite sure how that happened. Those jobs are incredibly intense. Bottom line, I got to a point where I didn't feel the same fire internally for some of the fights that you need to have. When you start to feel that it's time to go, it may not show up in your work right away, but it will. And so it's time to go.

I'm not going to get into specific questions. But I think one of the things that gave me comfort in leaving is that when I got to Facebook, the bench of people with backgrounds kind of like mine [was small]. My old team is stronger, personnel-wise, than it’s ever been. There is a broader universe of smart people that have a background in thinking about national security issues. It was easier for me to walk away feeling like there was a universe of folks that could take on some of those things.

And I won't lie, there were some things that I disagreed with and that I didn't want to do. And then I was frustrated. But to be honest, I don't really want to have a public discussion about it.

You announced you were leaving right around the time Frances Haugen came out with her disclosures, and you tweeted something like, “Just a reminder that correlation does not equal causation.” I’m interested in what you thought of what she brought forward.

I don't know exactly what she brought forward, because a bunch of the leaked stuff is still not available publicly. I did not know Frances Haugen. I don't think I've ever met her. I think that the folks that have access to those documents need to be very careful. Some of them may indicate really careful work. But many of them are going to reflect random people doing analysis on some issue that is close to their heart, and the terminology that they use and the methods that they use may not be indicative of how the organization as a whole measures or defines anything.

Folks need to be real careful with that kind of data as they analyze this. Social media companies aren't the only ones trying to figure out what to do with social media. Activists, governments are all struggling, you know, in their own ways, with similar problems. But a lot of that means: Don't just take stuff immediately at face value. You have to get down to: How are terms defined? Where did the data come from? How was it actually analyzed? And if you can't do that, you ought to be really, really skeptical.