The first time Mike Schroepfer canceled our interview was the same week President Trump posted messages on both Facebook and Twitter threatening to shoot looters during protests over George Floyd's murder in Minnesota. The second time he canceled our interview was the week Facebook employees staged a virtual walkout opposing Facebook's decision to leave President Trump's post untouched.
As Facebook's chief technology officer, Schroepfer — better known as Schrep — is one of the social media juggernaut's senior-most executives. But at the time, Facebook CEO Mark Zuckerberg was still struggling to explain to the world and to Facebook employees how the company had decided what to do (or not do) with Trump's post.
Schrep certainly wasn't going to be the first to do damage control with the media. When I finally caught up with him this week, he apologized profusely for the scheduling delays. "It's been a crazy couple of weeks," Schrep said.
In truth, it's been a crazy couple of months in an already crazy couple of years for Facebook. Schrep spoke with me about all of it: the recent employee revolt, what it took to keep Facebook running when COVID-19 sent everyone home, and how his team developed AI to fill in for all those people.
This interview has been edited and condensed for clarity.
You manage a huge team at Facebook, and over the last few weeks, there's been a good deal of employee backlash to Mark's decision to leave President Trump's posts about Minnesota protesters up. How did members of your team react to that, and what was your response?
I don't think I can generalize the response of tens of thousands of people. Obviously the events of the last few weeks have had a disproportionately negative impact on anyone who has personally experienced, or been afraid of experiencing, violence at the hands of police, discrimination or bias, whether in the world or at work. We've been having a lot of hard conversations over the last few weeks with people who are sharing their stories and experiences, how these events have touched or triggered them, and who want to know that the company sides with them as individuals, sides with the ideals of racial justice and equality, and that we're doing whatever we can to further connection and cohesiveness in the world. It's been a tumultuous few weeks, and understandably so, particularly for friends and colleagues in the Black community.
There aren't easy answers to this, but a lot of what I'm trying to do is listen and engage, do my best to hear everyone, and funnel that energy into action in our products and in what we're building in the company.
How do you feel about the decision that was made on that Trump post?
I think these are unprecedented times, and the fact that we're having these divisive conversations in public is something I'd like to see less of. I'd like to see people come together with more empathy and understanding, spending a little more time trying to understand another person's point of view rather than going for a quick take or a dunk or trying to sound great.
At the same time, I think it's really important that we have really clear principles and rules and follow them whenever we're faced with these decisions. At the end of the day, we're here to try to provide a service to let people say what they want and not what we want. Trying to balance all those needs is challenging, and I think we have a lot of work to do to get this exactly right over the coming months and years.
What are you doing personally to expand diversity or understanding on your team?
There are lots of different things. First of all: continuously making space for all points of view and backgrounds, and personally getting involved in recruiting and advancing people on our technology teams across the company. And engaging with people on how we can use our technology to advance the things we care about.
I think people are missing the headline that we took down more hate speech in Q1 than ever before: 10 million pieces, and our proactive rate has gone from 0% to 88 or 89% in three years. That's where I try to spend a lot of my time day to day. If there's one less piece of terrible content on Facebook because our systems got better at finding it, the summation of that work is the experience people are going to have on the platform.
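For context, "proactive rate" is the share of actioned content that Facebook's systems flagged before any user reported it. A quick gloss of the arithmetic, using invented numbers rather than Facebook's actual figures:

```python
# Hypothetical numbers to illustrate the metric, not Facebook's data.
actioned_total = 10_000_000          # pieces of hate speech acted on
found_by_systems_first = 8_850_000   # caught before any user report
proactive_rate = found_by_systems_first / actioned_total
print(f"{proactive_rate:.1%}")       # 88.5%
```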
During COVID, we decided health information was so important we put an information hub at the top of everyone's Facebook News Feed, and 2 billion people got access to authoritative information on health.
We decided that the events of the last few weeks in the United States were significant enough that we needed to once again direct people to good information, this time on racial justice and the Black community. So we launched the Lift Black Voices unit within Facebook, not only to combat bad content but to make sure we could amplify positive content and good voices. That was one of hundreds of ideas submitted by employees, which one of my teams helped organize, rally around and ship. Ultimately my job is to express what we're doing in our products and technology.
Facebook recently committed to doubling the number of Black and Latinx people in the company by 2023, and during a recent event on AI at Facebook, you said something about the importance of having a diverse team working on AI to eliminate some of the biases that can creep into AI tools. What is your plan to ensure your team, the tech team, the AI team, is more representative?
What we do on this is invest our time and energy in sourcing, developing and retaining diverse talent across all these teams. Facebook University is a summer program we set up to encourage more people from a wider range of backgrounds to enroll in computer science at universities. I've now met full-time employees who graduated from that program five years ago. We try to scale that program out as much as we can.
This is sort of like the question of what we're doing to manage content on our site: It's a problem that requires constant time and attention because, obviously, if you don't focus on it, it doesn't happen on its own. The answer is a lot of hard work at the recruiting, mentorship and encouragement level, plus pledging goals, as we have, and measuring where we are against them. So we're committing to this like we commit to other business objectives and holding ourselves accountable as we do or don't achieve them.
I want to talk about remote work. Facebook is one of the companies that made the early decision to send employees, and even moderators, home. I'm sure that decision involved Mark's team, HR and others, but I imagine it's also a significant technical lift. Can you walk me through that decision-making process, what input you had, and what action your team had to take to make remote working feasible?
It feels like years ago even though it was just a few months ago. What I was astounded by honestly was that everything still worked. We have all sorts of crazy contingency plans to keep the site running. We regularly run these fire drills where we intentionally break something to simulate failure. We might take an entire data center or data center region offline to make sure our systems can stay resilient even if something really bad happens. We did not ever run a fire drill for sending our entire employee base home. We just hadn't run that test before. We, in Facebook parlance, did it live.
The first challenge was pretty mundane. It's just: Can everyone get access to tools? Do we have the right bandwidth on our VPN networks? Do all our internal tools and video chat systems work? That was basic priority No. 1. Then we had to get equipment to the right people. That took a lot of creative sourcing and doing.
Then I think the biggest challenge in the early days was, in order to keep the site running, we're depending on continually adding compute and networking capacity. We're building out data centers and adding servers in more than a dozen regions around the world, and that gives us more compute capacity, which allows us to deal with the fact that more people are using our products every single day.
We not only sent everyone home, but we paused most of that activity. At the same time, we saw huge spikes in usage of the product. So everyone's home, we can't add capacity, and our products are seeing peak usage. In the early weeks, we definitely had a lot of Sundays where we were fretting pretty hard about whether we'd make it through the weekend.
What was awesome was the engineering team came together and did a lot of heroic work to optimize our site, saying, "Well, we won't get new servers, so we need to make our code more efficient and squeeze more capacity out of our existing hardware. And in the places where we can build, let's build where it's safe to do so."
Then, there's a secondary challenge. Obviously we're dependent on tens of thousands of people who help us monitor and manage content violations on the site. They all had to go home, too. In many cases they had to stop work because we didn't have the capability to have them work remotely. For everything from data privacy to resiliency reasons, we needed them on site. That put a big strain on our automation systems to pick up the slack.
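The fire drills Schroepfer describes at the top of this answer are, in effect, chaos-engineering exercises: break a piece of infrastructure on purpose and verify the rest absorbs the load. As a rough illustration only (the Region class, capacities and loads below are invented, not Facebook's tooling), a region-failover check might look like this:

```python
# A minimal sketch of a region-failover "fire drill": take one region
# offline and check whether the survivors can absorb its traffic.
# All names and numbers are hypothetical, not Facebook's actual systems.
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    capacity_rps: int  # requests/sec the region can serve at most
    load_rps: int      # requests/sec it currently serves

def drill(regions: list[Region], victim: str) -> bool:
    """Simulate losing `victim` and spreading its load evenly."""
    survivors = [r for r in regions if r.name != victim]
    orphaned = next(r for r in regions if r.name == victim).load_rps
    share = orphaned / len(survivors)
    # The drill passes only if every survivor stays within capacity.
    return all(r.load_rps + share <= r.capacity_rps for r in survivors)

regions = [
    Region("us-east", capacity_rps=100, load_rps=80),
    Region("us-west", capacity_rps=100, load_rps=70),
    Region("eu-west", capacity_rps=100, load_rps=75),
]
print(drill(regions, victim="us-east"))  # False: us-east's load won't fit
```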
When you realized moderators were going to have to go home and the AI was going to have to pick up the slack, was that a panic-inducing moment? How ready were the systems for that responsibility?
The whole thing felt panic-inducing every day for different reasons, to be honest, whether it was worrying about the site being up, the health and safety of our co-workers, or how to keep our commitments. I started doing regular video broadcast Q&As with the entire company to keep in touch on what's going on. From the very first or second one, I said: Here are our four priorities. No. 1: You've got to take care of yourself and your family. It's hard to be useful to the company if you're not taking care of yourself and your family first. The second thing we have to do is keep everything running. The third is that none of our commitments have changed, and everyone expects us to keep them, rightly so. Whether that's managing content on our site or managing privacy, we don't get a break or a pass on any of that because of current events. Only if we take care of all of that do we get the opportunity to build the new products we want to build, like we did for COVID. That was my framework for taking people through this and helping them understand, and it's the way I still think about it.
In terms of the automation and human reviewers: My first thought, to be really honest, was how thankful I was that we at least had these systems. If we had to run this fire drill five years ago, we would have been in deep, deep trouble. These systems aren't perfect, but they do a lot of work, and they bought us some time. It's not like people stopped posting content; in fact, usage was dramatically up. We still had to manage bad stuff on the platform.
You recently announced SimSearchNet, an AI tool that is able to detect images that are nearly identical, but not exact, so Facebook can do a better job removing violative content at scale, even if there are small changes to a given image. You released this in part to cope with the abundance of spam related to the COVID-19 crisis. What was the original timeline for that tool, and were you confident in its ability to do this right in the COVID era?
I think it was January of 2018 when a bunch of the original work kicked off. Like anything good, it was a two-and-a-half-year overnight success. We were lucky in the sense that this had been in development for years, and it was close to rollout.
This was one of many tools we're developing that allow us to adapt more rapidly. When we built it a couple of years ago, we didn't have COVID in mind, obviously, because COVID didn't exist. But when new things pop up and we want to adapt quickly, like when we realize people are running possible scam ads for face masks or hand sanitizer and we want to enforce against those, we can quickly use SimSearchNet to find near-duplicates.
I think it's a good example of a long-term investment in a technology that then gives us the ability to rapidly adapt to a new circumstance, which is sort of the whole thing I'm trying to get done in the company, right?
Facebook isn't the only company dealing with this problem. Is there any thought of open-sourcing SimSearchNet to other tech companies, similar to how you've worked with them on terrorist content through the Global Internet Forum to Counter Terrorism?
SimSearch builds on top of a lot of technology we've built and open-sourced from the AI research team. Our priority for that particular project was to get it into production as fast as possible. As with everything, if we find things that we think are of utility for others, we try to open-source them, including the models and whatnot whenever we can.
As you've had to rely more on AI for moderation the last few months, what blind spots have surfaced that you feel like you need to address?
I think we'll be able to talk about this in more detail in the next community standards report. But one mistake people make with all of this is to assume that what we're trying to do is fully automate things. We're not. We're trying to augment people so they can make decisions that are ultimately human decisions, and then scale those decisions out.
SimSearchNet, for example, doesn't decide what's good or bad. But once I've decided that these six face mask ads are against the new policy we literally implemented yesterday, SimSearchNet can go find the tens of thousands or millions of other examples of those things and get them off the platform.
But you still need the person in the loop to pick the policy and provide the examples. So I think we're learning what we always knew, which is that you need the two together: the AI and the machines for scaled enforcement, and people to make the day-to-day decisions and changes, and certainly to help us with the most nuanced problems. Over time, what I'd hope is that machines take on more of the day-to-day grunt work and free up people's time for the hard, nuanced questions.
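SimSearchNet itself isn't public code, but the flag-then-sweep workflow Schroepfer describes can be sketched with FAISS, the similarity-search library Facebook's AI research team has open-sourced. Everything below is illustrative: embed() is a stand-in for a real image-embedding model, and the catalog, flagged ads and threshold are invented.

```python
# Illustrative only: a human flags a few bad images, the system sweeps an
# index of image embeddings for near-duplicates. Built on FAISS
# (github.com/facebookresearch/faiss); embed() is a placeholder model.
import numpy as np
import faiss

DIM = 128  # embedding size; a real system would use a trained image model

def embed(images: list[str]) -> np.ndarray:
    """Placeholder: map each image to a DIM-dimensional float32 vector.
    In practice this would be a CNN or transformer embedding."""
    rng = np.random.default_rng(0)
    return rng.random((len(images), DIM), dtype=np.float32)

# Index the embeddings of every candidate image on the platform.
catalog = [f"image_{i}" for i in range(10_000)]
index = faiss.IndexFlatL2(DIM)
index.add(embed(catalog))

# A reviewer flags a handful of violating ads; sweep for near-duplicates.
flagged = ["scam_mask_ad_1", "scam_mask_ad_2"]
distances, ids = index.search(embed(flagged), 5)  # 5 nearest per query
THRESHOLD = 10.0  # tuned on labeled pairs; near-dupes have small distances
for query, d_row, i_row in zip(flagged, distances, ids):
    matches = [catalog[i] for d, i in zip(d_row, i_row) if d < THRESHOLD]
    # (With this toy seed, each flagged ad collides with one catalog image.)
    print(query, "->", matches)  # queue these for removal or review
```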
Switching gears slightly, can you give a progress report on Facebook's plans to integrate and fully encrypt WhatsApp, Messenger and Instagram Direct? And how are you thinking through a technical solution to the fact that once these platforms are encrypted, Facebook will be blind to a lot of this bad content we've been discussing?
I don't really have much to report on the interoperability work. In terms of how to manage problematic content with encryption: We face this today with WhatsApp, which is already fully end-to-end encrypted, so we are spending a lot of time understanding how to combat bad content on that platform. There are ways to do this that don't require looking at the content, like letting users report things or finding group names that are problematic. Encryption certainly removes one tool from our tool chain, which is being able to understand the context of the content itself.
A lot of why we're taking it slow on this, and why we announced the work so far in advance, is so we can work with governments, third parties and outside experts to get their points of view on the trade-offs, and so we have time to develop the tools and techniques live on WhatsApp before we roll them out to other parts of the product.
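The content-blind signals he mentions, user reports and problematic group names, can be shown with a toy check. The term list and threshold below are invented for illustration; the point is only that the server never needs the message bodies:

```python
# Toy sketch of content-blind moderation for an end-to-end-encrypted
# service: the server never decrypts messages, so it relies on metadata
# such as group names and user reports. All terms/thresholds are invented.
BANNED_NAME_TERMS = {"scam", "spam"}  # hypothetical watchlist
REPORT_THRESHOLD = 3                  # reports before a group is queued

def needs_review(group_name: str, report_count: int) -> bool:
    """Flag a group for human review using only metadata signals."""
    name = group_name.lower()
    if any(term in name for term in BANNED_NAME_TERMS):
        return True  # flagged by name alone; message content never seen
    return report_count >= REPORT_THRESHOLD

print(needs_review("Totally Legit Deals", report_count=4))  # True (reports)
print(needs_review("scam central", report_count=0))         # True (name)
print(needs_review("Family chat", report_count=0))          # False
```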
Have there been any breakthroughs on that front?
You've seen product changes, for example, in certain regions on WhatsApp, where you maybe limit forwards so that people can't forward messages through as many hops.
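Those forward limits work roughly by attaching a forward count to each message and restricting how widely heavily forwarded messages can spread. A toy version, with invented thresholds rather than WhatsApp's actual parameters:

```python
# Toy sketch of forward limiting: each message carries a forward count,
# and heavily forwarded messages can be sent to fewer chats at once.
# Thresholds are invented, not WhatsApp's actual parameters.
from dataclasses import dataclass

MAX_CHATS_NORMAL = 5  # chats a typical message may be forwarded to at once
MAX_CHATS_VIRAL = 1   # limit once a message is heavily forwarded
VIRAL_THRESHOLD = 5   # forwards before a message counts as heavily forwarded

@dataclass
class Message:
    ciphertext: bytes      # the server never decrypts this
    forward_count: int = 0

def forward(msg: Message, chat_ids: list[str]) -> Message:
    """Forward to several chats at once, enforcing the depth-based limit."""
    viral = msg.forward_count >= VIRAL_THRESHOLD
    limit = MAX_CHATS_VIRAL if viral else MAX_CHATS_NORMAL
    if len(chat_ids) > limit:
        raise ValueError(f"can only forward to {limit} chat(s) at once")
    return Message(msg.ciphertext, msg.forward_count + 1)

msg = Message(b"...", forward_count=6)  # already widely forwarded
forward(msg, ["chat-a"])                # OK: one chat at a time
try:
    forward(msg, ["chat-a", "chat-b"])
except ValueError as err:
    print(err)  # can only forward to 1 chat(s) at once
```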
We've been talking about a lot of things that are really heavy — all this offending content popping up on Facebook every day. We know the impact that viewing that has on content moderators. What toll has looking at all this content taken on you personally?
It definitely has a risk of dimming my default optimistic view of humanity and the future. It's hard to see how awful people can be, how violent and hateful they can be. Part of what motivates me to get to work every single day is that every time we make an improvement to a technical system that catches something, or catches it faster, that's [some] number of people who don't have to see it, including content reviewers under our employment who don't have to look at it.
I don't know how to solve this problem any other way. We could certainly shut down our entire platform and say: Sorry, you can't say anything, or you can only say these preapproved messages and share these preapproved pictures or videos. That would certainly eliminate the bad content, but it has a number of other really terrible side effects.
Or we can try to do what we're trying to do and say: You know what? Letting people communicate with each other has to be a net good. It clearly has really bad expressions, really bad people, really bad actors. And our job is to catch that content and those people, get them off our platform, and open the platform up for its true purpose, which is helping people understand each other a little bit better.
I really struggle to insert myself into this conversation, because I feel like so many other people are so much more impacted by all of this. I'm really not trying to avoid your question. I just want to acknowledge that my job is to help those people, not to worry about myself.