Policy

Inside the Facebook Oversight Board’s Trump decision

Thomas Hughes, director of Facebook's Oversight Board, explains the board's decision and why he thinks self-regulation "is the most effective form of content regulation."

Protocol spoke with the director of Facebook's Oversight Board.

Photo: Nikolas Kokovlis/NurPhoto via Getty Images



Following months of speculation, Facebook's Oversight Board issued a quasi-decision Wednesday in the case of former President Donald Trump's account. The board upheld Facebook's ban of Trump following the Jan. 6 riot at the U.S. Capitol, but ordered Facebook to revisit its decision and come up with a fresh ruling within six months.

The outcome led some to accuse the board of shirking its responsibility, while others applauded its members for not letting Facebook off the hook.

How did the board arrive at this outcome and what does it mean for Facebook and its more than 3 billion users around the world?

Facebook Oversight Board director Thomas Hughes spoke with Protocol for a live event shortly after the board's announcement. Here is a lightly edited transcript of that conversation.

I want to start by setting the table a little bit. As director of the Oversight Board, you're not personally one of the board members making these decisions, but you oversee the board, its processes and all that. So as something of a bystander, and as a human rights expert yourself, how do you feel about the decision the board made, or lack thereof?

I feel very good about the decision. I think it's a strong decision, and I think it's a decision that clearly prioritizes freedom of expression, and other human rights as well. And I think it's a decision that speaks to the need to empower users, and to make sure that their voice is heard.

The reason I think that is because, obviously, the board found that the suspension of former President Donald Trump was essentially necessary to keep people safe, that his actions did encourage and legitimize violence and that it was certainly a severe violation. But at the same time, the board rejected the indefinite nature of that suspension. As a penalty, it was … arbitrary, and it also reflected a failure on Facebook's side to apply rules that are clear, consistent and transparent.

This is a very important component of freedom of expression. Users must be able to understand what the penalties are and what the implications are. They must be empowered with knowledge of the implications of the decisions they take in terms of content, and they must know what will happen subsequently if they post different types of content.

As you pointed out, within six months, and I would stress it's within six months, the board said you must go back, and indefinite suspension is not proportionate. It is not consistent with international human rights and freedom of expression principles, and you've got to look at issues like the prospect of future harm, and the severity of the violation, and you need to apply a penalty.

But at the same time, what the board has said is, you need to think about what your community standards and your content moderation look like in the future. You need to consider that vague and arbitrary rules obviously have a potentially chilling effect on speech. You also need to take into consideration that individuals have the right to hear political speech, and that influential users and political leaders should be treated in an equal manner in terms of the application of community standards and the penalties they face. But that comes with the recognition that context is extremely important: Global leaders have a greater voice and therefore a bigger impact.

The recommendations very clearly say if you suspend a political leader or an influential user in the future for something that may be incitement to violence or discrimination or lawless action, all of which are well-recognized, legitimate restrictions around free expression, you need to apply a harmfulness test. You need to say when those risks have receded, that is the right moment to bring that user back onto the platform. That needs to be transparent, and it needs to be set against public rules, which all users can understand.

When the board agreed to take up the case in January, it said in its announcement, and I'm quoting here, that the board would "determine whether Mr Trump's suspension from access to Facebook and Instagram for an indefinite amount of time is overturned," and when I first read that, and I saw the decision, I thought: Well, the board didn't do that. It didn't overturn the decision. But then I reread it and realized that the language is actually pretty specific, and the board specified that it was going to rule on whether the indefinite suspension was overturned, which it kind of did. The board has said an indefinite suspension is not appropriate here, and it's now up to Facebook whether to make it permanent. So, did the board know all along when it took up this case that it was going to be ruling on the indefinite nature of the ban? And if that wasn't the board's intention the whole time, when did it become clear that this was going to be the way they ruled?

It's not possible for the board to preemptively game out what the outcome of a deliberation process will be. It involves a panel of five board members, and then it goes back to the full board, and there are comments to review. You can never say in advance what the final decision will look like.

At the very beginning, when we requested public comments, we listed out a number of sub-questions around what the specific issues relating to political leaders are, how to consider newsworthiness and so on. All of that was framed and understood from the start.

When you think about the protection of human rights, there are really two components to it. One is the substantive issues: How should the policy be applied? What is the right standard to apply in terms of community standards? And the other is procedural: What process is required to meet human rights obligations? The board has spoken to both of those issues.

Procedurally, this was arbitrary, and indefinite suspension is not consistent with international human rights standards. We acknowledge the decision that was taken on the day was appropriate for that moment. But the indefinite nature of it was not consistent with international human rights, and so it needs to be redone.

But at the same time, looking to the future, when you consider what the longer term looks like in this case, and also, very importantly, in other cases that come up in the future, including other influential users and political leaders elsewhere in the world, the decision has some very specific recommendations for how to consider those cases. You've got to consider what length of time is appropriate. If you're going to impose a penalty, you've got to consider what severity level would accompany it, and you've got to make that clear. If you're going to try to measure incitement to discrimination and violence, what is the test you will apply? There's a six-factor test spelled out in the decision as well, so all of that is quite specific and quite clear.

I'm interested a little bit more in how the board came to this decision. One challenge that's different here than with the Supreme Court is that you don't get to read a full dissent. The five-member panel makes its decision and circulates it to the rest of the members for a majority, but we don't really get to read the dissent. So I'm wondering: To what extent did the board's decision to put this back on Facebook have to do with an inability to get a majority consensus on whether to issue a permanent ban?

This particular decision is a consensus decision. And there are minority opinions written into the decision, so you can see where a minority of board members wanted to push in a slightly different direction; that's part of the deliberative process, that minority opinions can be reflected. And again, I'm happy to talk a little bit more in a second about some of those minority opinions.

The panel, and the board by extension, was quite clear on the outcome and these two tracks. The decision has one [track] looking at the procedural process and the human rights implications around the procedural issues and one looking at the substantive issues and the human rights implications around what kind of policies are applied.

One theme throughout the decision is that Facebook basically applied this policy in an ad hoc manner because it did not have a written policy around indefinite suspensions. When I read that, I understand the rationale: Users have to abide by Facebook's rules, so Facebook should have to explain its rules to users. It should go both ways. At the same time, I see how Facebook is this unprecedentedly huge platform with 3 billion users. They are, of course, facing new challenges for which they have not written rules: every day, many times a day, around the world, in many languages. Is it really reasonable to hold Facebook to a standard where they cannot adapt their rules in real time in situations where there is a grave threat at hand?

Yes, it's entirely reasonable, and the reason I say that is because the board is not saying to Facebook: "You can't evolve your community standards. You can't adapt them as new issues start to arise." What they're saying, particularly in this case, is that looking at the penalties and the standard applied in that particular moment in time, there was a lack of procedural fairness, transparency and consistency, in that there were rules available to Facebook at the time that should have been sufficient, and Facebook did not apply them.

The board is not necessarily forming an opinion on whether political, international or economic factors came into play. But the less transparency you have, and the less consistency you have around those rules, the more difficult it is to measure what the decision-making actually was. What were the drivers? Was it principles? Was it standards, or were there other factors at play? So it's very much in Facebook's interest to make clear that there is consistency and that it is about standards.

The other thing I would mention is that the board looks at the application of the community standards and also Facebook's values (this goes back to your earlier question about the deliberative process), but it also considers international human rights standards, which are an evolving body, as it were, of soft law.

Those are standards the board can draw from, and does draw from. And there's no reason why, if those standards exist in the offline world, Facebook cannot have community standards that apply them in the online world.

There are a lot of people out there, as I'm sure you know, who think the Oversight Board is a PR stunt or a sideshow and that even interviewing you or writing about this decision is legitimizing it in a way that lets Facebook off the hook. What do you say to that?

Obviously I've heard that perspective. I don't agree with it. I think the board has very strong structural independence, but also independence in terms of the individuals who have been appointed to it. We have a trust, which is separate from Facebook. Obviously Facebook has put money into that trust, but our trustees have a fiduciary responsibility. The board functions at arm's length. The board members are very prominent, and some of them have been very critical of Facebook and are very minded to continue to be in the future.

I would simply say look at the decisions the board has taken so far. I think they've been tough. They've held Facebook to account. I think it's unrealistic to expect that any single decision will be a moment of total change. This is going to be an incremental process. I sort of conceptualize this almost as a jigsaw puzzle. With each decision the board makes, the picture becomes clearer: where the standard lies, how it should be applied and so on. But it will take time. It is a very, very momentous and significant undertaking, and it won't be achieved in a single decision.

The other thing I would say is that, as we move forward into a slightly uncertain world of content regulation, the international standard for best practice, derived from other fields such as press regulation, is that if you can make independent self-regulation work, and it is truly independent and it functions, and I believe the Oversight Board does and is proving that now and will continue to prove it, then that is the most effective form of content regulation. The other pathways, whether statutory regulation, direct government regulation or no regulation at all, are, I would say from my perspective, undesirable. I think they will have very problematic outcomes.

So I believe that the Oversight Board is really a very important experiment, if I can use that word, a very important undertaking. I think it is demonstrating its value and its worth. In the future there will be, as I mentioned, a complex ecosystem. There'll be a place for different bodies, but there must be a place for some type of independent regulatory structure that deals with content on specific platforms, and I think that's what the Oversight Board speaks to.

I know the Oversight Board was structured to make room for other boards for other platforms. Have you been approached about anything like that so far?

We've not opened up discussions of that nature. Obviously Facebook and Instagram, as you know, are enormous platforms, and there are many, many challenges to be addressed. So, if that were to happen, that is something that would come later down the path.

There is no objective for the board to become a sort of uber-board covering multiple platforms; that would be rather unhealthy. So that's a discussion for later on.

Another big concern with the board is the amount of money that board members are paid. It's been reported that they're receiving six-figure salaries for what is effectively part-time work. The concern is that this could interfere with their independence, despite all the other lengths the board and Facebook have gone to in order to create that independence. Is this a conflict in your mind?

No, I don't think so. I mean, the board members are [paid] for their time on a basis that is comparable across the industry. The bylaws are very clear in terms of the independence of those board members. They cannot be removed by Facebook.

Facebook has no role in appointments once we get beyond the 14 board members. So there is a lot of structural, pretty meaningful independence built into the system. So no, I don't think that undermines the independence.

Reading this decision, I saw some frustration about questions that Facebook wouldn't answer. Familiar frustrations, I should add, as a tech reporter. Were you or the board members surprised by Facebook's unwillingness to answer questions posed by an oversight board of its own creation?

Yes. I mean, obviously, the board asked 46 questions. I believe seven received no answers and two only partial answers. The board has an expectation that when it asks a question, Facebook will answer it. It is important that Facebook does so in this case, but also moving forward. The board fully intends to keep pushing on those issues.

Some of the questions that the board was asking were around issues like: What is the impact of design decisions within Facebook in terms of amplification of certain types of content? How did that play out vis-à-vis the events of the sixth of January? And so on.

As you'll see in the recommendations, although Facebook didn't answer that question in advance, the board has come back to Facebook and said: Well, you didn't answer the question, so we're going to put a recommendation in this decision that you should publish this information.

We can track what happens with those recommendations. We have set up a working group with responsibility for following case decisions and Facebook's implementation of recommendations, and that will be part of our transparency reporting. We will come out very publicly and say what we think Facebook has done and achieved, and what it has not.

The board received more than 9,000 comments on this case, which is exponentially more than what you've gotten on other cases. Can you tell us a little about how the board went about reading and synthesizing all that feedback, and was any of it particularly influential?

The public comments process is, and I really can't stress this enough, very important. The number of comments varies significantly between cases; normally, I think, the average is in the double digits. This case was clearly, as you say, exponentially larger, and it created an enormous amount of work.

So, essentially, analysts and the board members made their way through those comments, and we have read every single one of them. There's a triage process to go through them, to categorize them and to see what can be extracted from the various comments, but they are all made available to the board members. All of them were read. I won't go into details about particular discussions that took place within the panel, but I can certainly say that many of the public comments and the issues they raised were influential. They were important for the panel.

What do you think this decision means for other global leaders?

The board is very clear that the recommendations are not specific to the Trump case. They are global in terms of what is being proposed. Really, what it means is that there are certain circumstances, specifically relating to incitement to violence, discrimination and lawless action, in which restrictions can be applied to an influential user, a political leader or a head of state. These circumstances are well-defined under international human rights standards; they are not areas that the Oversight Board has invented itself. And Facebook needs to assess the right level of restriction based on the severity of a particular violation under one of its community standards.

So the message to others around the world is that there are limitations. There are certain types of speech which, whether on Facebook, on other social media or even in the offline world, are not acceptable. And although the board has very clearly said the same community standards and the same penalties should be applied to a political leader as to any user, the board has also said that context is extremely important.

If you look at the six factors the board has outlined as a test for assessing those issues around incitement to violence and discrimination, they very clearly speak to the speaker and the impact their words will have on the audience. That needs to be taken into consideration. That needs to be part of the assessment.

And I think the other very important thing the board has said, and it has given Facebook a clear reminder here, is that Facebook has to be cognizant of, has to pay attention to, making sure that the voice of political opposition in many countries around the world is not silenced, so that there is no pretext or context in which the platform is abused or manipulated in order to shut down political opposition.

Let's say Facebook had issued a permanent suspension off the bat, so the question of an indefinite suspension was removed. Would you personally be comfortable with the Oversight Board's having the power to decide whether the permanent suspension of a former president sticks?

I don't want to speculate about specifics in this case or about what the outcome of Facebook's decision will be. Given the severity, or in a context where a violation is severe, there are certain penalties that Facebook has already spelled out in its community standards, and the board is not saying those penalties should change. One of those penalties is permanent deletion.

But it really depends on the severity. The board is also clearly saying that if it is a suspension, that has to be time-bound. It can't just be indefinite. It can't be open. It has to be very specific as to the length of time, and at the end of that period, what must happen is there needs to be an assessment of whether the harm has receded, or whether it is still there. And if the harm has not receded, the suspension has to be reinstated. And, again, for a time-bound period.

We said at the outset that you're in charge of administration of the Oversight Board. So what have you learned during this process — adjudicating such a high-profile case — about how the board works or should work?

First off, I'm just very, very impressed with our board members. It's a very internally focused comment, but this was a difficult case, and they've already handled a number of difficult cases. If you look at the ones that have been published on the website, they're all very significant in their own right, but this one, of course, has drawn global attention for obvious reasons.

I think in terms of what I've learned, or not so much learned but realized more clearly, it's that there are two key components to what the board is doing. The first is ensuring, from a rights perspective, that Facebook properly applies its procedures so that those procedures are fair and transparent. If they're not, that can have a very strong chilling effect on freedom of expression, and that really should not be underestimated. That is extremely important for this case, but also for all other cases, and I think it will be a theme that recurs throughout.

The other one is, of course, looking to the future and the recommendations that have been made: the harms test, the six factors within it, and the structural changes Facebook might want to introduce internally. Those are extremely important in this particular context of influential users and political leaders. They're going to be extremely important for creating more transparency, more predictability and more accountability in terms of what those individuals do and don't do on Facebook.

I don't think anyone can deny that greater transparency, accountability and predictability would be welcome across the board. This is an area that has caused great concern in many, many countries around the globe, and hopefully this will be part of the solution to the problem. I'm not saying it will solve the problem entirely, but it will be part of the solution.
