Facebook told employees to avoid the words 'discrimination' and 'bias'

"Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you."



The first rule of talking about bias on Facebook: Don't talk about bias on Facebook.

At least, don't use the word "bias" or the word "discrimination," or any of those pesky terms that have a funny way of landing companies in court.

That's according to an internal document circulated inside Facebook last fall called "How to talk about fairness." It teaches employees — with a certain late-night infomercial flair — which words they should, and definitely should not, use when talking about the company's efforts to make its platform more fair.

"Are you concerned about the fairness of your product and/or its system(s)? Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you," it reads.

The note, published in full below, is part of disclosures made to the SEC and provided to Congress in redacted form by whistleblower Frances Haugen's legal counsel. The note's stated goal was to help Facebook employees talk about their work and "avoid inappropriate / inaccurate legal terms." The unstated goal, however, seemed to be to prevent Facebook employees from blabbing about work that might get the company in legal trouble.

In a statement to Protocol, a spokesperson for Meta, Facebook's new name, said, "In an effort to streamline how our teams approach and discuss topics related to fairness, which is an important component of how we build our products and services, we commonly share resources and guidance, like this note from our Responsible AI team last year, across the company."

The note was posted around the time a reporter for MIT Tech Review was working on an extensive reporting project about the Responsible AI team, with Facebook's cooperation. It was also a little more than a year after Facebook settled a lawsuit with civil rights groups over how its platform enabled advertisers to discriminate in housing, job and financial services ads. The Department of Housing and Urban Development also filed a similar suit against Facebook just as the first suit was being settled.

Facebook has since faced ongoing accusations of bias and discrimination on the platform, particularly with regard to advertising. Just this week, the company said it would prohibit advertisers from targeting users based on their Facebook engagement with "sensitive" topics, including religion, health, politics and more.

It was in this environment that Facebook circulated its "guidelines for communicating about fairness analyses" last November. Under those guidelines, Facebook employees were instructed to "avoid legal terminology, including specific concepts like discrimination, disparate treatment and disparate impact, and more general terms such as illegal or violates." The list of off-limits terms also included the phrases "protected characteristics" and "skew."

These terms, the note's author wrote, "are legal terms with specific meanings and are therefore out of scope for what can be learned through fairness analysis."

The guidelines instructed employees instead to opt for terms like "fairness," "inclusivity" and "relevant groups" when describing their work, while acknowledging that "fairness" is a "context-specific term with no singular definition."

Rather than talking simply about bias, the guidelines also encouraged Facebook staff to talk about "implementation bias," "model bias" and "label bias," all of which refer to imbalances in how Facebook's AI systems were built, rather than actual discrimination that results from those systems.

The document warns employees that they should get legal approval before making "absolute statements regarding the existence of unfairness or bias or discrimination or appropriate measurement or mitigation strategies." And it tells employees to take special care when referring to race and ethnicity measurements, because while the company doesn't collect users' race data, it does use zip code data as a proxy to study its impact on users of different races.

"The recently developed Zip Code/census approach measures geography, not people's demographic characteristics," it reads. "This critical context must be clear on the face of any document that refers to such measurement."

The guidelines illustrate how Facebook has sought to both study and mitigate algorithmic bias, while also avoiding incriminating itself with those findings. They also show how deeply strategic Facebook's decision to talk about these issues was, months before the company ultimately debuted some of its fairness tools in March.

According to other documents in Haugen's disclosures, the guidelines came along at a time when Facebook was trying to reclaim the public narrative about algorithmic bias on the platform. Another internal document, dated just weeks before the guidelines were posted, describes how Google, Microsoft and Amazon have publicly discussed their own efforts to make their platforms more fair. In comparison, that document points out, Facebook had "no online presence concerning AI Fairness" at the time.

"Once we have finalized our approach to AI Fairness and our legal team deems us ready to communicate about it externally," that document reads, "there will be an opportunity to reshape public opinion of Facebook's stance on AI Fairness, currently dominated by negative press."

