Policy

Facebook told employees to avoid the words 'discrimination' and 'bias'

"Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you."



Protocol

The first rule of talking about bias on Facebook: Don't talk about bias on Facebook.

At least, don't use the word "bias" or the word "discrimination," or any of those pesky terms that have a funny way of landing companies in court.

That's according to an internal document circulated inside Facebook last fall called "How to talk about fairness." It teaches employees — with a certain late-night infomercial flair — which words they should use, and which they definitely should not, when talking about the company's efforts to make its platform more fair.

"Are you concerned about the fairness of your product and/or its system(s)? Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you," it reads.

The note, published in full below, is part of disclosures made to the SEC and provided to Congress in redacted form by whistleblower Frances Haugen's legal counsel. The note's stated goal was to help Facebook employees talk about their work and "avoid inappropriate / inaccurate legal terms." The unstated goal, however, seemed to be to prevent Facebook employees from blabbing about work that might get the company in legal trouble.

In a statement to Protocol, a spokesperson for Meta, Facebook's new name, said, "In an effort to streamline how our teams approach and discuss topics related to fairness, which is an important component of how we build our products and services, we commonly share resources and guidance, like this note from our Responsible AI team last year, across the company."

The note was posted around the time a reporter for MIT Tech Review was working on an extensive reporting project about the Responsible AI team, with Facebook's cooperation. It was also a little more than a year after Facebook settled a lawsuit with civil rights groups over how its platform enabled advertisers to discriminate in housing, job and financial services ads. The Department of Housing and Urban Development also filed a similar suit against Facebook just as the first suit was being settled.

Facebook has since faced ongoing accusations of bias and discrimination on the platform, particularly with regard to advertising. Just this week, the company said it would prohibit advertisers from targeting users based on their Facebook engagement with "sensitive" topics, including religion, health, politics and more.

It was in this environment that Facebook circulated its "guidelines for communicating about fairness analyses" last November. Under those guidelines, Facebook employees were instructed to "avoid legal terminology, including specific concepts like discrimination, disparate treatment and disparate impact, and more general terms such as illegal or violates." The list of off-limits terms also included the phrases "protected characteristics" and "skew."

These terms, the note's author wrote, "are legal terms with specific meanings and are therefore out of scope for what can be learned through fairness analysis."

The guidelines instructed employees instead to opt for terms like "fairness," "inclusivity" and "relevant groups" when describing their work, while acknowledging that "fairness" is a "context-specific term with no singular definition."

Rather than talking simply about bias, the guidelines also encouraged Facebook staff to talk about "implementation bias," "model bias" and "label bias," all of which refer to imbalances in how Facebook's AI systems were built, rather than actual discrimination that results from those systems.

The document warns employees that they should get legal approval before making "absolute statements regarding the existence of unfairness or bias or discrimination or appropriate measurement or mitigation strategies." And it tells employees to take special care when referring to race and ethnicity measurements, because while the company doesn't collect users' race data, it does use zip code data as a proxy to study its impact on users of different races.

"The recently developed Zip Code/census approach measures geography, not people's demographic characteristics," it reads. "This critical context must be clear on the face of any document that refers to such measurement."
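The documents don't include any technical detail beyond that description, but the geographic-proxy idea can be sketched roughly: outcomes are tallied by zip code, then weighted by each zip code's census demographic composition, so no individual's race is ever recorded. Everything below — the function, the group names, the data — is a hypothetical illustration, not Facebook's actual methodology.

```python
def proxy_rate_by_group(records, zip_demographics):
    """Estimate per-group outcome rates from geographic proxies.

    records: list of (zip_code, outcome) pairs, outcome in {0, 1}
    zip_demographics: {zip_code: {group_name: population_share}}
    Returns {group_name: estimated outcome rate}.
    """
    weighted_outcomes = {}  # group -> sum of share * outcome
    weights = {}            # group -> sum of shares
    for zip_code, outcome in records:
        for group, share in zip_demographics.get(zip_code, {}).items():
            weighted_outcomes[group] = weighted_outcomes.get(group, 0.0) + share * outcome
            weights[group] = weights.get(group, 0.0) + share
    return {g: weighted_outcomes[g] / weights[g] for g in weights}

# Toy data: two zip codes with different demographic mixes.
demo = {
    "10001": {"group_a": 0.7, "group_b": 0.3},
    "10002": {"group_a": 0.2, "group_b": 0.8},
}
records = [("10001", 1), ("10001", 0), ("10002", 1), ("10002", 1)]
rates = proxy_rate_by_group(records, demo)
```

As the note itself cautions, this measures geography, not people: a group's estimated rate is an average over places, which is exactly why the document insists that "critical context must be clear" in anything that cites such numbers.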

The guidelines illustrate how Facebook has sought to both study and mitigate algorithmic bias, while also avoiding incriminating itself with those findings. They also show how deeply strategic Facebook's decision to talk about these issues was, months before the company ultimately debuted some of its fairness tools in March.

According to other documents in Haugen's disclosures, the guidelines came along at a time when Facebook was trying to reclaim the public narrative about algorithmic bias on the platform. Another internal document, dated just weeks before the guidelines were posted, discusses how Google, Microsoft and Amazon have publicly discussed their own efforts to make their platforms more fair. In comparison, that document points out, Facebook had "no online presence concerning AI Fairness" at the time.

"Once we have finalized our approach to AI Fairness and our legal team deems us ready to communicate about it externally," that document reads, "there will be an opportunity to reshape public opinion of Facebook's stance on AI Fairness, currently dominated by negative press."

