The first rule of talking about bias on Facebook: Don't talk about bias on Facebook.
At least, don't use the word "bias" or the word "discrimination," or any of those pesky terms that have a funny way of landing companies in court.
That's according to an internal document circulated inside Facebook last fall called, "How to talk about fairness." With a certain late-night infomercial flair, it teaches employees which words they should use, and which they should definitely avoid, when talking about the company's efforts to make its platform more fair.
"Are you concerned about the fairness of your product and/or its system(s)? Do you use terms like algorithmic bias, discrimination, and disparate impact without fully understanding their meaning? If yes, this note is for you," it reads.
The note, published in full below, is part of disclosures made to the SEC and provided to Congress in redacted form by whistleblower Frances Haugen's legal counsel. The note's stated goal was to help Facebook employees talk about their work and "avoid inappropriate / inaccurate legal terms." The unstated goal, however, seemed to be to prevent Facebook employees from blabbing about work that might get the company in legal trouble.
In a statement to Protocol, a spokesperson for Meta, Facebook's new name, said, "In an effort to streamline how our teams approach and discuss topics related to fairness, which is an important component of how we build our products and services, we commonly share resources and guidance, like this note from our Responsible AI team last year, across the company."
The note was posted around the time a reporter for MIT Tech Review was working on an extensive reporting project about the Responsible AI team, with Facebook's cooperation. It was also a little more than a year after Facebook settled a lawsuit with civil rights groups over how its platform enabled advertisers to discriminate in housing, job and financial services ads. The Department of Housing and Urban Development also filed a similar suit against Facebook just as the first suit was being settled.
Facebook has since faced ongoing accusations of bias and discrimination on the platform, particularly with regard to advertising. Just this week, the company said it would prohibit advertisers from targeting users based on their Facebook engagement with "sensitive" topics, including religion, health, politics and more.
It was in this environment that Facebook circulated its "guidelines for communicating about fairness analyses" last November. Under those guidelines, Facebook employees were instructed to "avoid legal terminology, including specific concepts like discrimination, disparate treatment and disparate impact, and more general terms such as illegal or violates." The list of off-limits terms also included "protected characteristics" and "skew."
These terms, the note's author wrote, "are legal terms with specific meanings and are therefore out of scope for what can be learned through fairness analysis."
The guidelines instructed employees instead to opt for terms like "fairness," "inclusivity" and "relevant groups" when describing their work, while acknowledging that "fairness" is a "context-specific term with no singular definition."
Rather than talking simply about bias, the guidelines also encouraged Facebook staff to talk about "implementation bias," "model bias" and "label bias," all of which refer to imbalances in how Facebook's AI systems were built rather than to actual discrimination that results from those systems.
The document warns employees that they should get legal approval before making "absolute statements regarding the existence of unfairness or bias or discrimination or appropriate measurement or mitigation strategies." And it tells employees to take special care when referring to race and ethnicity measurements, because while the company doesn't collect users' race data, it does use zip code data as a proxy to study the platform's impact on users of different races.
"The recently developed Zip Code/census approach measures geography, not people's demographic characteristics," it reads. "This critical context must be clear on the face of any document that refers to such measurement."
The guidelines illustrate how Facebook has sought to both study and mitigate algorithmic bias while avoiding incriminating itself with those findings. They also show just how strategic Facebook's decision to talk about these issues was, coming months before the company ultimately debuted some of its fairness tools in March.
According to other documents in Haugen's disclosures, the guidelines came along at a time when Facebook was trying to reclaim the public narrative about algorithmic bias on the platform. Another internal document, dated just weeks before the guidelines were posted, discusses how Google, Microsoft and Amazon have publicly discussed their own efforts to make their platforms more fair. In comparison, that document points out, Facebook had "no online presence concerning AI Fairness" at the time.
"Once we have finalized our approach to AI Fairness and our legal team deems us ready to communicate about it externally," that document reads, "there will be an opportunity to reshape public opinion of Facebook's stance on AI Fairness, currently dominated by negative press."