In her resignation letter Wednesday, Google ethical AI researcher Alex Hanna accused the company of having deep "rot" in its internal culture, "maintain[ing] white supremacy behind the veneer of race-neutrality" and being a workplace where those with "little interest in mitigating the worst harms" of its products are promoted at "lightning speed."
"I am quitting because I’m tired," Hanna wrote, announcing that she is joining the Distributed Artificial Intelligence Research Institute, the research organization recently founded by Timnit Gebru, the prominent AI ethicist who previously co-led Google's ethical AI team. Gebru was fired from the company in late 2020 following a dispute over a research paper she co-authored on the risks of large language models. Dylan Baker, a software engineer who also resigned Wednesday, is joining Gebru's institute alongside Hanna.
"When I joined Google, I was cautiously optimistic about the promise of making the world better with technology. I’m a lot less techno-solutionist now," Baker wrote in a separate letter, describing the "cognitive dissonance" of working at a company where full-time employees and contract workers received starkly different benefits. "I understand in vivid detail how far Google leadership will go to feel like they’re protecting their precious bottom line," Baker wrote.
In her Medium post, Hanna described her own experience at Google and the tensions inherent in a job that required finding flaws in the company's products. "I could describe, at length, my own experiences, being in rooms when higher-level managers yelled defensively at my colleagues and me when we pointed out the very direct harm that their products were causing to a marginalized population," Hanna wrote. "I could rehash how Google management promotes, at lightning speed, people who have little interest in mitigating the worst harms of sociotechnical systems, compared to people who put their careers on the line to prevent those harms."
She also cited a town hall meeting where a Google executive noted that there were so few Black women working in the research organization that it would be impossible to share survey results specific to the group without revealing their identities. "These data points are sad," Hanna wrote.
The problem, she argued, stems from a much broader issue in the tech industry: "a whiteness problem," as Hanna put it. She urged researchers and critics to study tech companies as "racialized organization[s]" and encouraged tech employees to "continue to complain" about what they see and experience.
“We appreciate Alex and Dylan’s contributions — our research on responsible AI is incredibly important, and we’re continuing to expand our work in this area in keeping with our AI Principles. We’re also committed to building a company where people of different views, backgrounds and experiences can do their best work and show up for one another,” Google told Protocol in a comment.
Hanna's account echoes much of what Gebru and her former co-lead on the ethical AI team, Margaret Mitchell, have shared about Google's internal culture. Google previously conducted an internal investigation into Gebru's departure and made changes to some of its internal policies in response.
This story was updated with a comment from Google.