Google AI lead Jeff Dean is in hot water with members of Timnit Gebru's team at Google, who published a letter Monday challenging his account of Gebru's departure and calling the research paper review process discriminatory. Gebru's forced departure stemmed from a conflict over a research paper about ethical questions raised by large language models (a big research focus at Google Brain): Google wanted Gebru to take her name off the paper, Gebru threatened to resign, and Google took that opportunity to remove her from the team.
- The biggest revelation from the letter is that almost half of all research papers submitted for approval through Google's internal process are submitted with a day or less of notice to reviewers, despite Dean's claim that the process requires two weeks. He used that claim to justify why Google tried to force Gebru to retract the paper or take her name off of it, and this letter seems to debunk that main defense.
- The letter also states that Gebru was fired and did not actually resign, despite Dean's insistence on calling it a resignation in his email to the team. Her team has coined a new word for what happened: "resignated," according to her former colleague and co-lead Margaret Mitchell.
More than 1,500 people at Google, including senior engineers and team leaders, have signed a petition supporting Gebru and calling for changes to Google's paper review process. Gebru's boss, Samy Bengio, said in a post that he was "stunned" by her firing. Others on her team and in the broader AI ethics community have written about their disappointment with Dean specifically, who has been a hero for many in the field. Google did not immediately respond to a request for comment.
The pushback against Google and Dean specifically has reached a pretty unprecedented level — and I seriously try to avoid using the word unprecedented. Rather than dying out over the weekend, the anger has escalated. As a number of folks in the industry told me last week, the reputational damage to Google and Dean could be long-lasting, even after the media circus dies down. Research integrity is paramount for anyone who is remotely serious about AI, and that integrity is now a big question mark for anyone looking at Google research.
Shoot me an email at akramer@protocol.com if you work at Google or in the AI ethics space and would like to talk, or if you'd like my Signal. A version of this article will appear in Tuesday's Source Code newsletter, which you can find here.