Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of the ethics research was unclear.
Font was the manager of Twitter's machine-learning platform teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.
So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't only belong in research. Over the course of a few months, after a litany of conversations with Dorsey and other senior leaders, Font had secured not just a more powerful, operationalized place for the once-small team. Alongside the budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside Twitter's products.
"I wanted to ensure that the very important research was having an impact on product, and was scaling. It was a very strategic next step for META that would allow us to take it to the next level," Font said. "We had strategy talks with Twitter staff, including Jack, and ultimately with the board. It was a very intense and fast process."
One year later, Twitter's commitment to Font's team has convinced even the most skeptical people in tech — the ethics research community itself. Rumman Chowdhury, respected and beloved by her fellow researchers for her commitment to algorithmic auditing, announced that she would be leaving her new startup to become Twitter's META leader. Kristian Lum, a University of Pennsylvania professor renowned for her work building machine-learning models that could reshape criminal justice, will join Twitter at the end of June as its new head of research. And Sarah Roberts, famous for her critiques of tech companies and the co-director of the Center for Critical Internet Inquiry at UCLA, will become a consultant for the META team this summer, researching what Twitter users actually want from algorithmic transparency.
(If something about this team feels different, it's because all of its leaders are women, and four of them have Ph.D.s. Twitter has been on a massive hiring spree, not just for META, and the outcome has been proof that actually, there is no shortage of top talent with widely varying backgrounds in tech.)
These hires are a massive coup for a social media platform desperate to escape the waves of vitriol and criticism enveloping Google and Facebook's work around algorithms, machine learning and artificial intelligence. While Google was forcing out prominent AI ethicists and researchers Timnit Gebru and Margaret Mitchell and Facebook was trying and failing to persuade politicians and researchers that it did not have the power to manipulate the way algorithms amplified misinformation, Twitter was giving Font and Jutta Williams, the product manager in charge of helping operationalize META's work, the resources and leeway to hire a team of people who could actually act on Twitter's promise to listen to its researchers.
Font's "roadshow" happened before Gebru and Mitchell's very public dismissals — Chowdhury said she would join Twitter the same week Google forced Mitchell out — but that explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board of directors that ethical algorithms are worth spending money on.
Over the last year, the amplification of former President Donald Trump's social media posts via Facebook engagement algorithms drew widespread outrage from the left; Facebook's decision to very temporarily adapt those algorithms in response drew even sharper rebuke from the right. The spread of coronavirus misinformation followed a similar trajectory, while the nationwide conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness found a flashpoint in Google's Gebru. Her forced exit made the entire world pay attention to ethical AI.
"The ideological polarization … is also coming into responsible AI. We are being specifically targeted by names that I will not mention to you because then they will specifically come after me the way they have come after Timnit," Chowdhury said. "The very violent ideological divide is being pulled into our field."
The birth of META
Font wanted Chowdhury to run META from the beginning, but she thought there would be no way to persuade her. "We needed to get the right leader. I spent months doing this. I was OK that it took that long," Font said. "I wanted someone who was already established and well-respected, which, as you know, is not a community that is easy to please necessarily. This was a tricky quest."
But something about that first phone call made Chowdhury — who'd recently left her job as the senior principal for Responsible AI at Accenture to found her own startup — reconsider her future. "My goal was always to drive change in this industry. The industry is so young. I just want to see it succeed," she explained. If Twitter was actually serious about META, this job offer could be the chance she thought she might never have.
"I asked to talk to everybody. From leadership at Twitter down, I talked to everyone, from policy, from comms. It was absolutely critical to me that every single person who would be interacting with META was really on board. And I always left every interview so impressed. There was never any question of whether or not Twitter had the right kind of ethos," she said.
She took the job four months ago. Since then, in addition to the company's public commitment to its 2021 Responsible ML Initiative (which means Twitter will publicly share how it makes decisions about its algorithms, and how race and politics shape its ML), Twitter has already released an assessment of its image-cropping algorithm and removed the algorithm entirely based on the findings from the research.
Senior leadership said it would commit to Chowdhury's team, promising regular communication. They've been acting on that promise since before she arrived: Team members meet with Dorsey and his senior staff regularly to discuss progress, explain their work, secure additional resources and get buy-in from Dorsey on the research, education and changes they hope to implement.
"We present to Jack and his staff about every six weeks — we report our progress and where we are. They are most interested in learning what we've learned and how they can help. They actually really want to know — what did you learn, where are you going next — they very quickly want to help," Font said.
Williams, the product manager, was skeptical of Twitter's intentions when she agreed in 2020 to leave her job as the senior technical leader for privacy at Facebook and join the team. "It's incredibly disheartening as a very committed person, you go to a place and you think you're going to make a difference. I've had to make pivots and changes in my career because I bought into the hype," Williams said. "I was a bit disheartened about social media when Twitter told me, 'Please come and just talk to this team about this job.'"
Williams took the job, but she didn't give up on the idea that she might go back into health care privacy or nonprofit work: "I carried that healthy skepticism for quite some time."
The reality of change
Solving Twitter's problems means actually defining what users' "problems" are. "It's a lot easier to teach a model how to do something on behalf of people with their input," Williams explained. Roberts, who will be joining Twitter in early July, agreed to come on board to help answer precisely that question. She'll be given independence and latitude to help Twitter learn how to give people choice in usable ways. "We don't really know the answer to that," Williams said.
One of the few easily identifiable problems users had long vocalized was Twitter's image auto-cropping algorithm, which many people felt cropped uploaded images in a way that favored lighter-skinned people and sexualized female bodies. Williams, Font and Chowdhury cited their work on that algorithm as an example of how they plan to run their team.
In their first publicly detailed research project since Chowdhury's start, META created a test to assess how the algorithms actually performed on a wide range of photos. They found a slight race-based bias, and though they could have dismissed the numbers as small, they decided instead to work with the engineers to help remove the algorithm entirely. Rather than conduct their work separately from the team that would be affected if changes were made to the algorithm, they worked alongside them, letting them know early in the process about the research project. And when their findings showed that change should happen, they helped create the plan to remove the algorithm in partnership with the engineers in question.
"To be perfectly honest, people have no problem taking Jack to task on Twitter. And Congress is literally just following what they heard people say," Chowdhury said.
"That's why we just develop in the open now," Williams added.
Beyond user choice and public transparency, Chowdhury's goal is to create a system of rules and assessments that functions as a form of governance over the models: a system that could prevent harms from occurring, rather than just address the causes after people are hurt.
The team operates on the premise that machine-learning engineers don't have bad intentions; they often just lack an understanding of what they're capable of doing and how to govern their work in an ethical way. An ethical, holistic approach isn't necessarily taught in most artificial intelligence grad programs, and very few tech companies support ethicists, auditors, and researchers of Chowdhury's caliber with freedom and buy-in (see: the collapse of Google's own ethical AI work).
"Our engineers are looking for guidance and expertise. Things are actionable because they know we can do better; it's hard to know what to do differently unless you have a workflow," Font said. "People don't always know what it is they can do, even if they are smart and good-hearted."
What the META team doesn't have is serious enforcement power. They say they don't want it at the moment — "You can't really drive change through fear of enforcement, but for long-term investment in change you do much better by growing education," according to Williams — but at the end of the day, META is a knowledge-creating team, not a police force. While they can research and propose changes, they cannot necessarily force other teams to fall into line. Their work is democratic, not authoritarian.
"There's a life cycle to enacting change," Williams explained. "You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can't really drive change through fear."
"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education," Chowdhury said. "We're talking to people who wanted to do the right thing and didn't know how to do the right thing."
Chowdhury reads widely as a way of processing her thoughts — she cited countless books and papers during our conversation — and she sees herself creating a leadership style through a feminist lens. Rather than punishing or controlling the people she works with, she defines leadership as finding ways to share resources and power, not keeping them for herself. Seeking enforcement authority would run counter to that definition of leadership. "I worry very much about the consolidation of ruthless authority," she said.
Many of the researchers and leaders in the ethical machine-learning worlds believe that working inside a tech company and accepting a role as an adviser (rather than an enforcer) makes the work useless. That idea frustrated Chowdhury, Williams and Font, all of whom kept returning to the idea that you can't make real progress if you're forever apart from the industry you're critiquing. "Everyone outside the industry is pointing their fingers at you as if you are the problem. You are trying your best to do your job and do a good job and people are like, you are fundamentally unethical because you take a paycheck from them," Chowdhury said.
"But the goal of META is not to be this shining example of finger-pointing where we get to be the good guys while throwing our company under the bus," she added. "That's actually not very productive if our goal is to change the industry and drive the industry toward actionable positive output."