Protocol | Workplace

How Twitter hired tech's biggest critics to build ethical AI

Twitter's META team is made up of some of tech's most notorious critics, and two more will soon be joining them: Sarah Roberts and Kristian Lum.

Twitter's Ethical AI lead, Rumman Chowdhury

Rumman Chowdhury, the head of Twitter's META team, sees her job as finding ways to distribute power and authority, rather than collect it for herself.

Photo: Rumman Chowdhury

Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of the ethics research was unclear.

Font was the manager of Twitter's machine learning platforms teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.

So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't only belong in research. Over the course of a few months, after a litany of conversations with Dorsey and other senior leaders, Font secured more than just a more powerful, operationalized place for the once-small team. Alongside the budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside Twitter's products.

"I wanted to ensure that the very important research was having an impact on product, and was scaling. It was a very strategic next step for META that would allow us to take it to the next level," Font said. "We had strategy talks with Twitter staff, including Jack, and ultimately with the board. It was a very intense and fast process."

One year later, Twitter's commitment to Font's team has convinced even the most skeptical people in tech — the ethics research community itself. Rumman Chowdhury, notorious and beloved by her fellow researchers for her commitment to algorithmic auditing, announced that she would be leaving her new startup to become Twitter's META leader. Kristian Lum, a University of Pennsylvania professor renowned for her work building machine-learning models that could reshape criminal justice, will join Twitter at the end of June as its new head of research. And Sarah Roberts, famous for her critiques of tech companies and the co-director of the Center for Critical Internet Inquiry at UCLA, will become a consultant for the META team this summer, researching what Twitter users actually want from algorithmic transparency.

(If something about this team feels different, it's because all of its leaders are women, and four of them have Ph.D.s. Twitter has been on a massive hiring spree, and not just for META; the outcome has been proof that there is, in fact, no shortage of top talent with widely varying backgrounds in tech.)

These hires are a massive coup for a social media platform desperate to escape the waves of vitriol and criticism enveloping Google and Facebook's work around algorithms, machine learning and artificial intelligence. While Google was forcing out prominent AI ethicists and researchers Timnit Gebru and Margaret Mitchell and Facebook was trying and failing to persuade politicians and researchers that it did not have the power to manipulate the way algorithms amplified misinformation, Twitter was giving Font and Jutta Williams, the product manager in charge of helping operationalize META's work, the resources and leeway to hire a team of people who could actually act on Twitter's promise to listen to its researchers.

Font's "roadshow" happened before Gebru and Mitchell's very public dismissals — Chowdhury said she would join Twitter the same week Google forced Mitchell out — but that explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board of directors that ethical algorithms are worth spending money on.

Over the last year, the amplification of former President Donald Trump's social media posts via Facebook engagement algorithms drew widespread outrage from the left; Facebook's decision to very temporarily adapt those algorithms in response drew even sharper rebuke from the right. The spread of coronavirus misinformation followed a similar trajectory, while the nationwide conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness found a flashpoint in Google's Gebru. Her forced exit made the entire world pay attention to ethical AI.

"The ideological polarization … is also coming into responsible AI. We are being specifically targeted by names that I will not mention to you because then they will specifically come after me the way they have come after Timnit," Chowdhury said. "The very violent ideological divide is being pulled into our field."

The birth of META

Font wanted Chowdhury to run META from the beginning, but she thought there would be no way to persuade her. "We needed to get the right leader. I spent months doing this. I was OK that it took that long," Font said. "I wanted someone who was already established and well-respected, which, as you know, is not a community that is easy to please necessarily. This was a tricky quest."

But something about that first phone call made Chowdhury — who'd recently left her job as the senior principal for Responsible AI at Accenture to found her own startup — reconsider her future. "My goal was always to drive change in this industry. The industry is so young. I just want to see it succeed," she explained. If Twitter was actually serious about META, this job offer could be the chance she thought she might never have.

"I asked to talk to everybody. From leadership at Twitter down, I talked to everyone, from policy, from comms. It was absolutely critical to me that every single person who would be interacting with META was really on board. And I always left every interview so impressed. There was never any question of whether or not Twitter had the right kind of ethos," she said.

She took the job four months ago. Since then, in addition to the company's public commitment to its 2021 Responsible ML Initiative (which means Twitter will publicly share how it makes decisions about its algorithms, and how race and politics shape its ML), Twitter has already released an assessment of its image-cropping algorithm and removed the algorithm entirely based on the findings from the research.

Senior leadership said it would commit to Chowdhury's team, promising regular communication. They've been acting on that promise since before she arrived: Team members meet with Dorsey and his senior staff regularly to discuss progress, explain their work, secure additional resources and get buy-in from Dorsey on the research, education and changes they hope to implement.

"We present to Jack and his staff about every six weeks — we report our progress and where we are. They are most interested in learning what we've learned and how they can help. They actually really want to know — what did you learn, where are you going next — they very quickly want to help," Font said.

Williams, the product manager, was skeptical of Twitter's intentions when she agreed in 2020 to leave her job as the senior technical leader for privacy at Facebook and join the team. "It's incredibly disheartening as a very committed person, you go to a place and you think you're going to make a difference. I've had to make pivots and changes in my career because I bought into the hype," Williams said. "I was a bit disheartened about social media when Twitter told me, 'Please come and just talk to this team about this job.'"

Williams took the job, but she didn't give up on the idea that she might go back into health care privacy or nonprofit work: "I carried that healthy skepticism for quite some time."

The reality of change

Solving Twitter's problems means actually defining what users' "problems" are. "It's a lot easier to teach a model how to do something on behalf of people with their input," Williams explained. Roberts, who will be joining Twitter in early July, agreed to come on board to help answer precisely that question. She'll be given independence and latitude to help Twitter learn how to give people choice in usable ways. "We don't really know the answer to that," Williams said.

One of the few easily identifiable problems users had long vocalized was how Twitter's algorithm auto-cropped images, which many people felt favored lighter-skinned people and sexualized female bodies. Williams, Font and Chowdhury cited their work on that algorithm as an example of how they plan to run their team.

In their first publicly detailed research project since Chowdhury's start, META created a test to assess how the algorithm actually performed on a wide range of photos. They found a slight race-based bias, and though they could have dismissed the numbers as small, they decided instead to work with the engineers to help remove the algorithm entirely. Rather than conduct their work separately from the team that would be affected if changes were made to the algorithm, they worked alongside them, letting them know early in the process about the research project. And when their findings showed that change should happen, they helped create the plan to remove the algorithm in partnership with the engineers in question.

And after the algorithm was removed, META published both a press release explaining how they reached their conclusions and a scientific paper showing how they conducted their research.

"To be perfectly honest, people have no problem taking Jack to task on Twitter. And Congress is literally just following what they heard people say," Chowdhury said.

"That's why we just develop in the open now," Williams added.

Beyond user choice and public transparency, Chowdhury's goal is to create a system of rules and assessments that function like government over the models: a system that could prevent harms from occurring, rather than just address them after people are hurt.

The team centers the idea that machine-learning engineers don't have bad intentions; they often just lack an understanding of what they're capable of doing and how to go about governing their work in an ethical way. An ethical, holistic approach isn't necessarily taught in most artificial intelligence grad programs, and very few tech companies support ethicists, auditors, and researchers of Chowdhury's caliber with freedom and buy-in (see: Google's collapse of its own ethical AI work).

"Our engineers are looking for guidance and expertise. Things are actionable because they know we can do better, it's hard to know what to do differently unless you have a workflow," Font said. "People don't always know what it is they can do, even if they are smart and good-hearted."

What the META team doesn't have is serious enforcement power. They say they don't want it at the moment — "You can't really drive change through fear of enforcement, but for long-term investment in change you do much better by growing education," according to Williams — but at the end of the day, META is a knowledge-creating team, not a police force. While they can research and propose changes, they cannot necessarily force other teams to fall into line. Their work is democratic, not authoritarian.

"There's a life cycle to enacting change," Williams explained. "You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can't really drive change through fear."

"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education," Chowdhury said. "We're talking to people who wanted to do the right thing and didn't know how to do the right thing."

Chowdhury reads widely as a way of processing her thoughts — she cited countless books and papers during our conversation — and she sees herself creating a leadership style through a feminist lens. Rather than punish or control the people she works with, she defines leadership as finding ways to share resources and power, not keep them for herself. Seeking enforcement authority would run counter to that definition. "I worry very much about the consolidation of ruthless authority," she said.

Many of the researchers and leaders in the ethical machine-learning world believe that working inside a tech company and accepting a role as an adviser (rather than an enforcer) makes the work useless. That idea frustrated Chowdhury, Williams and Font, all of whom kept returning to the idea that you can't make real progress if you're forever apart from the industry you're critiquing. "Everyone outside the industry is pointing their fingers at you as if you are the problem. You are trying your best to do your job and do a good job and people are like, you are fundamentally unethical because you take a paycheck from them," Chowdhury said.

"But the goal of META is not to be this shining example of finger-pointing where we get to be the good guys while throwing our company under the bus," she added. "That's actually not very productive if our goal is to change the industry and drive the industry toward actionable positive output."

Anna Kramer

Anna Kramer is a reporter at Protocol (Twitter: @anna_c_kramer, email: akramer@protocol.com), where she writes about labor and workplace issues. Prior to joining the team, she covered tech and small business for the San Francisco Chronicle and privacy for Bloomberg Law. She is a recent graduate of Brown University, where she studied International Relations and Arabic and wrote her senior thesis about surveillance tools and technological development in the Middle East.