Protocol | Workplace

How Twitter hired tech's biggest critics to build ethical AI

Twitter's META team is made up of some of tech's most notorious critics, and two more will soon be joining them: Sarah Roberts and Kristian Lum.

Rumman Chowdhury, the head of Twitter's META team, sees her job as finding ways to distribute power and authority rather than collect it for herself.

Photo: Rumman Chowdhury

Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. To Font, the fate of that ethics research was unclear.

Font was the manager of Twitter's machine learning platforms teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.

So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't only belong in research. Over the course of a few months, after a string of conversations with Dorsey and other senior leaders, Font secured not just a more powerful, operationalized place for the once-small team. Alongside the budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside of Twitter's products.

"I wanted to ensure that the very important research was having an impact on product, and was scaling. It was a very strategic next step for META that would allow us to take it to the next level," Font said. "We had strategy talks with Twitter staff, including Jack, and ultimately with the board. It was a very intense and fast process."

One year later, Twitter's commitment to Font's team has convinced even the most skeptical people in tech — the ethics research community itself. Rumman Chowdhury, respected and beloved among her fellow researchers for her commitment to algorithmic auditing, announced that she would be leaving her new startup to become Twitter's META leader. Kristian Lum, a University of Pennsylvania professor renowned for her work building machine-learning models that could reshape criminal justice, will join Twitter at the end of June as its new head of research. And Sarah Roberts, famous for her critiques of tech companies and co-director of the Center for Critical Internet Inquiry at UCLA, will become a consultant for the META team this summer, researching what Twitter users actually want from algorithmic transparency.

(If something about this team feels different, it's because all of its leaders are women, and four of them have Ph.D.s. Twitter has been on a massive hiring spree, and not just for META; the outcome is proof that there is, in fact, no shortage of top talent with widely varying backgrounds in tech.)

These hires are a massive coup for a social media platform desperate to escape the waves of vitriol and criticism enveloping Google and Facebook's work around algorithms, machine learning and artificial intelligence. While Google was forcing out prominent AI ethicists and researchers Timnit Gebru and Margaret Mitchell and Facebook was trying and failing to persuade politicians and researchers that it did not have the power to manipulate the way algorithms amplified misinformation, Twitter was giving Font and Jutta Williams, the product manager in charge of helping operationalize META's work, the resources and leeway to hire a team of people who could actually act on Twitter's promise to listen to its researchers.

Font's "roadshow" happened before Gebru and Mitchell's very public dismissals — Chowdhury said she would join Twitter the same week Google forced Mitchell out — but that explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board of directors that ethical algorithms are worth spending money on.

Over the last year, the amplification of former President Donald Trump's social media posts via Facebook engagement algorithms drew widespread outrage from the left; Facebook's decision to very temporarily adapt those algorithms in response drew even sharper rebuke from the right. The spread of coronavirus misinformation followed a similar trajectory, while the nationwide conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness found a flashpoint in Google's Gebru. Her forced exit made the entire world pay attention to ethical AI.

"The ideological polarization … is also coming into responsible AI. We are being specifically targeted by names that I will not mention to you because then they will specifically come after me the way they have come after Timnit," Chowdhury said. "The very violent ideological divide is being pulled into our field."

The birth of META

Font wanted Chowdhury to run META from the beginning, but she thought there would be no way to persuade her. "We needed to get the right leader. I spent months doing this. I was OK that it took that long," Font said. "I wanted someone who was already established and well-respected, which, as you know, is not a community that is easy to please necessarily. This was a tricky quest."

But something about that first phone call made Chowdhury — who'd recently left her job as the senior principal for Responsible AI at Accenture to found her own startup — reconsider her future. "My goal was always to drive change in this industry. The industry is so young. I just want to see it succeed," she explained. If Twitter was actually serious about META, this job offer could be the chance she thought she might never have.

"I asked to talk to everybody. From leadership at Twitter down, I talked to everyone, from policy, from comms. It was absolutely critical to me that every single person who would be interacting with META was really on board. And I always left every interview so impressed. There was never any question of whether or not Twitter had the right kind of ethos," she said.

She took the job four months ago. Since then, in addition to the company's public commitment to its 2021 Responsible ML Initiative (which means Twitter will publicly share how it makes decisions about its algorithms, and how race and politics shape its ML), Twitter has already released an assessment of its image-cropping algorithm and removed the algorithm entirely based on the findings from the research.

Senior leadership said it would commit to Chowdhury's team, promising regular communication. They've been acting on that promise since before she arrived: Team members meet with Dorsey and his senior staff regularly to discuss progress, explain their work, secure additional resources and get buy-in from Dorsey on the research, education and changes they hope to implement.

"We present to Jack and his staff about every six weeks — we report our progress and where we are. They are most interested in learning what we've learned and how they can help. They actually really want to know — what did you learn, where are you going next — they very quickly want to help," Font said.

Williams, the program manager, was skeptical of Twitter's intentions when she agreed in 2020 to leave her job as the senior technical leader for privacy at Facebook and join the team. "It's incredibly disheartening as a very committed person, you go to a place and you think you're going to make a difference. I've had to make pivots and changes in my career because I bought into the hype," Williams said. "I was a bit disheartened about social media when Twitter told me, 'Please come and just talk to this team about this job.'"

Williams took the job, but she didn't give up on the idea that she might go back into health care privacy or nonprofit work: "I carried that healthy skepticism for quite some time."

The reality of change

Solving Twitter's problems means actually defining what users' "problems" are. "It's a lot easier to teach a model how to do something on behalf of people with their input," Williams explained. Roberts, who will be joining Twitter in early July, agreed to come on board to help answer precisely that question. She'll be given independence and latitude to help Twitter learn how to give people choice in usable ways. "We don't really know the answer to that," Williams said.

One of the few easily identifiable problems users had long vocalized was Twitter's image auto-cropping algorithm, which many people felt favored lighter-skinned people and sexualized female bodies. Williams, Font and Chowdhury cited their work on that algorithm as an example of how they plan to run their team.

In their first publicly detailed research project since Chowdhury's start, META created a test to assess how the algorithms actually performed on a wide range of photos. They found a slight race-based bias, and though they could have dismissed the numbers as small, they decided instead to work with the engineers to help remove the algorithm entirely. Rather than conduct their work separately from the team that would be affected if changes were made to the algorithm, they worked alongside them, letting them know early in the process about the research project. And when their findings showed that change should happen, they helped create the plan to remove the algorithm in partnership with the engineers in question.
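The paired-image test described above can be sketched as a simple parity check. The following is a hypothetical illustration only, not Twitter's published methodology: the `crop_focus` stub, the `saliency_winner` field and the 0.5 parity baseline are all assumptions standing in for a real saliency model and labeled photo pairs.

```python
# Hypothetical sketch of a paired-image bias test for a cropping model.
# A real audit would call a saliency model; here the model call is
# stubbed out with a placeholder field for illustration.

from collections import Counter

def crop_focus(image):
    """Stand-in for a saliency model: returns which subject in a
    paired image the cropper centers on ("A" or "B")."""
    return image["saliency_winner"]  # placeholder for a model inference call

def favor_rate(paired_images):
    """Each image pairs one subject from group A (e.g. lighter-skinned)
    with one from group B. Parity would be ~0.5; a sustained skew in
    either direction suggests demographic bias in the cropper."""
    picks = Counter(crop_focus(img) for img in paired_images)
    total = picks["A"] + picks["B"]
    return picks["A"] / total if total else 0.5

# Toy data: the cropper centers on subject A in 4 of 6 paired photos.
sample = [{"saliency_winner": w} for w in "AABABA"]
print(f"cropper favors group A in {favor_rate(sample):.0%} of pairs")
# prints: cropper favors group A in 67% of pairs
```

A production audit would also apply a statistical significance test over many photo pairs rather than a raw rate, which is how a "slight" skew like the one META reported can be distinguished from noise.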

And after the algorithm was removed, META published both a press release explaining how they reached their conclusions and a scientific paper showing how they conducted their research.

"To be perfectly honest, people have no problem taking Jack to task on Twitter. And Congress is literally just following what they heard people say," Chowdhury said.

"That's why we just develop in the open now," Williams added.

Beyond user choice and public transparency, Chowdhury's goal is to create a system of rules and assessments that function like government over the models: a system that could prevent harms from occurring, rather than just address the causes after people are hurt.

The team centers the idea that machine-learning engineers don't have bad intentions; they often just lack an understanding of what they're capable of doing and how to go about governing their work in an ethical way. An ethical, holistic approach isn't necessarily taught in most artificial intelligence grad programs, and very few tech companies support ethicists, auditors, and researchers of Chowdhury's caliber with freedom and buy-in (see: Google's collapse of its own ethical AI work).

"Our engineers are looking for guidance and expertise. Things are actionable because they know we can do better; it's hard to know what to do differently unless you have a workflow," Font said. "People don't always know what it is they can do, even if they are smart and good-hearted."

What the META team doesn't have is serious enforcement power. They say they don't want it at the moment — "You can't really drive change through fear of enforcement, but for long-term investment in change you do much better by growing education," according to Williams — but at the end of the day, META is a knowledge-creating team, not a police force. While they can research and propose changes, they cannot necessarily force other teams to fall into line. Their work is democratic, not authoritarian.

"There's a life cycle to enacting change," Williams explained. "You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can't really drive change through fear."

"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education," Chowdhury said. "We're talking to people who wanted to do the right thing and didn't know how to do the right thing."

Chowdhury reads widely as a way of processing her thoughts — she cited countless books and papers during our conversation — and she sees herself creating a leadership style through a feminist lens. Rather than punishing or controlling the people she works with, she defines leadership as finding ways to share resources and power rather than keeping them for herself. Seeking enforcement authority would run counter to that definition. "I worry very much about the consolidation of ruthless authority," she said.

Many of the researchers and leaders in the ethical machine-learning worlds believe that working inside a tech company and accepting a role as an adviser (rather than an enforcer) makes the work useless. That idea frustrated Chowdhury, Williams and Font, all of whom kept returning to the idea that you can't make real progress if you're forever apart from the industry you're critiquing. "Everyone outside the industry is pointing their fingers at you as if you are the problem. You are trying your best to do your job and do a good job and people are like, you are fundamentally unethical because you take a paycheck from them," Chowdhury said.

"But the goal of META is not to be this shining example of finger-pointing where we get to be the good guys while throwing our company under the bus," she added. "That's actually not very productive if our goal is to change the industry and drive the industry toward actionable positive output."
