
By Anna Kramer
Protocol | Workplace

How Twitter hired tech's biggest critics to build ethical AI

Twitter's META team is made up of some of tech's most notorious critics, and two more will soon be joining them: Sarah Roberts and Kristian Lum.

Twitter's Ethical AI lead, Rumman Chowdhury

Rumman Chowdhury, the head of Twitter's META team, sees her job as finding ways to distribute power and authority, rather than collect it for herself.

Photo: Rumman Chowdhury

Machine learning engineer Ari Font was worried about the future of Twitter's algorithms. It was mid-2020, and the leader of the team researching ethics and accountability for the company's ML had just left Twitter. For Font, the future of the ethics research was unclear.

Font was the manager of Twitter's machine learning platforms teams — part of Twitter Cortex, the company's central ML organization — at the time, but she believed that ethics research could transform the way Twitter relies on machine learning. She'd always felt that algorithmic accountability and ethics should shape not just how Twitter used algorithms, but all practical AI applications.

So she volunteered to help rebuild Twitter's META team (META stands for Machine Learning, Ethics, Transparency and Accountability), embarking on what she called a roadshow to persuade Jack Dorsey and his team that ML ethics didn't belong only in research. Over the course of a few months and a litany of conversations with Dorsey and other senior leaders, Font secured not just a more powerful, operationalized place for the once-small team: alongside budget for increased headcount and a new director, she eventually persuaded Dorsey and Twitter's board of directors to make Responsible ML one of Twitter's main 2021 priorities, which came with the power to scale META's work inside Twitter's products.

"I wanted to ensure that the very important research was having an impact on product, and was scaling. It was a very strategic next step for META that would allow us to take it to the next level," Font said. "We had strategy talks with Twitter staff, including Jack, and ultimately with the board. It was a very intense and fast process."

One year later, Twitter's commitment to Font's team has convinced even the most skeptical people in tech — the ethics research community itself. Rumman Chowdhury, notorious and beloved by her fellow researchers for her commitment to algorithmic auditing, announced that she would be leaving her new startup to become Twitter's META leader. Kristian Lum, a University of Pennsylvania professor renowned for her work building machine-learning models that could reshape criminal justice, will join Twitter at the end of June as its new head of research. And Sarah Roberts, famous for her critiques of tech companies and the co-director of the Center for Critical Internet Inquiry at UCLA, will become a consultant for the META team this summer, researching what Twitter users actually want from algorithmic transparency.

(If something about this team feels different, it's because all of its leaders are women, and four of them have Ph.D.s. Twitter has been on a massive hiring spree not just for META, and the outcome has been proof that actually, there is no shortage of top talent with widely-varying backgrounds in tech.)

These hires are a massive coup for a social media platform desperate to escape the waves of vitriol and criticism enveloping Google and Facebook's work around algorithms, machine learning and artificial intelligence. While Google was forcing out prominent AI ethicists and researchers Timnit Gebru and Margaret Mitchell, and Facebook was trying and failing to persuade politicians and researchers that it did not have the power to manipulate the way algorithms amplified misinformation, Twitter was giving Font and Jutta Williams, the product manager in charge of helping operationalize META's work, the resources and leeway to hire a team of people who could actually act on Twitter's promise to listen to its researchers.

Font's "roadshow" happened before Gebru and Mitchell's very public dismissals — Chowdhury said she would join Twitter the same week Google forced Mitchell out — but that explosion of attention on algorithms in 2020 nonetheless helped persuade Dorsey and his board of directors that ethical algorithms are worth spending money on.

Over the last year, the amplification of former President Donald Trump's social media posts via Facebook engagement algorithms drew widespread outrage from the left; Facebook's decision to very temporarily adapt those algorithms in response drew even sharper rebuke from the right. The spread of coronavirus misinformation followed a similar trajectory, while the nationwide conversation about criminal justice and race-based policing awakened the general public to the biases inherent in algorithms. All of this new awareness found a flashpoint in Google's Gebru. Her forced exit made the entire world pay attention to ethical AI.

"The ideological polarization … is also coming into responsible AI. We are being specifically targeted by names that I will not mention to you because then they will specifically come after me the way they have come after Timnit," Chowdhury said. "The very violent ideological divide is being pulled into our field."

The birth of META

Font wanted Chowdhury to run META from the beginning, but she thought there would be no way to persuade her. "We needed to get the right leader. I spent months doing this. I was OK that it took that long," Font said. "I wanted someone who was already established and well-respected, which, as you know, is not a community that is easy to please necessarily. This was a tricky quest."

But something about that first phone call made Chowdhury — who'd recently left her job as the senior principal for Responsible AI at Accenture to found her own startup — reconsider her future. "My goal was always to drive change in this industry. The industry is so young. I just want to see it succeed," she explained. If Twitter was actually serious about META, this job offer could be the chance she thought she might never have.

"I asked to talk to everybody. From leadership at Twitter down, I talked to everyone, from policy, from comms. It was absolutely critical to me that every single person who would be interacting with META was really on board. And I always left every interview so impressed. There was never any question of whether or not Twitter had the right kind of ethos," she said.

She took the job four months ago. Since then, in addition to the company's public commitment to its 2021 Responsible ML Initiative (which means Twitter will publicly share how it makes decisions about its algorithms, and how race and politics shape its ML), Twitter has already released an assessment of its image-cropping algorithm and removed the algorithm entirely based on the findings from the research.

Senior leadership said it would commit to Chowdhury's team, promising regular communication. They've been acting on that promise since before she arrived: Team members meet with Dorsey and his senior staff regularly to discuss progress, explain their work, secure additional resources and get buy-in from Dorsey on the research, education and changes they hope to implement.

"We present to Jack and his staff about every six weeks — we report our progress and where we are. They are most interested in learning what we've learned and how they can help. They actually really want to know — what did you learn, where are you going next — they very quickly want to help," Font said.

Williams, the program manager, was skeptical of Twitter's intentions when she agreed in 2020 to leave her job as the senior technical leader for privacy at Facebook and join the team. "It's incredibly disheartening as a very committed person, you go to a place and you think you're going to make a difference. I've had to make pivots and changes in my career because I bought into the hype," Williams said. "I was a bit disheartened about social media when Twitter told me, 'Please come and just talk to this team about this job.'"

Williams took the job, but she didn't give up on the idea that she might go back into health care privacy or nonprofit work: "I carried that healthy skepticism for quite some time."

The reality of change

Solving Twitter's problems means actually defining what users' "problems" are. "It's a lot easier to teach a model how to do something on behalf of people with their input," Williams explained. Roberts, who will be joining Twitter in early July, agreed to come on board to help answer precisely that question. She'll be given independence and latitude to help Twitter learn how to give people choice in usable ways. "We don't really know the answer to that," Williams said.

One of the few easily identifiable problems users had long vocalized was how Twitter's algorithm auto-cropped uploaded images, which many people felt often preferred lighter-skinned people and sexualized female bodies. Williams, Font and Chowdhury cited their work on that algorithm as an example of how they plan to run their team.

In their first publicly detailed research project since Chowdhury's start, META created a test to assess how the algorithms actually performed on a wide range of photos. They found a slight race-based bias, and though they could have dismissed the numbers as small, they decided instead to work with the engineers to help remove the algorithm entirely. Rather than conduct their work separately from the team that would be affected if changes were made to the algorithm, they worked alongside them, letting them know early in the process about the research project. And when their findings showed that change should happen, they helped create the plan to remove the algorithm in partnership with the engineers in question.
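Twitter's published analysis used a far more careful methodology than can be shown here; purely as an illustrative sketch of the kind of demographic-parity comparison such an audit involves (all boxes, group labels and data below are hypothetical), a test might tally how often the crop retains the subject for each group of photos:

```python
from collections import Counter

def crop_keeps_subject(crop_box, subject_box):
    """Hypothetical helper: True if the crop rectangle contains the
    center of the subject's bounding box (boxes are (x0, y0, x1, y1))."""
    cx = (subject_box[0] + subject_box[2]) / 2
    cy = (subject_box[1] + subject_box[3]) / 2
    x0, y0, x1, y1 = crop_box
    return x0 <= cx <= x1 and y0 <= cy <= y1

def retention_rate_by_group(results):
    """results: iterable of (group_label, kept) pairs collected by
    running the cropper over a demographically labeled photo set."""
    kept, total = Counter(), Counter()
    for group, was_kept in results:
        total[group] += 1
        kept[group] += int(was_kept)
    return {g: kept[g] / total[g] for g in total}

# Toy data: the crop happens to keep group_a subjects more often.
results = [
    ("group_a", crop_keeps_subject((0, 0, 100, 100), (40, 40, 60, 60))),
    ("group_a", True), ("group_a", True),
    ("group_b", crop_keeps_subject((0, 0, 100, 100), (150, 150, 180, 180))),
    ("group_b", True), ("group_b", False),
]
rates = retention_rate_by_group(results)
# A consistent gap between the groups' rates is the kind of
# disparity such an audit flags for review.
```

Even a "slight" gap, as the team found, becomes a judgment call: here META chose removal over dismissing the numbers as noise.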

And after the algorithm was removed, META published both a press release explaining how they reached their conclusions and a scientific paper showing how they conducted their research.

"To be perfectly honest, people have no problem taking Jack to task on Twitter. And Congress is literally just following what they heard people say," Chowdhury said.

"That's why we just develop in the open now," Williams added.

Beyond user choice and public transparency, Chowdhury's goal is to create a system of rules and assessments that function like government over the models: a system that could prevent harms from occurring, rather than just address the causes after people are hurt.

The team centers the idea that machine-learning engineers don't have bad intentions; they often just lack an understanding of what they're capable of doing and how to go about governing their work in an ethical way. An ethical, holistic approach isn't necessarily taught in most artificial intelligence grad programs, and very few tech companies support ethicists, auditors, and researchers of Chowdhury's caliber with freedom and buy-in (see: Google's collapse of its own ethical AI work).

"Our engineers are looking for guidance and expertise. Things are actionable because they know we can do better, it's hard to know what to do differently unless you have a workflow," Font said. "People don't always know what it is they can do, even if they are smart and good-hearted."

What the META team doesn't have is serious enforcement power. They say they don't want it at the moment — "You can't really drive change through fear of enforcement, but for long-term investment in change you do much better by growing education," according to Williams — but at the end of the day, META is a knowledge-creating team, not a police force. While they can research and propose changes, they cannot necessarily force other teams to fall into line. Their work is democratic, not authoritarian.

"There's a life cycle to enacting change," Williams explained. "You have to focus on enhancement; your first iteration or two is more on monitoring than it is on auditing. This as a concept is so new that focusing very directly on discipline and enforcement, you can't really drive change through fear."

"Ethics is literally about the world of unintended consequences. We're talking about engineers who are well-intentioned in trying to build something who didn't have the background or education," Chowdhury said. "We're talking to people who wanted to do the right thing and didn't know how to do the right thing."

Chowdhury reads widely as a way of processing her thoughts — she cited countless books and papers during our conversation — and she sees herself as building a leadership style through a feminist lens. Rather than punishing or controlling the people she works with, she defines leadership as finding ways to share resources and power, not keep them for herself. Seeking enforcement authority would run counter to that definition. "I worry very much about the consolidation of ruthless authority," she said.

Many of the researchers and leaders in the ethical machine-learning worlds believe that working inside a tech company and accepting a role as an adviser (rather than an enforcer) makes the work useless. That idea frustrated Chowdhury, Williams and Font, all of whom kept returning to the idea that you can't make real progress if you're forever apart from the industry you're critiquing. "Everyone outside the industry is pointing their fingers at you as if you are the problem. You are trying your best to do your job and do a good job and people are like, you are fundamentally unethical because you take a paycheck from them," Chowdhury said.

"But the goal of META is not to be this shining example of finger-pointing where we get to be the good guys while throwing our company under the bus," she added. "That's actually not very productive if our goal is to change the industry and drive the industry toward actionable positive output."
