People

How a young, queer Asian-American businesswoman is rethinking user safety at Twitter

Transformative and procedural justice are the foundation of Christine Su's vision for a safer site.

"The point is not to make the entire world a safe space: That's not possible," Christine Su said.

Photo: Sarah Deragon, Portraits to the People

About six months ago, Twitter quietly hired someone to become the head of product for conversational safety on the platform. You'd think the person in charge of what might be the most difficult task at Twitter would have a predictable skill set: years of experience in and out of academia, politics and programming; an impenetrable wall of media savvy; close ties to the exec suite. You'd be wrong.

Instead, the woman bringing creative and somewhat radical new ideas to user safety at Twitter is a young "activist-entrepreneur" who spent the last six years running a startup to help ranchers plan climate-friendly grazing practices. Now, only half a year after PastureMap was acquired, Christine Su is the senior product manager for conversational safety at Twitter, tasked with keeping everyday users safe online and rethinking the fundamentals of the platform along the way.

A woman with an MBA and a master's in land use and agriculture might seem like an odd choice, but she's been interested in mission-driven tech work for years. It's what drove her commitment to PastureMap and then her decision to join Twitter. "As a queer woman of color who is an Asian American in tech in rural America, that experience is a very intersectional one. I've had plenty of experiences moving through spaces where I wanted more safety," she said. After years of worrying about her sister's safety (she's a journalist covering sensitive topics in the Middle East and now China) as well as her own, Su knew she wanted to focus on building safety and inclusion for people who are the most vulnerable.

Transformative and procedural justice are the foundation of Su's vision for a safer Twitter. The once radical concepts challenge the notion that we should just punish people who cause harm, instead offering an alternative: a pathway to repair the harm that has been done and to prevent its recurrence (transformative justice), and a set of fair rules that make harm rarer in the first place (procedural justice). Transformative justice has recently gained attention for its role in addressing sexual assault on college campuses, and a version of it has been adopted into the official legal system in New Zealand. Academics and activists have argued for years that the concepts could transform conflict resolution.

Su's goals sit at the heart of what could become a very different Twitter one day, if — and it remains a very big if — the company is serious about the changes it's been signaling over the last year. Women and people from marginalized groups have documented disproportionate levels of abuse and harassment on the platform for years, and, until recently, Twitter did little to change that. Its content rules stayed stagnant, and time and again, people reported instances in which abuse went ignored and harassment continued unabated. But Twitter has bowed to recent political pressure, expressing clear interest in stricter rules for what people are allowed to do and say.

No matter how many times you monitor, report and moderate harmful posts, the reactive model does little to reverse the damage that's already been done to the people targeted or to prevent it from happening again, Su said. So instead of putting the spotlight just on the posts causing harm, new functions coming from her team will be all about user control, she explained, giving people a wide range of capabilities to react to situations on the platform. "The point is not to make the entire world a safe space: That's not possible. The point is to empower people and communities to have the tools to heal harm themselves and to prevent harm to themselves and put them in control," Su said.

The product team gave some clues about what that user control could look like when they described the upcoming audio hangout function, Spaces, in a press call last week. Spaces will allow users to determine who is allowed in the audio room and who can speak, and the team is rolling out the function to women and people from other marginalized communities first, to test out how effective these safety functions can be in practice.

Su also cited recent election-related interventions as examples of how reimagining Twitter in the long term could work; for example, the function that encourages people to read content before reposting it has remained in place for now while the team assesses its long-term value. "You've seen over the last year, a willingness of Twitter to rethink its fundamental mechanisms," she said.

For Su, implementing transformative justice means building tools that create private pathways for apologies, forgiveness and deescalation (somehow, we'll get apologies before we get an edit button). While she didn't describe exactly how private apology tools will work just yet, they are intended to become part of "a set of controls that people can take with them around digital spaces, and be able to use them when and if circumstances warrant," she said.

Getting these ideas into practice requires rethinking the bones of Twitter, and that work rests on a lot of research. "The conversational safety team does a lot of reading," she laughed as she tried to explain how everyone she works with is constantly contributing to a workplace debate about the leading research into procedural justice. These ideas are rooted in more than a decade of work, she said. "Twitter has had time to observe dynamics on the platform. There was a deep literature review of procedural justice that was already there when I arrived." She was also quick to give credit to the researchers pushing these ideas, many of whom have been among Twitter's harshest critics.

If successful, the idea that a social platform could normalize apology would be, quite literally, transformational. It sounds almost too wacky to be true, and therein lies the problem. Su's passion about transformative justice and Twitter's commitment to the idea in reality may be two different things. The same day Su hinted about an upcoming apology tool and designer Maya Patterson touted the company's commitment to making Spaces safe, Twitter rolled out Fleets. And the headlines speak for themselves: "Twitter users say fleets are ripe for harassment" and "Twitter has set itself up for an enormous new content moderation problem."

So I asked Su about her 10-year vision for user safety and how it fits into the Twitter grand plan. "At the highest level, all of us at Twitter are deeply committed to the same mission, which is to serve the public conversation, which is something that I didn't fully appreciate until I got here," she said. "I would like to see more empathy and more thoughtfulness infused into how Twitter works at a fundamental level."

And how does she feel about all the negative feedback along the way? "Feedback is a gift. And feedback at Twitter is a firehose of gifts." While it might be emotionally draining on a personal level to be told "I hope you throw your computer in the garbage," it's still a signal, Su said.

The conversational safety team has big ideas for the next couple of years, and it's growing aggressively to try to make that happen. "We need all the help we can get, so I'm very excited for this article to come out," she said. Su is currently hiring for her machine learning team, which is working to build models that can determine what Twitter users see as a meaningful conversation. "Table stakes is safety, but then we also want to help define what is a meaningful conversation," she said.

Despite all the criticism, Su insisted that everyone at Twitter understands the importance of investing in safety. If nothing else, Twitter can point to hiring people like Su and its plans for growth to prove its commitment. "We have to solve this in order to get to a rich conversation," she said.

Climate

This carbon capture startup wants to clean up the worst polluters

The founder and CEO of point-source carbon capture company Carbon Clean discusses what the startup has learned, the future of carbon capture technology and the role of companies like his in battling the climate crisis.

Carbon Clean CEO Aniruddha Sharma told Protocol that fossil fuels are necessary, at least in the near term, to lift the living standards of those who don’t have access to cars and electricity.

Photo: Carbon Clean

Carbon capture and storage has taken on increasing importance as companies with stubborn emissions look for new ways to meet their net zero goals. For hard-to-abate industries like cement and steel production, it’s one of the few options that exist to help them get there.

Yet it’s proven incredibly challenging to scale the technology, which captures carbon pollution at the source. U.K.-based company Carbon Clean is leading the charge to bring down costs. This year, it raised a $150 million series C round, which the startup said is the largest-ever funding round for a point-source carbon capture company.

Michelle Ma

Michelle Ma (@himichellema) is a reporter at Protocol covering climate. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.

Sponsored Content

Great products are built on strong patents

Experts say robust intellectual property protection is essential to ensure the long-term R&D required to innovate and maintain America's technology leadership.

Every great tech product that you rely on each day, from the smartphone in your pocket to your music streaming service and navigational system in the car, shares one important thing: part of its innovative design is protected by intellectual property (IP) laws.

From 5G to artificial intelligence, IP protection offers a powerful incentive for researchers to create ground-breaking products, and government leaders say it is an essential part of maintaining U.S. technology leadership. To quote Secretary of Commerce Gina Raimondo: "Intellectual property protection is vital for American innovation and entrepreneurship."

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Workplace

Why companies cut staff after raising millions

Are tech firms blowing millions in funding just weeks after getting it? Experts say it's more complicated than that.

Bolt, Trade Republic, HomeLight, and Stord all drew attention for funding announcements that came just weeks or days before layoffs.

Photo: Pulp Photography/Getty Images

Fintech startup Bolt was one of the first tech companies to slash jobs, cutting 250 employees, or a third of its staff, in May. For some workers, the layoffs came as a shock not only because they were among the first in the industry, but also because the cuts came just four months after Bolt had announced a $355 million series E funding round and achieved a peak valuation of $11 billion.

“Bolt employees were blind sided because the CEO was saying just weeks ago how everything is fine,” an anonymous user wrote on the message board Blind. “It has been an extremely rough day for 1/3 of Bolt employees,” another user posted. “Sadly, I was one of them who was let go after getting a pay-raise just a couple of weeks ago.”

Nat Rubio-Licht

Nat Rubio-Licht is a Los Angeles-based news writer at Protocol. They graduated from Syracuse University with a degree in newspaper and online journalism in May 2020. Prior to joining the team, they worked at the Los Angeles Business Journal as a technology and aerospace reporter.

Climate

The fight to define the carbon offset market's future

The world’s largest carbon offset issuer is fighting a voluntary effort to standardize the industry. And the fate of the climate could hang in the balance.

It has become increasingly clear that scaling the credit market will first require clear standards and transparency.

Photo: Kevin Frayer/Getty Images

There’s a major fight brewing over what kind of standards will govern the carbon offset market.

A group of independent experts is looking to clean up the market's checkered record, and the biggest carbon credit issuer on the voluntary market is trying to influence its efforts to define what counts as a quality credit. The outcome could make or break an industry increasingly central to tech companies meeting their net zero goals.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Policy

White House AI Bill of Rights lacks specific guidance for AI rules

The document unveiled today by the White House Office of Science and Technology Policy is long on tech guidance, but short on restrictions for AI.

While the document provides extensive suggestions for how to incorporate AI rights in technical design, it does not include any recommendations for restrictions on the use of controversial forms of AI.

Photo: Ana Lanza/Unsplash

It was a year in the making, but people eagerly anticipating the White House Bill of Rights for AI will have to continue waiting for concrete recommendations for future AI policy or restrictions.

Instead, the document unveiled today by the White House Office of Science and Technology Policy is legally non-binding and intended to be used as a handbook and a “guide for society” that could someday inform government AI legislation or regulations.

Blueprint for an AI Bill of Rights features a list of five guidelines for protecting people in relation to AI use:

Safe and Effective Systems
Algorithmic Discrimination Protections
Data Privacy
Notice and Explanation
Human Alternatives, Consideration, and Fallback
Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
