About six months ago, Twitter quietly hired someone to become the head of product for conversational safety on the platform. You'd think the person in charge of what might be the most difficult task at Twitter would have a predictable skillset: years of experience in and out of academia, politics and programming; an impenetrable wall of media savvy; close ties to the exec suite. You'd be wrong.
Instead, the woman bringing creative and somewhat radical new ideas to user safety at Twitter is a young "activist-entrepreneur" who spent the last six years running a startup to help ranchers plan climate-friendly grazing practices. Now, only half a year after PastureMap was acquired, Christine Su is the senior product manager for conversational safety at Twitter, tasked with keeping everyday users safe online and rethinking the fundamentals of the platform along the way.
A woman with an MBA and a master's in land use and agriculture might seem like an odd choice, but she's been interested in mission-driven tech work for years. It's what drove her commitment to PastureMap and then her decision to join Twitter. "As a queer woman of color who is an Asian American in tech in rural America, that experience is a very intersectional one. I've had plenty of experiences moving through spaces where I wanted more safety," she said. After years of worrying for her sister's safety (she's a journalist covering sensitive topics in the Middle East and now China) as well as her own, Su knew she wanted to focus on building safety and inclusion for people who are the most vulnerable.
Transformative and procedural justice are the foundation of Su's vision for a safer Twitter. The once radical concepts challenge the notion that we should just punish people who cause harm, instead offering an alternative: a pathway to repair the harm that has been done and to prevent its recurrence (transformative justice), and a set of fair rules that make harm rarer in the first place (procedural justice). Transformative justice has recently gained attention for its role in addressing sexual assault on college campuses, and a version of it has been adopted into the official legal system in New Zealand. Academics and activists have argued for years that the concepts could transform conflict resolution.
Su's goals sit at the heart of what could become a very different Twitter one day, if — and it remains a very big if — the company is serious about the changes it's been signaling over the last year. Women and people from marginalized groups have documented disproportionate levels of abuse and harassment on the platform for years, and, until recently, Twitter did little to change that. Its content rules stayed stagnant, and time and again, people reported incidents in which abuse went ignored and harassment continued unabated. But Twitter has bowed to recent political pressure, expressing clear interest in stricter rules for what people are allowed to do and say.
No matter how many times you monitor, report and moderate harmful posts, the reactive model does little to reverse the damage that's already been done to the people targeted or to prevent it from happening again, Su said. So instead of putting the spotlight just on the posts causing harm, new functions coming from her team will be all about user control, she explained, giving people a wide range of capabilities to react to situations on the platform. "The point is not to make the entire world a safe space: That's not possible. The point is to empower people and communities to have the tools to heal harm themselves and to prevent harm to themselves and put them in control," Su said.
The product team gave some clues about what that user control could look like when they described the upcoming audio hangout function, Spaces, in a press call last week. Spaces will allow users to determine who is allowed in the audio room and who can speak, and the team is rolling out the function to women and people from other marginalized communities first, to test out how effective these safety functions can be in practice.
Su also cited recent election-related interventions as examples of how reimagining Twitter in the long term could work; for example, the function that encourages people to read content before reposting it has remained in place for now while the team assesses its long-term value. "You've seen over the last year, a willingness of Twitter to rethink its fundamental mechanisms," she said.
For Su, implementing transformative justice means building tools that create private pathways for apologies, forgiveness and de-escalation (somehow, we'll get apologies before we get an edit button). While she didn't describe exactly how private apology tools will work just yet, they are intended to become part of "a set of controls that people can take with them around digital spaces, and be able to use them when and if circumstances warrant," she said.
Getting these ideas into practice requires rethinking the bones of Twitter, work that is grounded in a deep body of research. "The conversational safety team does a lot of reading," she laughed as she tried to explain how everyone she works with is constantly contributing to a workplace debate about the leading research into procedural justice. These ideas are rooted in more than a decade of work, she said. "Twitter has had time to observe dynamics on the platform. There was a deep literature review of procedural justice that was already there when I arrived." She was also quick to give credit to the researchers pushing these ideas, many of whom have been among Twitter's harshest critics.
If successful, the idea that a social platform could normalize apology would be, quite literally, transformational. It sounds almost too wacky to be true, and therein lies the problem. Su's passion for transformative justice and Twitter's commitment to the idea in practice may be two different things. The same day Su hinted at an upcoming apology tool and designer Maya Patterson touted the company's commitment to making Spaces safe, Twitter rolled out Fleets. And the headlines speak for themselves: "Twitter users say fleets are ripe for harassment" and "Twitter has set itself up for an enormous new content moderation problem."
So I asked Su about her 10-year vision for user safety and how it fits into the Twitter grand plan. "At the highest level, all of us at Twitter are deeply committed to the same mission, which is to serve the public conversation, which is something that I didn't fully appreciate until I got here," she said. "I would like to see more empathy and more thoughtfulness infused into how Twitter works at a fundamental level."
And how does she feel about all the negative feedback along the way? "Feedback is a gift. And feedback at Twitter is a firehose of gifts." While it might be emotionally draining on a personal level to be told "I hope you throw your computer in the garbage," it's still a signal, Su said.
The conversational safety team has big ideas for the next couple of years, and it's growing aggressively to try to make that happen. "We need all the help we can get, so I'm very excited for this article to come out," she said. Su is currently hiring for her machine learning team, which is working to build models that can determine what Twitter users see as a meaningful conversation. "Table stakes is safety, but then we also want to help define what is a meaningful conversation," she said.
Despite all the criticism, Su insisted that everyone at Twitter understands the importance of investing in safety. If nothing else, Twitter can point to hiring people like Su and its plans for growth to prove its commitment. "We have to solve this in order to get to a rich conversation," she said.