The Trevor Project relies on AI to train counselors, but won’t let it near suicidal LGBTQ+ kids

The key to making AI work is to customize the models and keep the conversations away from troubled teens.

AI-powered language models can put out toxic results if trained on real-world data, but they're still valuable for counselor training. Photo: The Trevor Project

Large AI-based language processing models such as OpenAI’s GPT-3 have been criticized for conjuring up racist or offensive text, for making up false information and for requiring enormous amounts of computing power to build and deploy. Still, countless companies have used these foundational models to form the basis of tech products such as customer service chatbots.

The Trevor Project, a nonprofit that provides counseling to LGBTQ+ youth at risk of suicide, is also an adopter of these models. While the group has found value in customizing these deep-learning neural networks to expand its capacity to support kids in crisis, the organization recognizes where to draw the line.

In fact, while The Trevor Project has used open-source AI models including OpenAI’s GPT-2 and Google’s ALBERT, it does not use tools built with them to carry on conversations directly with troubled kids. Instead, the group has deployed those models to build tools it has used internally to train more than 1,000 volunteer crisis counselors, and to help triage web chats and texts from people in order to prioritize higher-risk contacts and connect them faster to real-life counselors.

The Trevor Project fine-tuned GPT-2 to create a crisis contact simulator featuring two AI-based personas. Named Riley and Drew, these AI-based personas communicate internally with counselor trainees, helping them prepare for the sorts of conversations they will have with actual kids and teens.

Each persona represents a different life situation, background, sexual orientation, gender identity and suicide risk level. Riley mimics a teen in North Carolina who feels depressed and anxious, while Drew is in their early 20s, lives in California and deals with bullying and harassment.

Launched in 2021, Riley was the first of the two personas. Rather than simply using GPT-2 models out of the box, the organization tailored the deep-learning model for its specific purpose by training it using hundreds of role-playing discussions between actual staff counselors and an initial set of data reflecting what someone like Riley might say.

“We trained Riley on many hundreds of past Riley role-plays,” said Dan Fichter, head of AI and Engineering at The Trevor Project, which developed the Riley persona through a partnership with Google’s grant program, Google.org. “The model needs to remember everything that is said and you have asked so far. When we trained GPT on those conversations, we got something that is very reliably responsive in a way our trainers would respond [to],” he said.
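For readers curious what that kind of fine-tuning looks like in practice, here is a minimal sketch using the open-source Hugging Face transformers library. The file name, transcript format, base checkpoint and hyperparameters are illustrative assumptions; The Trevor Project has not published its training code.

```python
# Hypothetical sketch: fine-tune GPT-2 on anonymized role-play transcripts.
# "roleplays.txt" (one counselor/persona turn per line) and all hyperparameters
# are assumptions for illustration, not The Trevor Project's actual setup.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "roleplays.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-gpt2",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
trainer.save_model("riley-gpt2")  # the fine-tuned persona model
```

The key point of the workflow is that the base model's general language ability stays intact while the fine-tuning data narrows its behavior to one persona's voice and history.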

The Trevor Project, which has a tech team of 30 people — including some dedicated to machine-learning-related work — later developed the Drew persona on its own.

“When youth reach out, they are always served by a trained and caring human being who is ready to listen and support them no matter what they’re going through,” said Fichter.


Retraining AI models for code-switching, and the Texas effect

While the persona models are relatively stable, Fichter said the organization may need to retrain them with new data as the casual language used by kids and teens evolves to incorporate new acronyms, and as current events, such as a new law in Texas defining gender-affirming medical care as “child abuse,” become topics of conversation.

“There’s a lot of code-switching that happens because they know that they are reaching out to an adult [so] it could mean that there’s a benefit from regular re-training,” Fichter said.

The Trevor Project released data from a 2021 national survey that found that more than 52% of transgender and nonbinary youth "seriously considered suicide" in the past year, and of those, one in five attempted it.

“Health care is a people-focused industry, and when machine learning intersects with people, I think we have to be careful,” said Evan Peterson, a machine-learning engineer at health and wellness tech company LifeOmic who has used open-source tools and language models such as Hugging Face’s transformers library and RoBERTa, a version of BERT developed at Facebook, to build chatbots.

To gauge performance, fairness and equity when it came to certain identity groups, The Trevor Project evaluated a variety of large natural-language-processing and linguistic deep-learning models before deciding which best suited particular tasks. It turned out that when it came to holding a simulated conversation and generating the sort of long, coherent responses required for a 60- to 90-minute counselor training session, GPT-2 performed best.
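The article does not detail how those comparisons were run, but the general pattern is to score each candidate model separately for every identity group in a held-out evaluation set and look for gaps. A small, hypothetical sketch of that kind of per-group check follows; the data layout and the choice of accuracy as the metric are assumptions.

```python
# Hypothetical per-group evaluation: compute a metric for each identity group
# in a held-out set, then compare candidate models. Field names are assumptions.
from collections import defaultdict

def per_group_accuracy(examples, predict):
    """examples: iterable of dicts with 'text', 'label' and 'group' keys.
    predict: a candidate model wrapped as a text -> label function."""
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["group"]] += 1
        if predict(ex["text"]) == ex["label"]:
            correct[ex["group"]] += 1
    return {group: correct[group] / total[group] for group in total}

# A model whose scores vary widely across groups would be flagged for review
# before being chosen for a task.
```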


AI for hotline triage and prioritizing risk

But ALBERT performed better than others when testing and validating models for a separate machine-learning system The Trevor Project built to help assess the risk level of people texting or chatting with its suicide prevention hotline. The risk assessment model is deployed when people in crisis contact the hotline. Based on responses to basic intake questions about someone’s state of mind and history with suicidality, the model assesses their level of risk for suicide, classifying it with a numerical score.


The model performs the evaluations according to a wide range of statements with varying levels of detail. While it may be difficult for humans — and deep-learning models — to gauge suicide risk if someone simply says, “I’m not feeling great,” the ALBERT-based model is “pretty good” at learning emotional terms that correlate with suicide risk such as language describing ideation or details of a plan, Fichter said. When configuring the model to categorize risk, the group erred on the side of caution by scoring someone as higher risk when it wasn’t entirely clear, he said.
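A rough sketch of what such a classifier could look like with the open-source ALBERT checkpoints appears below. The model name, number of labels and the threshold value are illustrative assumptions; the organization has not published its risk model, and the article only says that ambiguous cases are scored toward higher risk.

```python
# Hypothetical sketch of an ALBERT-based risk classifier over intake responses.
# Checkpoint, label scheme and threshold are assumptions for illustration.
import torch
from transformers import AlbertTokenizerFast, AlbertForSequenceClassification

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)
model.eval()

def risk_score(intake_text: str) -> float:
    """Return a 0-1 score for the 'higher risk' class."""
    inputs = tokenizer(intake_text, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Err on the side of caution: a fairly low cutoff pushes unclear cases into the
# priority queue rather than out of it. The value here is purely illustrative.
PRIORITY_THRESHOLD = 0.35

def should_prioritize(intake_text: str) -> bool:
    return risk_score(intake_text) >= PRIORITY_THRESHOLD
```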

To prepare data to train its risk assessment model, the organization looked to real-world assessments performed during full crisis conversations. Relying on people’s subjective opinions out of context can introduce bias when labeling training data, but using real-world clinical risk assessments that can be mapped back to whether a young person should or should not have been placed in the priority queue helped reduce that potential bias.
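In code, that labeling approach might look something like the hypothetical sketch below, where the training label for each past conversation is simply whether it should have been placed in the priority queue. The file and column names are assumptions.

```python
# Hypothetical sketch: derive labels from past clinical risk assessments rather
# than from annotators' out-of-context judgments. Column names are assumptions.
import csv

def build_labeled_examples(path):
    examples = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            examples.append({
                "text": row["intake_responses"],                # what the person wrote at intake
                "label": int(row["placed_in_priority_queue"]),  # 1 = should have been prioritized
            })
    return examples
```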

In the past, human counselors used a heuristic, rules-based system to triage callers, said Fichter, who believes the AI-based process provides “a much more accurate prediction.”


Mocking TV shows (but evading worse problems)

The Trevor Project balances benefits of large language models against potential problems by limiting how they are used, Fichter said. He pointed to the strictly internal use of the GPT-2-based persona models for generating language for counselor training purposes, and use of the ALBERT-based risk assessment model only to prioritize how soon a counselor should speak to a contact.

Still, open-source, large natural-language processing models including various iterations of OpenAI’s GPT — generative pre-trained transformer — have generated a reputation as toxic language factories. They have been criticized for producing text that perpetuates stereotypes and spews nasty language, in part because they were trained using data gleaned from an internet where such language is commonplace. Groups including OpenAI are continuously working to improve toxicity and accuracy problems associated with large language models.

“There is ongoing research to ground them to ‘be good citizen models,’” said Peterson. However, he said that machine-learning systems “can make mistakes [and] there are situations in which that is not acceptable.”

Meanwhile, large language models regularly burst onto the scene. Microsoft on Tuesday introduced new AI models it said it has deployed to improve common language-understanding tasks such as named-entity recognition, text summarization, custom text classification and key phrase extraction.

Tailoring those models for particular purposes with highly specific training data sets is one way users such as The Trevor Project have worked to take advantage of their benefits while taking care to ensure they do not facilitate more troubling digital conversations.

“Because we were able to fine-tune it to perform very specific work, and purely for our internal [Riley and Drew personas], our model has not generated any offensive output,” Fichter said.

When developing both its crisis contact simulator and risk assessment model, the organization removed names and other personally identifiable information from the data used to train them.

But privacy protection wasn’t the only reason, said Fichter. His team did not want the machine-learning models to draw conclusions about people with certain names, which could result in model bias. For example, they didn’t want them to conclude that someone with the name “Jane” was always a bully just because a teen in crisis in a role-playing scenario complained about someone with that name.
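One lightweight way to do that kind of redaction is to run a named-entity recognizer over the transcripts and replace person names with a placeholder before fine-tuning. Below is a minimal sketch using the open-source spaCy library and its small English pipeline; it illustrates the general technique under those assumptions and is not The Trevor Project's pipeline.

```python
# Hypothetical sketch: strip person names so the model cannot associate a
# specific name (e.g., "Jane") with a role like "bully." The spaCy model and
# placeholder token are assumptions for illustration.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with NER

def redact_names(text: str, placeholder: str = "[PERSON]") -> str:
    doc = nlp(text)
    redacted = text
    # Replace person entities from right to left so character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + placeholder + redacted[ent.end_char:]
    return redacted

print(redact_names("Jane keeps making fun of me at school."))
# -> "[PERSON] keeps making fun of me at school." (assuming the NER tags the name)
```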

So far, Fichter said the crisis contact simulator personas have not used any inappropriate or odd words. In general, they might simply respond, “I don’t know,” if they cannot generate relevant language.

Still, he said that Drew — the 20-something Californian — has mocked Netflix’s social-media competition show “The Circle.” “Drew has made fun of some TV shows he’s been watching,” Fichter said.

This story was updated to clarify how the Trevor Project prepared data for its training models and how people in crisis use its services.
