Enterprise

The Trevor Project relies on AI to train counselors, but won’t let it near suicidal LGBTQ+ kids

The key to making AI work is to customize the models and keep the conversations away from troubled teens.


AI-powered language models can put out toxic results if trained on real-world data, but they're still valuable for counselor training.

Photo: The Trevor Project

Large AI-based language processing models such as OpenAI’s GPT-3 have been criticized for conjuring up racist or offensive text, for making up false information and for requiring enormous amounts of computing power to build and deploy. Still, countless companies have used these foundation models as the basis of tech products such as customer service chatbots.

The Trevor Project, a nonprofit that provides counseling to LGBTQ+ youth at risk of suicide, is also an adopter of these models. While the group has found value in customizing these deep-learning neural networks to expand its capacity to support kids in crisis, the organization recognizes where to draw the line.

In fact, while The Trevor Project has used open-source AI models including OpenAI’s GPT-2 and Google’s ALBERT, it does not use tools built with them to carry on conversations directly with troubled kids. Instead, the group has deployed those models to build tools it uses internally to train more than 1,000 volunteer crisis counselors, and to help triage web chats and texts, prioritizing higher-risk contacts and connecting them faster to real-life counselors.

The Trevor Project fine-tuned GPT-2 to create a crisis contact simulator featuring two AI-based personas. Named Riley and Drew, these AI-based personas communicate internally with counselor trainees, helping them prepare for the sorts of conversations they will have with actual kids and teens.

Each persona represents a different life situation, background, sexual orientation, gender identity and suicide risk level. Riley mimics a teen in North Carolina who feels depressed and anxious, while Drew is in their early 20s, lives in California and deals with bullying and harassment.

Launched in 2021, Riley was the first of the two personas. Rather than simply using GPT-2 models out of the box, the organization tailored the deep-learning model for its specific purpose by training it using hundreds of role-playing discussions between actual staff counselors and an initial set of data reflecting what someone like Riley might say.

“We trained Riley on many hundreds of past Riley role-plays,” said Dan Fichter, head of AI and Engineering at The Trevor Project, which developed the Riley persona through a partnership with Google’s grant program, Google.org. “The model needs to remember everything that is said and you have asked so far. When we trained GPT on those conversations, we got something that is very reliably responsive in a way our trainers would respond [to],” he said.
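Fichter did not detail the training pipeline, but fine-tuning GPT-2 on a corpus of transcripts is a standard pattern. Below is a minimal sketch using the open-source Hugging Face Transformers library; the file name, transcript formatting and hyperparameters are illustrative assumptions, not The Trevor Project's actual setup.

```python
# Minimal sketch: fine-tune GPT-2 on de-identified role-play transcripts.
# "riley_roleplays.txt" and all hyperparameters are hypothetical.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical file: one transcript per line, e.g.
# "Counselor: How are you feeling tonight? Riley: pretty anxious tbh"
dataset = load_dataset("text", data_files={"train": "riley_roleplays.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="riley-gpt2",
        num_train_epochs=3,
        per_device_train_batch_size=2,
        save_strategy="epoch",
    ),
    train_dataset=tokenized,
    # Causal-LM collator: labels are the input tokens shifted by one
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

At training-session time, the full conversation so far would be fed back to the fine-tuned model as the prompt, which is how a persona like Riley can "remember" everything a trainee has said.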

The Trevor Project, which has a tech team of 30 people — including some dedicated to machine-learning-related work — later developed the Drew persona on its own.

“When youth reach out, they are always served by a trained and caring human being who is ready to listen and support them no matter what they’re going through,” said Fichter.


Retraining AI models for code-switching, and the Texas effect

While the persona models are relatively stable, Fichter said the organization may need to re-train them with new data as the casual language used by kids and teens evolves to incorporate new acronyms, and as current events, such as a new law in Texas defining gender-affirming medical care as “child abuse,” become topics of conversation.

“There’s a lot of code-switching that happens because they know that they are reaching out to an adult [so] it could mean that there’s a benefit from regular re-training,” Fichter said.

The Trevor Project released data from a 2021 national survey that found that more than 52% of transgender and nonbinary youth "seriously considered suicide" in the past year, and of those, one in five attempted it.

“Health care is a people-focused industry, and when machine learning intersects with people, I think we have to be careful,” said Evan Peterson, a machine-learning engineer at health and wellness tech company LifeOmic who has used open-source language models available through Hugging Face, such as RoBERTa, a version of BERT developed at Facebook, to build chatbots.

To gauge performance, fairness and equity across certain identity groups, The Trevor Project evaluated a variety of large natural-language-processing and deep-learning models before deciding which best suited particular tasks. For holding a simulated conversation and generating the sort of long, coherent sentences required for a 60- to 90-minute counselor training session, GPT-2 performed best.


AI for hotline triage and prioritizing risk

But ALBERT performed better than others when testing and validating models for a separate machine-learning system The Trevor Project built to help assess the risk level of people texting or chatting with its suicide prevention hotline. The risk assessment model is deployed when people in crisis contact the hotline. Based on responses to basic intake questions about someone’s state of mind and history with suicidality, the model assesses their level of risk for suicide, classifying it with a numerical score.

Tailoring large language models for particular purposes with highly specific training data sets is one way users such as The Trevor Project have taken advantage of their benefits while taking care not to facilitate more troubling digital conversations. Photo: The Trevor Project

The statements the model evaluates vary widely in detail. While it may be difficult for humans — and deep-learning models — to gauge suicide risk if someone simply says, “I’m not feeling great,” the ALBERT-based model is “pretty good” at learning emotional terms that correlate with suicide risk, such as language describing ideation or the details of a plan, Fichter said. When configuring the model to categorize risk, the group erred on the side of caution, scoring someone as higher risk when it wasn’t entirely clear, he said.
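The article does not describe the implementation, but a risk model of this shape can be sketched as an ALBERT sequence classifier whose prediction is nudged upward when the model is uncertain. The checkpoint name, label set and confidence threshold below are illustrative assumptions, not The Trevor Project's actual scoring scheme.

```python
# Minimal sketch: score intake answers with an ALBERT classifier and
# err on the side of caution when confidence is low. Hypothetical setup.
import torch
from transformers import AlbertTokenizerFast, AlbertForSequenceClassification

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
# In practice this would be a checkpoint fine-tuned on labeled crisis
# conversations; loading the base model here just makes the sketch runnable.
model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=3)
model.eval()

def risk_score(intake_answers: list[str]) -> int:
    """Return a 0 (lower) to 2 (higher) risk level for a contact's intake answers."""
    text = " ".join(intake_answers)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze(0)
    predicted = int(torch.argmax(probs))
    # Err on the side of caution: if the model is not clearly confident,
    # bump the contact up one risk level rather than down.
    if probs[predicted] < 0.7 and predicted < 2:
        predicted += 1
    return predicted

print(risk_score(["I haven't been sleeping", "I've been thinking about a plan"]))
```

In a setup like this, the returned level would only decide how quickly a contact reaches a human counselor, never what is said to them.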

To prepare data to train its risk assessment model, the organization looked to real-world assessments performed during full crisis conversations. Relying on people’s subjective opinions out of context can introduce bias when labeling training data, but using real-world clinical risk assessments that can be mapped back to whether a young person should or should not have been placed in the priority queue helped reduce that potential bias.

In the past, human counselors used a heuristic rules-based system to triage callers, said Fichter, who believes the AI-based process provides “a much more accurate prediction.”


Mocking TV shows (but evading worse problems)

The Trevor Project balances benefits of large language models against potential problems by limiting how they are used, Fichter said. He pointed to the strictly internal use of the GPT-2-based persona models for generating language for counselor training purposes, and use of the ALBERT-based risk assessment model only to prioritize how soon a counselor should speak to a contact.

Still, open-source, large natural-language processing models including various iterations of OpenAI’s GPT — generative pre-trained transformer — have earned a reputation as toxic language factories. They have been criticized for producing text that perpetuates stereotypes and spews nasty language, in part because they were trained using data gleaned from an internet where such language is commonplace. Groups including OpenAI are working continuously to address the toxicity and accuracy problems associated with large language models.

“There is ongoing research to ground them to ‘be good citizen models,’” said Peterson. However, he said that machine-learning systems “can make mistakes [and] there are situations in which that is not acceptable.”

Meanwhile, new large language models regularly burst onto the scene. Microsoft on Tuesday introduced new AI models it said it has deployed to improve common language-understanding tasks such as named entity recognition, text summarization, custom text classification and key phrase extraction.

Tailoring those models for particular purposes with highly specific training data sets is one way users such as The Trevor Project have worked to take advantage of their benefits while taking care to ensure they do not facilitate more troubling digital conversations.

“Because we were able to fine-tune it to perform very specific work, and purely for our internal [Riley and Drew personas], our model has not generated any offensive output,” Fichter said.

When developing both its crisis contact simulator and its risk assessment model, the organization removed names and other personally identifiable information from the data used to train them.

But privacy protection wasn’t the only reason, said Fichter. His team did not want the machine-learning models to draw conclusions about people with certain names, which could result in model bias. For example, they didn’t want them to conclude that someone with the name “Jane” was always a bully just because a teen in crisis in a role-playing scenario complained about someone with that name.
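One common way to do that kind of scrubbing is to run a named-entity recognizer over the transcripts and replace detected person names with a placeholder before training. The sketch below uses spaCy and is purely illustrative; it is not The Trevor Project's de-identification pipeline, which would also need to cover phone numbers, locations and other identifiers.

```python
# Minimal sketch: replace person names in a transcript with a neutral
# placeholder so a model cannot tie behavior to a specific name.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def scrub_names(text: str) -> str:
    """Replace detected person names with "[NAME]"."""
    doc = nlp(text)
    scrubbed = text
    # Replace from the end so character offsets stay valid as we edit.
    for ent in reversed(doc.ents):
        if ent.label_ == "PERSON":
            scrubbed = scrubbed[:ent.start_char] + "[NAME]" + scrubbed[ent.end_char:]
    return scrubbed

print(scrub_names("Jane keeps making fun of me at school."))
# -> "[NAME] keeps making fun of me at school."
```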

So far, Fichter said, the crisis contact simulator personas have not used any inappropriate or odd words. In general, they might simply respond, “I don’t know,” if they cannot generate relevant language.

Still, he said that Drew — the 20-something Californian — has mocked Netflix’s social-media competition show “The Circle.” “Drew has made fun of some TV shows he’s been watching,” Fichter said.

This story was updated to clarify how the Trevor Project prepared data for its training models and how people in crisis use its services.
