AI is helping save LGBTQ+ kids
Hello and welcome to Protocol Enterprise! Today: how the Trevor Project is using OpenAI’s GPT software to train counselors, the Lapsus$ hacking gang probably doesn’t need to shave, and this week in enterprise tech moves.
Spin up
Ransomware is getting really expensive. The average ransom demand in cases handled by Palo Alto Networks rose 144% last year, to $2.2 million, but the good news is you can negotiate: the average actual payment was $541,010.
The promise and perils of AI
Large AI-based language models such as OpenAI’s GPT-3 have been criticized for conjuring up racist or offensive text, for making up false information and for requiring enormous amounts of computing power to build and deploy. Still, countless companies have built tech products such as customer service chatbots on top of these foundation models.
The Trevor Project, a nonprofit that provides counseling to LGBTQ+ youth at risk of suicide, has adopted these models too. But while the group has found value in customizing these deep-learning networks to expand its ability to help kids in crisis, it also recognizes where to draw the line.
- While The Trevor Project has used open-source AI models including OpenAI’s GPT-2 and Google’s ALBERT, it does not use tools built with them to carry on conversations directly with troubled kids.
- Instead, the group has used those models to build internal tools: one has helped train more than 1,000 volunteer crisis counselors, and another helps triage incoming calls and texts so that higher-risk contacts are prioritized and connected to human counselors faster.
- The Trevor Project fine-tuned GPT-2 to create a crisis contact simulator featuring two AI-based personas, Riley and Drew, which converse with counselor trainees to help them prepare for the sorts of conversations they will have with actual kids and teens.
- Each persona represents a different life situation, background, sexual orientation, gender identity and suicide risk level. Riley mimics a teen in North Carolina who feels depressed and anxious, while Drew is in their early 20s, lives in California and deals with bullying and harassment.
“We trained Riley on many hundreds of past Riley role-plays,” said Dan Fichter, head of AI and engineering at The Trevor Project, which developed the Riley persona through a partnership with Google’s philanthropic arm, Google.org.
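The Trevor Project hasn’t published its training pipeline, but fine-tuning a GPT-2 checkpoint on a corpus of transcripts is a well-established recipe. Here is a minimal sketch using the open-source Hugging Face Transformers library; the file name, model size and hyperparameters are illustrative assumptions, not details from the organization:

```python
# Sketch: fine-tune GPT-2 on past role-play transcripts to build a
# persona simulator. Paths and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical corpus: one de-identified role-play transcript per line.
dataset = load_dataset("text", data_files={"train": "riley_roleplays.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="riley-simulator",
                           num_train_epochs=3,
                           per_device_train_batch_size=2,
                           learning_rate=5e-5),
    train_dataset=tokenized,
    # Causal language modeling: learn to predict the next token.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

At run time, a trainee’s side of the conversation becomes the prompt and the fine-tuned model samples the persona’s next reply.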
- While the persona models are relatively stable, Fichter said the organization may need to retrain them with new data as the casual language used by kids and teens evolves to incorporate new acronyms, and as current events, such as a new law in Texas defining gender-affirming medical care as “child abuse,” become topics of conversation.
- The Trevor Project released data from a 2021 national survey that found that 52% of transgender and nonbinary youth “seriously considered suicide” in the past year, and of those, one in five attempted it.
- “Health care is a people-focused industry, and when machine learning intersects with people, I think we have to be careful,” said Evan Peterson, a machine-learning engineer at health and wellness tech company LifeOmic.
To gauge performance, fairness and equity across particular identity groups, The Trevor Project evaluated a variety of large natural-language-processing and deep-learning models before deciding which best suited each task.
- It turned out that when it came to holding a simulated conversation and generating the sort of long, coherent sentences required for a 60- to 90-minute counselor training session, GPT-2 performed best.
- But in testing and validation, ALBERT performed better than the others for a separate machine-learning system The Trevor Project built to help assess the risk level of people calling, texting or chatting with its suicide prevention hotline.
- While it may be difficult for humans, and for deep-learning models, to gauge suicide risk when someone simply says, “I’m not feeling great,” the ALBERT-based model is “pretty good” at learning the emotional terms that correlate with suicide risk, such as language describing ideation or the details of a plan, Fichter said.
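Fichter didn’t describe the implementation, but a triage model like this is typically a fine-tuned sequence classifier. Below is a minimal sketch built on the public ALBERT checkpoint; the label scheme, example messages and scoring function are invented for illustration, and a real system would first be fine-tuned on labeled crisis-line data:

```python
# Sketch: score incoming messages for risk so higher-risk contacts
# reach a human counselor first. Labels and examples are hypothetical;
# the classification head is untrained until fine-tuned on real data.
import torch
from transformers import AlbertForSequenceClassification, AlbertTokenizerFast

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
model = AlbertForSequenceClassification.from_pretrained(
    "albert-base-v2", num_labels=2)  # assumed labels: 0 = lower, 1 = higher risk
model.eval()

def risk_score(message: str) -> float:
    """Return the model's probability that a message signals higher risk."""
    inputs = tokenizer(message, truncation=True, max_length=256,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Triage: serve the waiting queue in descending order of predicted risk.
queue = ["i'm not feeling great", "i made a plan and i'm scared"]
for msg in sorted(queue, key=risk_score, reverse=True):
    print(f"{risk_score(msg):.2f}  {msg}")
```

Before deploying anything like this, a team would also break its evaluation metrics out by identity group, the kind of fairness testing The Trevor Project describes above.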
The Trevor Project balances benefits of large language models against potential problems by limiting how they are used, Fichter said. He pointed to the strictly internal use of the GPT-2-based persona models for generating language for counselor training purposes, and use of the ALBERT-based risk assessment model only to prioritize how soon a counselor should speak to a patient.
- Still, open-source large natural-language-processing models, including various iterations of OpenAI’s GPT (short for generative pre-trained transformer), have earned a reputation as toxic language factories.
- They have been criticized for producing text that perpetuates stereotypes and spews nasty language, in part because they were trained using data gleaned from an internet where such language is commonplace.
- “There is ongoing research to ground them to ‘be good citizen models,’” said Peterson. However, he added that machine-learning systems “can make mistakes [and] there are situations in which that is not acceptable.”
A MESSAGE FROM UPWORK

Seeking to triple its employee base, Whisk, a fully remote team, sought diverse talent from a wide variety of regions through Upwork, a work marketplace that connects businesses with independent professionals and agencies around the globe.
I was a teenage hacker
We’re learning more about the people behind Lapsus$, the hacking group that has infiltrated enterprise tech giants such as Nvidia, Microsoft and Okta in recent months. It’s not clear if they’re allowed to drive a car.
According to Bloomberg, the mastermind behind Lapsus$ is a teenage Brit living with his mom just outside Oxford. Another teenager, in Brazil, was apparently part of the group, whose intrusion forced Okta to disclose that the attackers had access to a laptop belonging to one of its customer-support contractors for five days in January.
The BBC reported that the teenager has “amassed a $14m (£10.6m) fortune from hacking,” which is more than anyone at Protocol Enterprise made when they were sixteen. (I think: Max is out today.) As noted above, ransomware is a serious crime that has put countless businesses and organizations in a terrible place over the last few years, but as NBC’s Kevin Collier put it, “the most interesting thing about Lapsus$, or that Twitter hack last year, is it's proof yet again that the hundreds-of-billion-of-dollars cybersecurity industry is often foiled by rambunctious teens.”
Enterprise moves
Over the past week, Dataiku and Proofpoint added new C-suite members, and content-management unicorn Contentful snagged talent from Zendesk and Atlassian.
Adam Towns is the new CFO at Dataiku. Towns was previously CFO of data analytics company Sisense.
Ashan Willy is the new CEO of Proofpoint. Willy formerly held leadership roles at Cisco, Juniper Networks and Polycom.
Brian Lanigan is now a VP at Lacework overseeing worldwide channels and alliances. Lanigan formerly led alliances for Splunk and HP.
Søren Abildgaard is now Contentful’s executive VP of Engineering. Abildgaard formerly led engineering teams at Zendesk, Adobe and Autodesk.
Mairead O’Donovan is the new CPO at Contentful. O’Donovan was previously the head of Jira work management at Atlassian.
— Aisha Counts (email | twitter)
Around the enterprise
Enterprise software companies need enterprise software too: SAP announced that Microsoft is using its Rise service to migrate its internal systems to S/4HANA.
SoftBank is reportedly seeking a $60 billion valuation for Arm in its upcoming IPO, which would be $20 billion more than Nvidia offered for the company. Good luck, Masa.
A MESSAGE FROM UPWORK

Whisk isn’t alone in unlocking the global marketplace to find the right types of employees to support its business goals. More than three-quarters of U.S. companies have used remote freelancers, according to research from Upwork, and more than a quarter of businesses plan to go fully remote in the next five years.
Thanks for reading — see you tomorrow!
Correction: This story was updated March 24, 2022, to correct the spelling of Contentful.