People

New Unity study shows just how toxic online gaming can be

Unity is acquiring AI moderation company OTO to combat online abuse.

Most people who play online multiplayer games experience toxicity in some form, Unity said.

Photo: Eduardo Toro/EyeEm/Getty Images

Game development platform Unity has released a new study on online behavior in video games, and its findings paint a particularly grim picture of the gaming community's propensity for toxicity. Seven out of 10 players say they've experienced some form of toxic behavior, defined in the study as "sexual harassment, hate speech, threats of violence, doxing" and other abusive text or voice chat.

Nearly half of players say they at least "sometimes" experience toxic behavior while playing, and around 21% report experiencing it "every time" or "often." Around two-thirds of players admit they're somewhat likely to stop playing a game if someone else is abusive toward them or exhibits toxic behavior. A vast majority, or about 92%, think there should be better solutions to enforce compliance with in-game codes of conduct and rules around toxic behavior. The study was conducted with 2,076 U.S.-based participants ages 18 or older in partnership with The Harris Poll.

Unity said toxicity is an addressable problem, and one solution is improved automated moderation. The study coincides with the announcement of the company's acquisition of OTO, an artificial intelligence firm specializing in the analysis of voice chat. Unity said OTO's software, which can analyze the sentiment with which someone says something, will allow it to improve its tools for game developers, so games can better monitor online behavior, especially spoken conversations in voice chat, and root out abusive actors.

"We believe that online communities, especially with the rise of online multiplayer, have a rise in toxic behavior," said Felix Thé, a vice president of product management at Unity, in an interview with Protocol. "Toxic interactions at any level are something we need to address."

Thé said online games have seen a surge in new players during the COVID-19 pandemic, and that, in turn, has exacerbated toxicity issues in gaming. Newer players are more likely to be turned off by toxic behavior, while more-seasoned players may be more inclined to act abusively toward less-experienced newcomers.

Toxicity also has an acute effect on women who play video games. The study found that men were more willing to use voice chat and other communication tools while playing, and therefore experienced toxic behavior more often. Women, by contrast, often avoid those tools because of gendered abuse such as sexual harassment, and those who do participate are more likely to stop playing a game because of it.

Thé said Unity plans to incorporate OTO's technology into its existing Vivox tool, which gives game makers easy-to-use voice and text chat features. There's no concrete timeline for when the new AI-based tools will arrive, but Thé said the company is working to ship them soon.

Thé also stressed the effectiveness of OTO's sentiment-based moderation. Because the software can distinguish friendly trash talk, such as expletives traded among people who know each other, from genuine abuse aimed at strangers, he said, it can reduce the error rate in issuing bans and other forms of punishment.

"The tech is being used to assign a propensity of whether an interaction is toxic or not. From a privacy standpoint, it's far superior and more importantly, more effective," Thé said, adding that it's able to scale to more languages faster because sentiment when speaking is often the same in languages of similar origin, like European romance languages.

Human moderators later review only the AI's sentiment analysis of reported incidents, which Thé said makes the approach better than constantly recording conversations and storing them for review. "We believe this is fast, privacy-centric, much more effective and scalable," he added.
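Unity hasn't published details of OTO's model, but the pipeline Thé describes (a sentiment score adjusted for social context, with only flagged incidents queued for human review) can be sketched in a few lines of Python. Everything below, from the names to the 0.8 threshold and the friends discount, is an illustrative assumption, not Unity's or OTO's actual API.

```python
# Hypothetical sketch of a sentiment-aware moderation pipeline like the one
# described above. All names, scores and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class ChatEvent:
    speaker_id: str
    transcript: str             # speech-to-text output of the voice clip
    hostility_score: float      # 0.0 (friendly) to 1.0 (hostile), from a sentiment model
    speakers_are_friends: bool  # e.g., same party, or on each other's friend lists

REPORT_THRESHOLD = 0.8  # illustrative cutoff for flagging an incident

def toxicity_propensity(event: ChatEvent) -> float:
    """Combine sentiment with social context: the same words rate as less
    toxic between friends than when directed at strangers."""
    score = event.hostility_score
    if event.speakers_are_friends:
        score *= 0.5  # friendly banter among teammates is discounted
    return score

def queue_for_human_review(speaker_id: str, score: float) -> None:
    # Stand-in for a real review queue; only the model's verdict is passed on.
    print(f"Flagged {speaker_id} with toxicity propensity {score:.2f}")

def handle(event: ChatEvent) -> None:
    # Only incidents that cross the threshold reach human moderators;
    # raw audio is never stored, mirroring the privacy claim above.
    score = toxicity_propensity(event)
    if score >= REPORT_THRESHOLD:
        queue_for_human_review(event.speaker_id, score)

handle(ChatEvent("player42", "get wrecked", 0.9, speakers_are_friends=False))
```

The key design choice mirrors Thé's privacy claim: the review queue receives only the model's verdict on a flagged incident, never the stored conversation itself.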

Protocol | Enterprise

Startups are pouncing as SaaS giants struggle in the intelligence race

Companies like Salesforce and Workday spent the last two decades building walled gardens around their systems. Now, it's a mad dash to make those ecosystems more open.

Companies want to predict the future, and "systems of intelligence" might be their best bet.

Image: Yuichiro Chino/Getty Images

Take a look at any software vendor's marketing materials and you're sure to see some variation of the word "intelligence" splattered everywhere.

It's part of a tectonic shift happening within enterprise technology. Companies spent the last several years moving their systems to the cloud and, along the way, rapidly adopting new applications.

Joe Williams

Protocol | Workplace

The hottest new perk in tech: A week off for burnout recovery

In an industry where long hours are a "badge of honor," a week of rest may be the best way to retain talent.

Tech companies are giving their employees a week to rest and recover from burnout.

Photo: Kinga Cichewicz/Unsplash

In early May, the founder of Lessonly, a company that makes training software, sent out a companywide email issuing a mandate to all employees. But it wasn't the sort of mandate employees around the world have been receiving related to vaccines and masks. This mandate required that every worker take an entire week off in July.

The announcement took Lessonly's staff by surprise. "We had employees reach out and share that they were emotional, just thankful that they had the opportunity to do this," said Megan Jarvis, who leads the company's talent team and worked on planning the week off.

Aisha Counts
Power

Chip costs are rising. How will that affect gadget prices?

The global chip shortage is causing component costs to go up, so hardware makers are finding new ways to keep their prices low.

Chips are getting more expensive, but most consumer electronics companies have so far resisted price increases.

Photo: Chris Hondros/Getty Images

How do you get people to pay more for your products while avoiding sticker shock? That's a question consumer electronics companies are grappling with as worldwide chip shortages and component cost increases are squeezing their bottom lines.

One way to do it: Make more expensive and higher-margin products seem like a good deal to customers.

Janko Roettgers

Protocol | Policy

Laws want humans to check biased AI. Research shows they can’t.

Policymakers want people to oversee — and override — biased AI. But research offers little evidence that humans are up to the task.

The recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

Photo: Jackal Pan/Getty Images

There was a time, not long ago, when a certain brand of technocrat could argue with a straight face that algorithms are less biased decision-makers than human beings — and not be laughed out of the room. That time has come and gone, as the perils of AI bias have entered mainstream awareness.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own. The problem is: These laws almost never stop to ask whether human beings are actually up to the job.

Issie Lapowsky
