A search for “suicide hotspots” on Google turns up a Wikipedia page listing places across the globe where people have gone to take their own lives, from a forest at the base of Mt. Fuji to the Golden Gate Bridge. The same search on YouTube likewise surfaces videos about common suicide spots.
The humans working at Google realize that when people search for information like this, it may not be a voyeuristic inquiry or a recreational travel query. It may be a cry for help.
Now, the company said its latest AI model, MUM, can pick up on those communication signals, too. “Our previous systems understood this query to be generally information-seeking because ‘hot spots’ is language that can often be used to seek out information, such as in travel cases, for example. But MUM is able to detect that ‘Sydney suicide hot spots’ relates to common jumping spots for suicide in Sydney,” said Anne Merritt, product manager for Health and Information Quality at Google.
The company said it will apply the AI language model to searches on Google and YouTube that it detects as related to suicide, domestic violence, sexual assault and substance abuse, in hopes of surfacing search results that link to trustworthy information and help keep people safe.
When Google introduced MUM, or Multitask Unified Model, about a year ago, it said the language model was more powerful than its open-source language model BERT. MUM can learn from multimodal inputs, such as text, video, images and voice data. It is also capable of few-shot and zero-shot learning, meaning it needs far fewer task-specific examples than BERT to learn a new task, including tasks that span multiple languages.
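For illustration only, and not a description of Google’s actual system, here is a minimal sketch of how zero-shot classification can route a free-text query to a crisis category without any crisis-labeled training data. It uses the open-source Hugging Face transformers library and a publicly available model, since MUM itself is not publicly accessible; the candidate labels and threshold are assumptions for the example.

```python
# Minimal sketch (not Google's system): zero-shot classification of a search
# query into crisis vs. ordinary intent, using an off-the-shelf NLI model.
from transformers import pipeline

# facebook/bart-large-mnli is a public model commonly used for zero-shot
# classification; it stands in here only to illustrate the technique.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

query = "Sydney suicide hot spots"
candidate_labels = [
    "seeking help in a personal crisis",
    "travel and sightseeing information",
    "general factual research",
]

result = classifier(query, candidate_labels=candidate_labels)

# The pipeline returns labels sorted by score; if a crisis label ranks highest,
# a search system could show hotline resources alongside the usual results.
top_label, top_score = result["labels"][0], result["scores"][0]
if top_label == "seeking help in a personal crisis" and top_score > 0.5:
    print("Show crisis-support resources (e.g., a local hotline) at the top.")
else:
    print(f"Treat as an ordinary query (top label: {top_label}, score: {top_score:.2f})")
```

Because the classifier only needs the label descriptions themselves, new categories such as domestic violence or substance abuse could be added by editing the list of candidate labels, which is the practical appeal of zero-shot approaches for this kind of routing.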
“MUM is able to help us understand longer or more complex queries like: ‘Why did he attack me when I said I don't love him.’ It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI,” said Merritt.
While Google touts MUM’s ability to understand more nuanced human communication, it also said its BERT model has helped reduce the proportion of shocking or unexpected search results by 30% over the past year. “It’s been especially effective in reducing explicit content for searches related to ethnicity, sexual orientation and gender, which can disproportionately impact women and especially women of color,” the company said.
However, the company said those improvements only affect Google web, image and video search results, not the autocomplete feature of its search engine. That autocomplete function, which suggests ways to complete a search query as someone types it, has drawn criticism over the years for surfacing misinformation and racist or sexist suggestions.