Hey, Siri and Alexa: Your Chinese competitors are developing something you don't have. Opinions.
Xiaoice, a Chinese chatbot developed by Microsoft in 2014, is different from other AI assistants. Its makers imbued it with a distinct personality: an 18-year-old girl who can sing, dance, write poetry and paint. The team behind Xiaoice has often said it's an AI with emotional intelligence.
But since becoming independent from Microsoft in July 2020, the company, also called Xiaoice, has been aiming at much more than that. With Xiaoice integrated into almost all Chinese-brand smartphones, the company is training its algorithms on data from 758 million online users. Its AI framework has enabled users to create over 6 million "virtual boyfriends/girlfriends" — customized "AI beings" that can have completely different personalities from the original Xiaoice. The company also said its annual revenue had exceeded $15 million, mostly from Xiaoice's enterprise products, like in-car assistants and financial news-gathering services.
In an interview with Protocol, Li Di, CEO of Xiaoice and former deputy director of Microsoft's Search Technology Center Asia, talked about why it's important for AI beings to have opinions, what advantages Chinese companies enjoy relative to American peers and why he doesn't think government should regulate his industry.
This interview has been edited for length and clarity.
Xiaoice was developed mainly in China. What advantages do Chinese AI companies have that their American counterparts don't?
Chinese companies are really good at finding new business models and operating models. That's because China has a great [market]. It's got depth — from top-tier cities to sixth-tier cities, there are vertical differences between these scenarios. It also has sheer volume [of people]. So usually, as long as you are going in the right direction and moving fast enough, it's not hard to find a business model that works well.
But sometimes, because it's so easy, you don't have enough motivation to keep innovating. Why do I need to be original? Why don't I just get on the streets and ask everyone to sign up for our services? Xiaoice is a Chinese company now, but in this sense we are still keeping our old culture. We want to do original things.
Xiaoice is operating in overseas markets as well, like in Indonesia and Japan. Has Xiaoice ever felt any geopolitical pressure?
Not really. We have good relationships with the markets we are in. As for concerns about data security, those don't only arise between China and the United States. China and Japan also have different views on that. But it's not a problem for us because we've been working on localization from the get-go.
Why do you think AI assistants need to have high "emotional intelligence"?
We've always said chatbots need to have EQ, but that's the romantic way of saying it. From the technological perspective, the difference is whether the system focuses on handling a fragment of the conversation [like a single question] or the whole session of the conversation.
When human beings are interacting with AI beings, they aren't just talking about facts, but also opinions.
What are facts? Facts are when I ask [the chatbot] how tall the Himalayas are, or can you get me an Uber. But when I ask it whether it likes Donald Trump, that's an opinion. Do you like this song by Taylor Swift? That's also an opinion. An AI being has to have opinions. It can't both like and dislike the same thing.
Opinions about a Taylor Swift song probably won't impact whether I want to continue talking with the person or not, but for people with strong political beliefs, opinions about, say, Donald Trump can make or break a conversation. What can Xiaoice do if users dislike her opinions?
When you realize this AI being has different opinions from you, you turn around and find another AI being that thinks the same as you. Just like in real life. All these AI beings can be created with this [Xiaoice] framework.
Xiaoice also has many enterprise products now, like news briefs about the financial market or clothing-pattern generators. Isn't there a tension between those non-human services and the personality of Xiaoice?
There's no tension. Enterprise products also need personalities. Yes, it isn't mandatory for providing services, but the more personality, the better.
How do you know Xiaoice won't develop to a point where the technology could go wrong, like in the "Black Mirror" episode where the character made an AI clone of her deceased boyfriend?
We've long had technology that, from audio samples of just a few dozen sentences, can create a replica of your voice. But we have insisted on not making access to this technology public. Why? Because if you open it up to everyone, one day an AI bot will call your mother and say you were taken away by the police. [Editor's note: This is a common type of telephone scam in China.] The only way to reduce the risk is for us to foresee the ethical boundaries of these technologies and not cross them at all.
Frankly speaking, as AI technologies continue to improve and get increasingly deceptive, I'm not very optimistic about the future. The industry doesn't have enough self-discipline.
Is self-regulation the only solution?
For a private company, that's the only solution, from my perspective. Of course you can call for more regulation, but will more regulation bring about a better result than self-discipline? I'm skeptical about that. Regulations coming from any government or any industry association tend to be overly restrictive. They tend to tell you not to develop this and not to cause trouble, instead of showing you how to do it the right way.