To make AR glasses useful, we may have to forget everything we think we know about virtual assistants.
Facebook gave the public a sneak peek Thursday at some of the AR technology it is developing in its labs, showing off wristbands that detect electrical signals traveling through the muscles to your hand in order to measure finger movements down to the millimeter. It's impressive technology, and it could one day be used to control AR glasses without the need for voice commands or finger tracking.
But Facebook researchers argued this week that a new input mechanism can only be part of reinventing human-computer interaction for AR. Just as important, they said, is an interface that does away with the metaphors used by desktop computers and mobile phones, and instead more proactively responds to the needs of the user.
"It will use personalized AI," said Facebook Reality Labs director of research science Sean Keller. "This is an interface that places us at the center of the computing experience."
In a conversation with Protocol, Facebook's VP of consumer hardware Andrew Bosworth elaborated on what it will take to build such an interface. And ultimately, he said, AR may require us to completely rethink the idea of a personal assistant.
This interview has been edited and condensed for clarity.
Facebook's AR/VR researchers have described the technology you are developing for your wristbands as an "intelligent click": an interface that can easily be controlled by small finger movements or even the intention to move a finger because it anticipates and adapts to the situational context. This sounds like you are building a kind of assistant, except one that doesn't have a name or personality?
You're hitting on something really profound here: This is not the idea of the assistant as this embodied smart person who can go do your research for you. We're talking about a contextually aware system that has a sense that if you say "Where [are] my keys," and you're about to leave the house, you're probably looking for your car keys. But if you're about to come into the house, you're probably looking for your house keys.
It's really a question of artificial intelligence and sensors and awareness. The more context a machine has, the more efficiently I can communicate my intentions.
When I'm telling my 6-year-old to clean up the kitchen, I have to be wildly specific: pick up that thing off the floor, clean that. If my wife is telling me to clean up the kitchen, that's all she has to say. I understand broadly the concepts. So the amount that I have to communicate, and the specificity required, are inversely proportional to the amount of context the machine can bring to bear.
With other humans, the context that they have implicitly is massive, and we can communicate efficiently with few words and gestures. My wife and I can have a whole conversation, without our kids knowing, in a glance. That's based on context that we share.
To have someone develop that kind of relationship with a machine is today within the bounds of science fiction. Eventually, we don't think it will be. When you've got a machine that has sensors, that can see and hear and learn, that could work. It's a very important area of exploration, but it's very much research.
You said that you're inviting feedback on this technology, on the societal impact and possible privacy concerns. My guess is that people will focus on the neural interface of the wristband itself, and liken it to Facebook reading your mind. It seems to me that the conversations around this ability to understand context are going to be a lot more difficult?
I think that's absolutely right. Anything that's happening on neural interfaces, it's really the same as a finger moving. It is the same as you pressing the "R" key. It's an intentional input to the machine. The harder conversations that we're trying to start having in public, with things like Project Aria, which we announced at Facebook Connect, are the ones around: When you have always-on compute, always-on sensors in the world, what is appropriate?
What's appropriate to collect, what's appropriate to store on device or process on device? What can you send to the server? Those are really the profound questions, the biggest questions of our age.
We have been gearing up to have a very public conversation about face recognition. Can we recognize faces? Can we cluster faces without recognizing them? Can we even say: "Hey, there's a face here?" Right now, we are on very uncertain ground as a society. That uncertainty is reflected in the very uneven patchwork of laws and regulations.
We should figure out what it's going to be. And whatever the answer is, I'm fine with it. I don't want to decide. I would like very much for the public to communicate, decide, arrive at some consensus and then go from there.
Going back to this idea of assistive technology that's not a personal assistant: Do you think Facebook is in a different position here than some of its competitors, because you don't have the legacy of an Alexa that people are used to talking to?
We've had our assistant for a while. First, it was just for Portal. Now it's the "Hey, Facebook" version. So I think there actually is room for both concepts, contextual assistance versus these kinds of personified assistants. The personified assistants, yeah, you want them to feel like they have agency and intelligence, and they can go work on your behalf. The contextual assistant is much more imperative. It's not like an intelligent thing that you can ask to do things. It's actually the system itself adapting.
The history of user interfaces that adapt is not good. Generally speaking, we as a species have preferred an interface that, as inefficient or terrible as it may be, doesn't move. I had the ignominy of working on the ribbon at Microsoft Office, which was when all the commands in the toolbar would move based on the frequency of use. People hated it. Interfaces that move around are hard to learn. It's a huge challenge here, and that's why it's a research problem. It may end up being the case that the right thing to do is just to insist on more specificity from the consumer for a while.
From the way Facebook execs have talked about future AR devices, it sounds like you are generally focused on a more assistive approach to augmented reality: fewer flashy holograms, more utility. Is that the direction you think this technology is heading?
I think it's important to be useful, even with VR. And I think entertainment is useful. People have downtime, they want to be entertained. But it's only one of many things that we do. And I really worry sometimes that these technologies are getting pigeonholed as novelties, or toys, when the potential is so much more profound than that.
I've been thrilled to see fitness taking off in VR. I know that fitness is adjacent to gaming, but it's a little different. It's reaching a different audience, and it's going mainstream.
With AR, we take the same approach. There can be tons of fun things to do in AR, and I can't wait for those things. It's just not the only thing that we're doing. Why would you put this on and wear it around the world if all you're going to do is entertain yourself with it? If that's all you have, then just use VR.