Facebook’s Andrew Bosworth on building AR assistants you won’t hate

The company says the assistants of the future need to be more aware of what's happening around them. But how far should the technology go?

Andrew Bosworth

Andrew "Boz" Bosworth says we need to think very differently about how assistants will work in AR.

Photo: Christian Charisius/Getty Images

To make AR glasses useful, we may have to forget everything we think we know about virtual assistants.

Facebook gave the public a sneak peek at some of the AR technology it is developing in its labs Thursday, showing off wristbands that detect electric signals traveling through the muscles to your hand in order to measure finger movements down to the millimeter. This is impressive technology, and could one day be used to control AR glasses without the need for voice commands or finger tracking.

But Facebook researchers argued this week that a new input mechanism can only be part of reinventing human-computer interaction for AR. Just as important, they said, is an interface that does away with the metaphors used by desktop computers and mobile phones, and instead more proactively responds to the needs of the user.

"It will use personalized AI," said Facebook Reality Labs director of research science Sean Keller. "This is an interface that places us at the center of the computing experience."

In a conversation with Protocol, Facebook's VP of consumer hardware Andrew Bosworth elaborated on what it will take to build such an interface. And ultimately, he said, AR may require us to completely rethink the idea of a personal assistant.

This interview has been edited and condensed for clarity.

Facebook's AR/VR researchers have described the technology you are developing for your wristbands as an "intelligent click": an interface that can easily be controlled by small finger movements or even the intention to move a finger because it anticipates and adapts to the situational context. This sounds like you are building a kind of assistant, except one that doesn't have a name or personality?

You're hitting on something really profound here: This is not the idea of the assistant as this embodied smart person who can go do your research for you. We're talking about a contextually aware system that has a sense that if you say "Where [are] my keys," and you're about to leave the house, you're probably looking for your car keys. But if you're about to come into the house, you're probably looking for your house keys.

It's really a question of artificial intelligence and sensors and awareness. The more context a machine has, the more efficient I can be with my intentions.

When I'm telling my 6-year-old to clean up the kitchen, I have to be wildly specific: pick up that thing off the floor, clean that. If my wife is telling me to clean up the kitchen, that's all she has to say. I understand broadly the concepts. So the amount that I have to communicate, and the specificity, is inversely proportional to the amount of context the machine can bring to bear.

With other humans, the context that they have implicitly is massive, and we can communicate efficiently with few words and gestures. My wife and I can have a whole conversation, without our kids knowing, in a glance. That's based on context that we share.

To have someone develop that kind of relationship with a machine is today in the bounds of science fiction. Eventually, we don't think it will be. When you've got a machine that has sensors, that can see and hear and learn, that could work. It's a very important area of exploration, but it's very much research.

You said that you're inviting feedback on this technology, on the societal impact and possible privacy concerns. My guess is that people will focus on the neural interface of the wristband itself, and liken it to Facebook reading your mind. It seems to me that the conversations around this ability to understand context are going to be a lot more difficult?

I think that's absolutely right. Anything that's happening on neural interfaces, it's really the same as a finger moving. It is the same as you pressing the "R" keystroke. It's an intentional input to the machine. The harder conversations that we're trying to start having in public with things like Project Aria, which we announced at Facebook Connect, are the ones around: When you have always-on compute, always-on sensors in the world, what is appropriate?

What's appropriate to collect, what's appropriate to store on device or process on device? What can you send to the server? Those are really the profound questions, the biggest questions of our age.

We have been gearing up to have a very public conversation about face recognition. Can we recognize faces? Can we cluster faces without recognizing them? Can we even say: "Hey, there's a face here?" Right now, we are on very uncertain ground as a society. That uncertainty is reflected in the very uneven patchwork of laws and regulations.

We should figure out what it's going to be. And whatever the answer is, I'm fine with it. I don't want to decide. I would like very much for the public to communicate, decide, arrive at some consensus and then go from there.

Going back to this idea of assistive technology that's not a personal assistant: Do you think Facebook is in a different position here than some of its competitors, because you don't have the legacy of an Alexa that people are used to talking to?

We've had our assistant for a while. First, it was just for Portal. Now it's the "Hey, Facebook" version. So I think there actually is room for both concepts, contextual assistance versus these kinds of personified assistants. The personified assistants, yeah, you want them to feel like they have agency and intelligence, and they can go work on your behalf. The contextual assistant is much more imperative. It's not like an intelligent thing that you can ask to do things. It's actually the system itself adapting.

The history of user interfaces that adapt is not good. Generally speaking, we as a species have preferred an interface that, as inefficient or terrible as it may be, doesn't move. I had the ignominy of working on the ribbon at Microsoft Office, which was when all the commands in the toolbar would move based on the frequency of use. People hated it. Interfaces that move around are hard to learn. It's a huge challenge here, and that's why it's a research problem. It may end up being the case that the right thing to do is just to insist on more specificity from the consumer for a while.

From the way Facebook execs have talked about future AR devices, it sounds like you are generally focused on a more assistive approach toward augmented reality. Fewer flashy holograms, more utility. Is that the direction you think this technology is going?

I think it's important to be useful, even with VR. And I think entertainment is useful. People have downtime, they want to be entertained. But it's only one of many things that we do. And I really worry sometimes that these technologies are getting pigeonholed as novelties, or toys, when the potential is so much more profound than that.

I've been thrilled in VR to have fitness taking off. I know that fitness is adjacent to gaming, but it's a little different. It's reaching a different audience, and it's going mainstream.

With AR, we take the same approach. There can be tons of fun things to do in AR, and I can't wait for those things. It's just not the only thing that we're doing. Why would you put this on and wear it around the world if all you're going to do is entertain yourself with it? If that's all you have, then just use VR.
