Facebook’s Andrew Bosworth on building AR assistants you won’t hate

The company says the assistants of the future need to be more aware of what's happening around them. But how far should the technology go?

Andrew "Boz" Bosworth thinks you need to think very differently about how assistants will work in AR.

Photo: Christian Charisius/Getty Images

To make AR glasses useful, we may have to forget everything we think we know about virtual assistants.

Facebook gave the public a sneak peek at some of the AR technology it is developing in its labs Thursday, showing off wristbands that detect the electrical signals traveling through the muscles to your hand in order to measure finger movements down to the millimeter. It's impressive technology, and it could one day be used to control AR glasses without the need for voice commands or finger tracking.
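To make the sensing pipeline more concrete, here is a minimal, purely illustrative sketch of how windows of multi-channel wrist EMG data might be turned into gesture predictions. Facebook has not published its decoder; the channel count, window size, features and nearest-centroid classifier below are all assumptions for illustration.

```python
# Illustrative sketch only: a toy decoder for surface-EMG wristband data.
# The channel count, window size and gesture set are hypothetical, not
# Facebook's actual pipeline.
import numpy as np

SAMPLE_RATE_HZ = 2000   # surface EMG is typically sampled in the kHz range
WINDOW_MS = 100         # classify short windows of the signal
N_CHANNELS = 16         # electrodes around the wrist

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square energy per electrode channel.

    window: (samples, channels) array of raw EMG voltages.
    """
    return np.sqrt(np.mean(np.square(window), axis=0))

def classify(window: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Nearest-centroid gesture classification against calibrated templates."""
    feats = rms_features(window)
    return min(templates, key=lambda g: np.linalg.norm(feats - templates[g]))

# A real system would calibrate templates from labeled recordings per user;
# here two fake templates just show the shape of the data.
rng = np.random.default_rng(0)
templates = {
    "index_click": rng.random(N_CHANNELS),
    "rest": np.zeros(N_CHANNELS),
}
window = rng.normal(0, 0.1, (SAMPLE_RATE_HZ * WINDOW_MS // 1000, N_CHANNELS))
print(classify(window, templates))
```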

But Facebook researchers argued this week that a new input mechanism can only be part of reinventing human-computer interaction for AR. Just as important, they said, is an interface that does away with the metaphors used by desktop computers and mobile phones, and instead more proactively responds to the needs of the user.

"It will use personalized AI," said Facebook Reality Labs director of research science Sean Keller. "This is an interface that places us at the center of the computing experience."

In a conversation with Protocol, Facebook's VP of consumer hardware Andrew Bosworth elaborated on what it will take to build such an interface. And ultimately, he said, AR may require us to completely rethink the idea of a personal assistant.

This interview has been edited and condensed for clarity.

Facebook's AR/VR researchers have described the technology you are developing for your wristbands as an "intelligent click": an interface that can easily be controlled by small finger movements or even the intention to move a finger because it anticipates and adapts to the situational context. This sounds like you are building a kind of assistant, except one that doesn't have a name or personality?

You're hitting on something really profound here: This is not the idea of the assistant as this embodied smart person who can go do your research for you. We're talking about a contextually aware system that has a sense that if you say "Where [are] my keys," and you're about to leave the house, you're probably looking for your car keys. But if you're about to come into the house, you're probably looking for your house keys.

It's really a question of artificial intelligence and sensors and awareness. The more context a machine has, the more efficient I can be with my intentions.

When I'm telling my 6-year-old to clean up the kitchen, I have to be wildly specific: Pick up that thing off the floor, clean that. If my wife is telling me to clean up the kitchen, that's all she has to say, because I understand the concepts broadly. So the amount that I have to communicate, and the specificity of it, are inversely proportional to the amount of context the machine can bring to bear.

With other humans, the context that they have implicitly is massive, and we can communicate efficiently with few words and gestures. My wife and I can have a whole conversation, without our kids knowing, in a glance. That's based on context that we share.

To have someone develop that kind of relationship with a machine is, today, within the bounds of science fiction. Eventually, we don't think it will be. When you've got a machine that has sensors, that can see and hear and learn, that could work. It's a very important area of exploration, but it's very much research.
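Bosworth's keys example is easy to sketch in code. The following is a minimal illustration of context-based disambiguation, not Facebook's system; the context signals and rules are invented for this example.

```python
# Illustrative sketch: resolving an ambiguous request from situational
# context. The context fields and rules are invented, not a real API.
from dataclasses import dataclass

@dataclass
class Context:
    inside_house: bool          # e.g. inferred from location or beacons
    heading_toward_door: bool   # e.g. inferred from indoor position tracking

def resolve_keys_request(ctx: Context) -> str:
    """Same utterance, different answer, depending on context."""
    if ctx.inside_house and ctx.heading_toward_door:
        return "car keys"       # about to leave: probably the car keys
    if not ctx.inside_house:
        return "house keys"     # coming home: probably the house keys
    return "keys"               # not enough context: fall back to asking

print(resolve_keys_request(Context(inside_house=True, heading_toward_door=True)))
# -> car keys
```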

You said that you're inviting feedback on this technology, on the societal impact and possible privacy concerns. My guess is that people will focus on the neural interface of the wristband itself, and liken it to Facebook reading your mind. It seems to me that the conversations around this ability to understand context are going to be a lot more difficult?

I think that's absolutely right. Anything that's happening on neural interfaces is really the same as a finger moving. It's the same as you pressing the "R" key. It's an intentional input to the machine. The harder conversations that we're trying to start having in public, with things like Project Aria, which we announced at Facebook Connect, are the ones around: When you have always-on compute, always-on sensors in the world, what is appropriate?

What's appropriate to collect, what's appropriate to store on device or process on device? What can you send to the server? Those are really the profound questions, the biggest questions of our age.

We have been gearing up to have a very public conversation about face recognition. Can we recognize faces? Can we cluster faces without recognizing them? Can we even say, "Hey, there's a face here"? Right now, we are on very uncertain ground as a society. That uncertainty is reflected in the very uneven patchwork of laws and regulations.

We should figure out what it's going to be. And whatever the answer is, I'm fine with it. I don't want to decide. I would like very much for the public to communicate, decide, arrive at some consensus and then go from there.

Going back to this idea of assistive technology that's not a personal assistant: Do you think Facebook is in a different position here than some of its competitors, because you don't have the legacy of an Alexa that people are used to talking to?

We've had our assistant for a while. First, it was just for Portal. Now it's the "Hey, Facebook" version. So I think there actually is room for both concepts, contextual assistance versus these kinds of personified assistants. The personified assistants, yeah, you want them to feel like they have agency and intelligence, and they can go work on your behalf. The contextual assistant is much more imperative. It's not like an intelligent thing that you can ask to do things. It's actually the system itself adapting.

The history of user interfaces that adapt is not good. Generally speaking, we as a species have preferred an interface that, as inefficient or terrible as it may be, doesn't move. I had the ignominy of working on the ribbon in Microsoft Office, back when all the commands in the toolbar would move based on frequency of use. People hated it. Interfaces that move around are hard to learn. It's a huge challenge here, and that's why it's a research problem. It may end up being the case that the right thing to do is just to insist on more specificity from the consumer for a while.
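The adaptive-toolbar behavior Bosworth describes can be sketched in a few lines. This is a generic illustration of frequency-based reordering, not Microsoft's actual ribbon logic; it simply shows how such an interface "moves" underneath the user as usage shifts.

```python
# Illustrative sketch of a frequency-adaptive toolbar: commands are
# reordered by how often they are used. The rearrangement is exactly
# what makes such interfaces hard to learn.
from collections import Counter

class AdaptiveToolbar:
    def __init__(self, commands: list[str]):
        self.commands = commands
        self.usage = Counter()

    def invoke(self, command: str) -> None:
        self.usage[command] += 1

    def layout(self) -> list[str]:
        # Most-used commands float to the front; ties keep the original
        # order, so the layout shifts every time usage patterns change.
        return sorted(self.commands, key=lambda c: -self.usage[c])

toolbar = AdaptiveToolbar(["cut", "copy", "paste", "bold"])
for _ in range(3):
    toolbar.invoke("paste")
print(toolbar.layout())  # -> ['paste', 'cut', 'copy', 'bold']
```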

From the way Facebook execs have talked about future AR devices, it sounds like you are generally focused on a more assistive approach to augmented reality: fewer flashy holograms, more utility. Is that the direction you think this technology is going?

I think it's important to be useful, even with VR. And I think entertainment is useful. People have downtime, they want to be entertained. But it's only one of many things that we do. And I really worry sometimes that these technologies are getting pigeonholed as novelties, or toys, when the potential is so much more profound than that.

I've been thrilled to see fitness taking off in VR. I know that fitness is adjacent to gaming, but it's a little different. It's reaching a different audience, and it's going mainstream.

With AR, we take the same approach. There can be tons of fun things to do in AR, and I can't wait for those things. It's just not the only thing that we're doing. Why would you put this on and wear it around the world if all you're going to do is entertain yourself with it? If that's all you have, then just use VR.
