Facebook saw the future of AR, and put it on your wrist

Protocol Next Up

Good morning, and welcome to Protocol Next Up. This week's edition is about Facebook's work on a wristband to control your AR glasses and Amazon's patent of AI voice-dubbing technology.

And a quick reminder: Our live event on the state of VR in 2021 is happening today! Tune in at 10 a.m. PT for insights from Baobab Studios CEO Maureen Fan, HP global head of virtual reality for location-based entertainment Joanna Popper and Survios co-founder and president Nathan Burba.

(Was this email forwarded to you? Sign up here to get Next Up every week.)

The Big Story

Facebook is developing a wristband for AR glasses

The future of computing may be on your wrist: Facebook is working on wristbands capable of measuring electrical signals sent from the brain to the hand to control its future AR glasses products.

Researchers from the company's AR/VR arm, Facebook Reality Labs, stressed this week that it may take years before any of this makes it into a finished product, time they want to use to get feedback on the work. "We want to be transparent about what we are working on," said Sean Keller, FRL's research science director. "We want to open up an important discussion with the public about how to build these technologies responsibly."

There simply isn't a good existing interaction model for AR glasses. Hand tracking would technically work, but it's not something you'd necessarily want to do on the subway. "That's a little weird if I am in public," Facebook's VP of consumer hardware, Andrew Bosworth, told Protocol. The same is true for voice commands. "There's lots of places that I want to use my face computer that I don't want to use my voice," he said.

Another reason has to do with the ability to tap into a lot of data. "You have more neurons of your brain dedicated to controlling your wrist than any other part of your body," said FRL's neuromotor interfaces director, Thomas Reardon.

  • At the center of Facebook's research work are wristbands capable of measuring the electrical activity of nerves connected to one's muscles, a process also known as electromyography. For this, the company relies on technology developed by Ctrl-Labs, a neural interface startup it acquired in 2019.
  • The wristbands under development are capable of detecting movements of hand muscles down to the millimeter, and can even detect nerve signals when you are just thinking about moving a finger. Researchers shared a video that included a test participant whose hand was missing multiple fingers; wearing one of these special wristbands, he was able to move virtual representations of those missing fingers on screen, one by one.
  • In addition to using wristbands as input devices, Facebook's researchers are also exploring them as a way to provide haptic feedback via a variety of mechanisms. "We created about a dozen prototypes" for wristbands with haptic feedback, revealed FRL research science manager Nicholas Colonnese. "Some of them are incredible, most of them not so much."

"What we're talking about right now is a research device," Bosworth said. "The eventual consumer product that you want to have is a wristband that is comfortable for all-day wear, that hopefully doesn't look too different from the types of watches that people are already comfortable with."

That may still be a few years down the road, said Facebook CTO Mike Schroepfer. "We don't have a timetable today."

"It's moving fast, that's the good news," Reardon said.

Building a wristband is only one part of the puzzle in Facebook's quest to reinvent interaction for AR glasses. The other is a whole new type of AR assistant. Read more about that in my interview with Andrew Bosworth.


"The irrational spending by most streaming services is 🤯. This is not sustainable. But does the street understand that?" —Tubi CEO Farhad Massoudi, tweeting about the billions of dollars newcomers like Peacock and Discovery+ are spending on content.

"If you want to really compete with the depth and breadth of content that it takes to compete on a global platform in a global sense, yeah, you need more consolidation." —Former Disney exec Kevin Mayer, discussing whether media companies like Discovery, WarnerMedia and ViacomCBS can be successful on their own in a streaming-first world.



Section 230 of the Communications Decency Act is the most-discussed and least-understood law governing the modern internet. This event will delve into the future of Section 230 and how to change the law without compromising the internet as we know it. Join Protocol's Emily Birnbaum and Issie Lapowsky in conversation with Senator Mark Warner. This event is presented by Internet Association.

RSVP for this event.

Watch Out

Amazon wants to use AI to automate dubbing

Amazon Prime Video content could soon be available in many more languages, if a recently published patent is any indication. Granted last month, the patent titled "Automatic voice dubbing for media content localization" outlines ways to use AI for automatic dubbing, complete with computer-generated voices that sound like those of the original actors.

Dubbing is important for companies like Amazon, as the patent itself mentions: "Having video content with localized voices, rather than only having localized text subtitles, significantly impacts the global reach of streaming video service providers."

But it's also expensive and can slow down localization efforts considerably. "Content studios spend significant amounts of time and money every year to generate localized voices for video media content," the patent notes. "Dubbing is an especially challenging step in the media content localization process for the movie industry, since the process of dialogue translation relies on experts in the targeted localization language."

Neural networks could help automate this process and produce dubbing tracks that aren't just reflective of the local language, but actually sound like the original talent. Amazon envisions using other movies starring the same actors as training material to get the sound right, and may use multiple deep learning techniques in concert to fine-tune the results.

There is some precedent for this approach:

  • Tel Aviv-based AI startup Deepdub, which came out of stealth in December, has been using AI for automated dubbing as well. Hollywood insiders have been impressed by the technology, according to Deepdub CMO Oz Krakowski: "People's jaws are dropping to the floor."
  • Amazon has been using AI voice creation for its Alexa smart assistant, allowing users of its Echo smart speaker to replace the default Alexa voice with that of Samuel L. Jackson.
  • Amazon's own AI researchers published a paper on automated dubbing last year. However, that paper used the automated translation of TED talks as an example, and the researchers didn't actually attempt to make the speaker sound the same in every language. Instead, they simply tried to re-create the audio atmosphere of a video, and make it sound like the translated speech was performed on the same TED stage.

The patent awarded in February makes far more specific references to the world of streaming, and predicts that this technology could cut the time needed to dub a movie to "hours, rather than weeks/months."

Now for the caveats: Companies regularly patent all kinds of things, and many of those patents never result in actual products. And even if Amazon did decide to productize AI dubbing, it's not clear that Amazon Prime Video would be the primary use case. The company could just as plausibly offer the technology to media companies that already use Amazon's cloud services, which would also let Amazon test and refine it before putting it to work on its own crown jewels.

Then again, the patent does prominently mention the Tom Cruise movie "The Last Samurai" as an example, explaining how AI could make the Japanese dub sound like it was spoken by Cruise himself. "The Last Samurai" happens to be a movie that's actually available for rent on Amazon Prime Video — albeit, at least for now, with traditional dubbing.

Fast Forward

  • The Xbox is losing its live TV guide. Many years ago, Microsoft had ambitions to turn its game console business into a media play. Now, it's getting rid of one of the last relics of that era.
  • On Protocol: Google's new Nest Hub includes a radar for your bedroom. The smart display makes use of Google's Soli tech to monitor sleep.
  • The climate impact of binge watching. Netflix and the University of Bristol have teamed up to calculate the carbon footprint of streaming.
  • A closer look at "Dr. Crumb's School for Disobedient Pets." Adventure Lab has big ambitions for its VR escape room.
  • Netflix would like you to stop sharing your password. The streaming service is testing alerts that encourage people to sign up for their own accounts.
  • Apple discontinues the original HomePod speaker. Turns out Big Tech isn't actually that good at making and selling big, expensive speakers.
  • MoviePass is teasing … something? A website associated with the film financing arm of the defunct theater subscription service started to display a countdown timer this week. Will we see a return of MoviePass? Or just a first trailer for the MoviePass documentary? We may learn more early next week.

Auf Wiedersehen

Google's Arts & Culture group has released a project called AR Synth that lets you make music with classic synthesizers and drum machines in augmented reality. It's fun, it's free and it finally gives me a chance to pretend I'm a member of Kraftwerk. I got the accent down already; now all I need is a red button-down shirt.

Thanks for reading — see you next week!
