Power

How Facebook prepared for the next ‘glasshole’ backlash

Facebook's new Ray-Ban Stories smart glasses are poised to provoke strong reactions, but execs believe the company can convince the world of the device's benefits.


Facebook's Ray-Ban Stories smart glasses are equipped with two cameras and an LED light designed to indicate ongoing video recording.

Photo: Facebook

A face computer that can take pictures and videos of innocent bystanders, made by one of the world's most powerful companies and worn by rich techies with little regard for the people around them: When Google launched Google Glass in 2013, the blowback was brutal and relentless, ultimately dooming the product.

Now, Facebook is giving the camera-in-your-face idea another shot with its new Ray-Ban Stories glasses, and Facebook Reality Labs head Andrew Bosworth is ready for the inevitable backlash. "I don't fear the controversy," he told Protocol during an interview this week.

One reason for Bosworth's confidence is that the company did its homework, which included working with civil rights groups to make the glasses more privacy friendly. But Bosworth also knows that the tech industry as a whole is moving toward AR devices, with Microsoft, Magic Leap, Snap, Apple and even Google all once again working on their own headgear. Cameras will be an inevitable part of these devices, and Ray-Ban Stories give Facebook a chance to learn early on.

"You can't really get that feedback until you're in a market with it," Bosworth said.

Ray-Ban Stories began selling to consumers in the U.S. and a handful of other markets Thursday. The glasses are equipped with two 5-megapixel cameras that can be used to take pictures as well as 30-second video clips. Built-in open-ear audio makes it possible to use the glasses as a headphone replacement, and three beamforming microphones enable Bluetooth phone calls as well as a pared-down version of the Facebook Assistant, which can snap pictures and start video captures hands-free.

Stories are based on classic Ray-Ban frame designs and sell with a variety of lenses and colors, for a total of 20 unique SKUs and a starting price of $300. The two companies had originally planned to introduce even more variations, but nixed some colors when a lack of contrast made it hard to see a front-facing LED that indicates active video recording to bystanders.

"If we had to make a priority between style and privacy, privacy came first," said Matteo Battiston, head of design for Ray-Ban maker EssilorLuxottica, which struck a multiyear partnership with Facebook to build smart eyewear.

A big emphasis on privacy is necessary for the product to succeed, said Facebook Reality Labs Policy Director James Hairston. "At the end of the day, people won't feel comfortable wearing them if they don't feel in control of their privacy," he said. This includes clear rules for how content is stored and shared: Photos and videos captured by Ray-Ban Stories are encrypted on the device, and can only be offloaded with a standalone app that is tied to an owner's Facebook account. None of the content is shared automatically; users have to specifically pick footage to share to third-party apps, including Facebook and Instagram.

The company also consulted with outside groups like the Information Technology and Innovation Foundation and the National Network to End Domestic Violence on both the hardware design and user education around privacy and appropriate use. "As we work to normalize these smart glasses in everyday life, we have both a big opportunity and a big responsibility to help establish norms around wearables in an open and inclusive way," Hairston said. "We know we can't do this in a silo."

Those outside experts gave input on the color and functionality of the front-facing LED, which is hard-wired to the camera to prevent tampering. Facebook also included a physical on-off switch that lets wearers disable recording when they enter their gym's locker room or other sensitive areas. The company launched a dedicated website that explains the device's privacy features and offers guidance on responsible usage.

That's not to say that Facebook doesn't expect abuse, however. "People will try to tamper with devices, and there will be things that we haven't anticipated," Hairston said. That's why Facebook designed some of the device's core functionality to be obvious to bystanders. To take a photo or start video capture, people either have to audibly invoke the Facebook Assistant, or reach up to their temple and press the shutter button.

"This was the number one area of user research from the very first prototype," Bosworth said. "The standard I gave the team was very clear: It has to be more overt than a photo I take with my phone."

Talk to executives at Facebook about Ray-Ban Stories, and you'll hear them compare it to phones a lot. They will make the case that the glasses are less prone to abuse than the phones people are already using, and point to the fact that social norms around smartphone usage, including photography and video recording, have evolved over time.

They will also stress that smart glasses may be able to mitigate some of the downsides of mobile technology, with Bosworth relaying how the glasses allowed him to take videos of his kids without being forced to stare at the screen, unable to participate in the scene unfolding in front of him. "Today, [I am] forced to make a choice between my phone and the world around me," he said. "With these glasses, I'm able to do both."

The comparison to the phone is telling for another reason: Facebook missed the boat on mobile platforms, and failed to make its own phone. AR holds the promise of a do-over, of becoming the next big thing after the phone. Getting some real-world feedback on the dos and don'ts of headworn cameras early could be a massive competitive advantage for Facebook and its partners on this journey — even if it is poised to provoke some initial backlash.

"We build our trust with the consumer one frame at a time," Battison said.

Protocol | Enterprise

Startups are pouncing as SaaS giants struggle in the intelligence race

Companies like Salesforce and Workday spent the last two decades building walled gardens around their systems. Now, it's a mad dash to make those ecosystems more open.

Companies want to predict the future, and "systems of intelligence" might be their best bet.

Image: Yuichiro Chino / Getty Images

Take a look at any software vendor's marketing materials and you're sure to see some variation of the word "intelligence" splattered everywhere.

It's part of a tectonic shift happening within enterprise technology. Companies spent the last several years moving their systems to the internet and, along the way, rapidly adopting new applications.

Protocol | Workplace

The hottest new perk in tech: A week off for burnout recovery

In an industry where long hours are a "badge of honor," a week of rest may be the best way to retain talent.

Tech companies are giving their employees a week to rest and recover from burnout.

Photo: Kinga Cichewicz/Unsplash

In early May, the founder of Lessonly, a company that makes training software, sent out a companywide email issuing a mandate to all employees. But it wasn't the sort of mandate employees around the world have been receiving related to vaccines and masks. This mandate required that every worker take an entire week off in July.

The announcement took Lessonly's staff by surprise. "We had employees reach out and share that they were emotional, just thankful that they had the opportunity to do this," said Megan Jarvis, who leads the company's talent team and worked on planning the week off.

Power

Chip costs are rising. How will that affect gadget prices?

The global chip shortage is causing component costs to go up, so hardware makers are finding new ways to keep their prices low.

Chips are getting more expensive, but most consumer electronics companies have so far resisted price increases.

Photo: Chris Hondros/Getty Images

How do you get people to pay more for your products while avoiding sticker shock? That's a question consumer electronics companies are grappling with as worldwide chip shortages and component cost increases are squeezing their bottom lines.

One way to do it: Make more expensive and higher-margin products seem like a good deal to customers.


Protocol | Policy

Laws want humans to check biased AI. Research shows they can’t.

Policymakers want people to oversee — and override — biased AI. But research offers little evidence that humans are up to the task.

The recent trend toward requiring human oversight of automated decision-making systems runs counter to mounting research about humans' inability to effectively override AI tools.

Photo: Jackal Pan/Getty Images

There was a time, not long ago, when a certain brand of technocrat could argue with a straight face that algorithms are less biased decision-makers than human beings — and not be laughed out of the room. That time has come and gone, as the perils of AI bias have entered mainstream awareness.

Awareness of bias hasn't stopped institutions from deploying algorithms to make life-altering decisions about, say, people's prison sentences or their health care coverage. But the fear of runaway AI has led to a spate of laws and policy guidance requiring or recommending that these systems have some sort of human oversight, so machines aren't making the final call all on their own. The problem is: These laws almost never stop to ask whether human beings are actually up to the job.

