Andrew Bosworth on Meta’s next big challenge: Harassment in the metaverse

As it envisions a new crop of social apps in VR and beyond, Meta has to balance safety and privacy.

Andrew Bosworth

Andrew Bosworth wants to give developers tools to fight harassment, but not police everything that people do in VR.

Photo: Glenn Chapman/AFP via Getty Images

How do you keep people safe in the metaverse? That's a question Meta, the company formerly known as Facebook, has been grappling with for some time. And the answer isn't all that simple.

The metaverse may be little more than a concept for now, but the safety problem is anything but theoretical: People regularly experience harassment in VR apps and experiences, including those running on Meta's Quest VR headset. Even the company's own employees are not immune. Earlier this year, an unnamed employee told co-workers in the company's internal Workplace forums that they had been accosted in Rec Room, with other players shouting the N-word at them without an obvious way to identify or stop the harasser. "Spoiler alert: I did not have a good time," the employee summarized.

The discussion, which became part of the public record when it was included in leaked Facebook documents supplied to Congress, shows that the problem is not isolated. One participant noted that similar cases were being raised internally every few weeks, while another said they had personally experienced harassment as well. "Multiple games have similar issues," one participant noted in the exchange.

Meta's head of consumer hardware and incoming CTO, Andrew Bosworth, told Protocol on Friday that the specific incident discussed in the leaked document could have been mitigated if the employee had made use of existing reporting tools. "The tenor of the post [is] overstated and misinformed," Bosworth said. However, he also acknowledged that the problem of harassment in VR is real. He laid out ways the company is aiming to solve it, while pointing to trade-offs between making VR spaces safe and not policing people's private conversations. "We have [to strike] a pretty tough balance between privacy and integrity," Bosworth said.

This interview has been edited and condensed for clarity.

Are your current reporting options enough to fight harassment in VR?

I think the tools that we have in place are a good start. Blocking in virtual spaces is a very powerful tool, much more powerful than it is in asynchronous spaces. We can have someone not appear to exist to you. In addition, we can do reporting. This is a little bit similar to how you think of reporting in WhatsApp. Locally, on your device, totally private and secure, [you] have a little rolling buffer of what's the activity that happened. And you can say, "I want to report it," [and] send it to the platform developer or to us.
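The "rolling buffer" Bosworth describes can be pictured as a fixed time window of recent activity kept only on the device, flushed to the platform solely when the user files a report. The sketch below is a hypothetical illustration of that idea, not Meta's implementation; the class and field names are invented for the example.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class VoiceFrame:
    speaker_id: str      # hypothetical opaque user identifier
    timestamp: float     # seconds since session start
    audio_chunk: bytes


class RollingReportBuffer:
    """Keeps only the last `window_seconds` of activity on-device.

    Nothing leaves the device unless the user explicitly files a
    report, at which point the current window is handed off and the
    local buffer is cleared.
    """

    def __init__(self, window_seconds: float = 120.0):
        self.window_seconds = window_seconds
        self._frames: deque = deque()

    def record(self, frame: VoiceFrame) -> None:
        self._frames.append(frame)
        self._evict(now=frame.timestamp)

    def _evict(self, now: float) -> None:
        # Drop anything older than the window, so the buffer never
        # accumulates a full session's worth of audio.
        cutoff = now - self.window_seconds
        while self._frames and self._frames[0].timestamp < cutoff:
            self._frames.popleft()

    def file_report(self) -> list:
        """Return the buffered window for submission and clear local state."""
        evidence = list(self._frames)
        self._frames.clear()
        return evidence
```

The privacy property lives in the eviction step: by construction, a report can never contain more than the most recent window, because older frames were already discarded on-device.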

That kind of continuous recording is something you are only testing in Horizon so far, right?

It's a first-party tool that we built. It's the kind of thing that we encourage developers to adopt, or even make it easier for them to adopt over time. And we feel good about what that represents from a standpoint of a privacy integrity trade-off, because it's keeping the incidents private until somebody chooses of their own volition to say, "This is a situation that I want to raise visibility to."

But it's also just recording audio. How much does that have to do with the technical limitations of the Quest?

It's audio plus some metadata right now, [including which] users were in the area, for example. I don't think there is a technical limitation that prevents us from doing more. We're just trying to strike a trade-off between the privacy and the integrity challenges. That's going to be an area [where] we tread lightly, make sure [tools we roll out are] really well understood before we expand them.
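The report payload he describes — the buffered audio plus minimal metadata such as who was nearby — might look roughly like the following. This is a speculative sketch of the data shape, with all field names invented for illustration.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class IncidentReport:
    """Hypothetical shape of a filed report: the buffered audio window
    plus minimal metadata, and deliberately nothing more (no location
    history, no continuous transcript)."""
    reporter_id: str
    app_id: str
    audio_window: bytes                      # contents of the local rolling buffer
    nearby_user_ids: tuple = ()              # who was in the area at the time
```

Keeping the metadata this sparse is the trade-off Bosworth is pointing at: enough context to act on a report, without turning the headset into a general surveillance device.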

You've been saying that you want to put privacy first when building new products for Meta. How does that conflict with building safe products?

Safety and privacy are highly related concepts and are both very high on our list of priorities. But, you know, even my friends say mean things to me sometimes. The path to infinite privacy is no product. The path to infinite safety is no social interaction. I don't think anyone's proposing we take these to their extremes.

The question is: What are healthy balances that give consumers control? And when you have privacy and safety trade-offs, that's super tough. The more [social VR spaces] are policed, the less privacy you're fundamentally able to ensure that people have. So it's case by case. There's not a one-size-fits-all solution on how to resolve those priorities when they compete.

You are also dealing with a space that's still very new, with a lot of VR games coming from relatively small companies. How can you help those developers fight harassment?

We want to build tools that developers can use, at the very least on our platforms. Identity is a strong example. If developers integrate our identity systems, even behind the scenes, they have a stronger ability to inherit things like blocks that suggest that two people don't want to be exposed to one another. There are tools that we can build — APIs, SDKs — that developers will be able to integrate. That's going to take time for us to build, but that's the direction we want to go in. Some of them we could potentially require for our own platform, some we would offer for those who choose to use [them].
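Inheriting blocks across apps, as Bosworth describes it, amounts to third-party titles asking a shared identity service whether two players have a block relationship before placing them in the same space. The toy sketch below illustrates that flow under invented names; Meta has not published such an API, so treat every call here as an assumption.

```python
class IdentityService:
    """Toy stand-in for a hypothetical platform identity/blocks API."""

    def __init__(self):
        self._blocks = set()   # pairs of (blocker, blocked)

    def add_block(self, blocker: str, blocked: str) -> None:
        self._blocks.add((blocker, blocked))

    def should_separate(self, user_a: str, user_b: str) -> bool:
        """True if either user has blocked the other on the platform."""
        return (user_a, user_b) in self._blocks or (user_b, user_a) in self._blocks


def filter_visible_players(service: IdentityService,
                           viewer: str,
                           players: list) -> list:
    """Hide anyone the viewer has a block relationship with.

    This mirrors the 'not appear to exist to you' behavior: the blocked
    player is simply filtered out of the viewer's world state.
    """
    return [p for p in players if not service.should_separate(viewer, p)]
```

A game that integrated such a service "behind the scenes" would call `filter_visible_players` when building each player's view, so a block set in one app carries over to every app on the platform.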

As we move toward a metaverse world, what role will platform providers play in enforcing those rules? Right now, there seem to be two blueprints: game consoles, where companies have very strict safety requirements, and mobile platforms, where a company like Apple doesn't tell app developers how to do moderation. What will this look like for AR and VR devices in the future?

Our vision for the metaverse is very interoperable. We very much expect a large number of the social spaces that people occupy in the metaverse to be cross-platform. To have people in them who are on mobile devices, in VR headsets, on PCs or laptops and on consoles and more. So this is kind of my point: You have to give a lot of the responsibility to the person hosting the social space. Are they informing customers of what the policies are and what the risks are? And if they're informed, are consumers allowed to make that decision for themselves?

I don't want to be in a position where we're asserting control over what consumers are allowed to do in third-party applications, and what they're allowed to engage with.

How much does Meta's plan of getting a billion people to use the metaverse within the next decade depend on getting safety right from the get-go?

I think it's hugely important. If the mainstream consumer puts a headset on for the first time and ends up having a really bad experience, that's obviously deleterious to our goals of growing the entire ecosystem. I don't think this is the kind of thing that can wait.


Racism in VR by Protocol on Scribd
