How do you keep people safe in the metaverse? That's a question Meta, the company formerly known as Facebook, has been grappling with for some time. And the answer isn't all that simple.
The metaverse may be little more than a concept for now, but the safety problem is anything but theoretical: People regularly experience harassment in VR apps and experiences, including those running on Meta's Quest VR headset. Even the company's own employees are not immune. Earlier this year, an unnamed employee told co-workers in the company's internal Workplace forums that they had been accosted in Rec Room, with other players shouting the N-word at them without an obvious way to identify or stop the harasser. "Spoiler alert: I did not have a good time," the employee summarized.
The discussion, which became part of the public record when it was included in leaked Facebook documents supplied to Congress, shows that the problem is not isolated. One participant noted that similar cases were being brought up internally every few weeks; another said they had personally experienced harassment as well. "Multiple games have similar issues," one person wrote in the exchange.
Meta's head of consumer hardware and incoming CTO, Andrew Bosworth, told Protocol on Friday that the specific incident discussed in the leaked document could have been mitigated if the employee had made use of existing reporting tools. "The tenor of the post [is] overstated and misinformed," Bosworth said. However, he also acknowledged that the problem of harassment in VR is real. He laid out ways the company is aiming to solve it, while pointing to trade-offs between making VR spaces safe and not policing people's private conversations. "We have [to strike] a pretty tough balance between privacy and integrity," Bosworth said.
This interview has been edited and condensed for clarity.
Are your current reporting options enough to fight harassment in VR?
I think the tools that we have in place are a good start. Blocking in virtual spaces is a very powerful tool, much more powerful than it is in asynchronous spaces. We can have someone not appear to exist to you. In addition, we can do reporting. This is a little bit similar to how you think of reporting in WhatsApp. Locally, on your device, totally private and secure, [you] have a little rolling buffer of the activity that happened. And you can say, "I want to report it," [and] send it to the platform developer or to us.
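The mechanism Bosworth describes maps onto a familiar data structure: a fixed-size rolling buffer that continuously discards its oldest entries, so only the most recent window of activity ever exists on the device, and nothing leaves it unless the user files a report. Here is a minimal sketch of that idea in Python; Meta has not published implementation details, so all names, the window length and the recorded fields are illustrative, not the actual implementation:

```python
from collections import deque
import time

class RollingActivityBuffer:
    """Keeps only the most recent window of activity on-device.

    Older entries are evicted automatically, so nothing is retained
    or transmitted unless the user explicitly files a report.
    """

    def __init__(self, window_seconds: float = 120.0):
        self.window = window_seconds
        self.entries = deque()  # (timestamp, payload) pairs

    def record(self, payload: dict) -> None:
        now = time.monotonic()
        self.entries.append((now, payload))
        # Evict anything older than the rolling window.
        while self.entries and now - self.entries[0][0] > self.window:
            self.entries.popleft()

    def snapshot_for_report(self) -> list[dict]:
        # Called only when the user chooses to report an incident;
        # until then the buffer never leaves the device.
        return [payload for _, payload in self.entries]

# Hypothetical usage: the app records audio chunks plus metadata
# (such as which users were nearby), and only a user-initiated
# report exports the current buffer to the developer or platform.
buffer = RollingActivityBuffer(window_seconds=120.0)
buffer.record({"type": "audio_chunk", "data": b"...", "nearby_users": ["user_a"]})
report = buffer.snapshot_for_report()
```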
That kind of continuous recording is something you are only testing in Horizon so far, right?
It's a first-party tool that we built. It's the kind of thing that we encourage developers to adopt, or even make it easier for them to adopt over time. And we feel good about what that represents from the standpoint of the privacy-integrity trade-off, because it's keeping the incidents private until somebody chooses of their own volition to say, "This is a situation that I want to raise visibility to."
But it's also just recording audio. How much does that have to do with the technical limitations of the Quest?
It's audio plus some metadata right now, [including which] users were in the area, for example. I don't think there is a technical limitation that prevents us from doing more. We're just trying to strike a trade-off between the privacy and the integrity challenges. That's going to be an area [where] we tread lightly, make sure [tools we roll out are] really well understood before we expand them.
You've been saying that you want to put privacy first when building new products for Meta. How does that conflict with building safe products?
Safety and privacy are highly related concepts and are both very high on our list of priorities. But, you know, even my friends say mean things to me sometimes. The path to infinite privacy is no product. The path to infinite safety is no social interaction. I don't think anyone's proposing we take these to their extremes.
The question is: What are healthy balances that give consumers control? And when you have privacy and safety trade-offs, that's super tough. The more [social VR spaces] are policed, the less privacy you're fundamentally able to ensure that people have. So it's case by case. There's not a one-size-fits-all solution on how to resolve those priorities when they compete.
You are also dealing with a space that's still very new, with a lot of VR games coming from relatively small companies. How can you help those developers fight harassment?
We want to build tools that developers can use, at the very least on our platforms. Identity is a strong example. If developers integrate our identity systems, even behind the scenes, they can inherit things like blocks, which signal that two people don't want to be exposed to one another. There are tools that we can build — APIs, SDKs — that developers will be able to integrate. That's going to take time for us to build, but that's the direction we want to go in. Some of them we could potentially require for our own platform, some we would offer for those who choose to use [them].
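The "inherited blocks" idea amounts to a shared identity layer that an app can query before placing two users in the same space. A hypothetical sketch of what that might look like from a developer's side (the API and function names here are invented for illustration; Meta has not published such an interface):

```python
class StubIdentityService:
    """Stand-in for a platform identity API; a real app would call an SDK."""

    def __init__(self, blocks: dict[str, set[str]]):
        self._blocks = blocks  # user_id -> set of user_ids they blocked

    def get_block_list(self, user_id: str) -> set[str]:
        return self._blocks.get(user_id, set())

def mutually_visible(identity: StubIdentityService, a: str, b: str) -> bool:
    """False if either user has blocked the other at the platform level."""
    return b not in identity.get_block_list(a) and a not in identity.get_block_list(b)

def filter_lobby(identity: StubIdentityService, joiner: str, lobby: list[str]) -> list[str]:
    # Drop anyone with a block in either direction, so the two users
    # are never matched into the same instance.
    return [u for u in lobby if mutually_visible(identity, joiner, u)]

# Hypothetical usage: a block made anywhere on the platform carries
# over into this game's matchmaking without the developer rebuilding
# their own block system.
identity = StubIdentityService({"user_a": {"user_b"}})
print(filter_lobby(identity, "user_a", ["user_b", "user_c"]))  # ['user_c']
```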
As we move toward a metaverse world, what role will platform providers play in enforcing those rules? Right now, there seem to be two blueprints: game consoles, where companies have very strict safety requirements, and mobile platforms, where a company like Apple doesn't tell app developers how to do moderation. What will this look like for AR and VR devices in the future?
Our vision for the metaverse is very interoperable. We very much expect a large number of the social spaces that people occupy in the metaverse to be cross-platform: to have people in them who are on mobile devices, in VR headsets, on PCs or laptops, on consoles and more. So this is kind of my point: You have to give a lot of the responsibility to the person hosting the social space. Are they informing customers of what the policies are and what the risks are? And if they're informed, are consumers allowed to make that decision for themselves?
I don't want to be in a position where we're asserting control over what consumers are allowed to do in third-party applications, and what they're allowed to engage with.
How much does Meta's plan of getting a billion people to use the metaverse within the next decade depend on getting safety right from the get-go?
I think it's hugely important. If the mainstream consumer puts a headset on for the first time and ends up having a really bad experience, that's obviously deleterious to our goals of growing the entire ecosystem. I don't think this is the kind of thing that can wait.