
Amazon’s head of Alexa Trust on how Big Tech should talk about data

Anne Toth, Amazon's director of Alexa Trust, explains what it takes to get people to feel comfortable using your product — and why that is work worth doing.


Anne Toth, Amazon's director of Alexa Trust, has been working on tech privacy for decades.

Photo: Amazon

Anne Toth has had a long career in the tech industry, thinking about privacy and security at companies like Yahoo, Google and Slack, working with the World Economic Forum and advising companies around Silicon Valley.

Last August she took on a new job as the director of Alexa Trust, leading a big team tackling a big question: How do you make people feel good using a product like Alexa, which is designed to be deeply ingrained in their lives? "Alexa in your home is probably the closest sort of consumer experience or manifestation of AI in your life," she said. That comes with data questions, privacy questions, ethical questions and lots more.

During CES week, when Toth was also on a popular panel about the future of privacy, she hopped on a Chime call to talk about her team, her career and what it takes to get users to trust Big Tech.

This interview has been edited and condensed for clarity.

How does the Trust team work? That's what it's called, right?

Yeah. I work on the Alexa Trust org; it's part of the Alexa organization overall. The team that I work on is responsible for building all the privacy features, all the privacy controls and settings, and thinking about privacy across Alexa, as well as accessibility. So the aging teams, the accessibility teams, Alexa for Everyone, all reside within this organization. And we're thinking about content issues and the whole gamut. So really, all of the policy dimensions and how they manifest in consumer-accessible controls and features is what this team thinks about.

Why is that one team? You just named a bunch of different, equally important things. What is the tie that binds all of those things?

Well … I think it's trust, right? It's, how do we develop trustworthy experiences? We are very much a horizontal organization that works throughout the Alexa org.

And the Alexa org actually is much larger than I even anticipated. When I was first interviewing with the organization, I was really surprised at how big it is and how quickly it's grown. So it's truly a horizontal effort to think about the customer experience, and all of these areas where there are potential trust issues, and try to deal with them very proactively.

That's a much bigger definition of trust than I would have guessed. I feel like we talk about trust a lot as sort of synonymous with privacy, and so "what do you do with my data" is the core question. But then when you put things like accessibility and ethics in there, it broadens the definition of what you're looking at in this really interesting way.

Yeah, it's a very expansive view. I have worked on privacy for most of my career. It often presents as a defensive issue for companies, right? And even the word "privacy" brings up a sort of connotation that makes you think about all the things you don't want people to know.

But I think of it really as an opportunity to innovate and to try to create more positive experiences, rather than to think of it as a defensive posture. How are we enabling the usage of data to help create better experiences for customers? Because that's really, ultimately what customers want: for you to use data to make this better for me. And I'm totally good with that. The concern is when I'm not sure what I'm getting out of you using my data, and you have it, and why do you have it? That's the area that's problematic. And what I see, and what we're trying to do, is to be very transparent, and to demonstrate time and again how your data is actually benefiting you in this product experience.

That's actually one of the things I wanted to talk about. You said in another interview that so much of privacy and security is basically just, like, don't screw up. There's no positive experience; it's just there until you ruin it. It's interesting to think about it the other way: to say, "What does it look like to be more proactive about privacy and data security?" What does that look like for you, in terms of how to actually put it in front of people in a way that feels useful, instead of just having pop-ups that say, "Don't worry, we're not using your data for bad things"?

Designing for privacy, or designing for trust more specifically, is about baby steps. It's like developing a relationship with a person, right? You have to earn the trust. And you have to do things in the beginning that over time become less important. Take the wake word: We rely very heavily on the wake word. You have to invoke Alexa. But the use of the wake word, and the training around it, is meant to make people comfortable with the fact that we are only streaming your requests to the cloud once the wake word has been invoked.

That is about introducing some conscious friction, to create a trusted experience so that later, when we have more advanced features, more advanced conversational-type elements, you'll be in a place where you're comfortable with that experience. It moved you along that learning curve, and got you to that place where you trust us to do that for you effectively.

I think it was on Twitter or on LinkedIn that I saw an article that had gone viral. A security organization did a teardown of an Echo device because there was a theory that the mic-off button was in fact just cosmetic. They mapped out the electronics to prove that if the red light is on, the wiring to the mic is in fact disabled. The red light and the mic cannot both be on at the same time. That was a design choice. There are a lot of choices that are about getting people comfortable with the device, and feeling that degree of trust, so that later down the road we can introduce more features that people will be more likely to use.

But it's telling even that those sort of conspiracy theories exist, right? People think the same thing about Facebook turning on their microphone. Does it feel like there is this perception hole that every big tech company is in right now? That you, as Alexa, have to go out of your way to convince people that you're doing the right thing, as opposed to even starting in a neutral place? It just feels like we're in this place where people are learning to be suspicious about things that they don't understand.

I'm kind of a hardened cynic. That's just my natural disposition on things. So yes, I think we are in a period of time right now where skepticism is at an all-time high. And I think deservedly so, in the world we're living in at the present moment.

But what I'm often heartened by is that people have put this device into their homes, into their most sacred private spaces, with their families and their loved ones. To do that is a big leap of faith and trust in Amazon and Alexa. So the mere fact that we're there is already a sign that people have extended to us the benefit of the doubt and have said, "we trust you."

So it's not even so much about having to earn that trust in the first place as it is having to be worthy of that trust, right? Or be worthy of that privilege of being in that space. That's the goal for me: to make sure that we continue to be worthy of the trust they've already placed in us, which is not a hurdle everyone gets over.

What about as you think about things like default settings versus giving people choice? You can give people all the options in the world, but we know for a fact that most people are never going to change anything. So I'm suspicious of the idea that that's a solution to these problems, but it's definitely part of the solution. How do you think about making good decisions for people versus letting people make decisions?

First of all, no two people have the exact same notions of privacy. It's different generations, different cultures, different backgrounds, different experiences, all driving different expectations. No matter where you set a default, it's not going to be right for everybody. So there has to be the ability to change it. And you have to make that easy to find.

So in the Alexa context, we use voice-forward commands to make it as easy as possible to say, "why did you do that," or "delete what I just said," or "delete everything I have ever said." Those kinds of interactions help reduce the friction in privacy; they make it easier for people to exercise those options.

Your default settings generally represent your organizational bias, in one way or another. And in this case, the default settings that we have reflect our ability to use data in a way that's going to improve the product and make it better for the customer. So that's where they are, and that's how they've been determined. But they're not immovable. And that's the most important part.

Well, that education piece seems hard, though. We've seen Facebook, for instance, try to explain why it collects a lot of data, and it doesn't necessarily track for a lot of people. There was this big dustup with WhatsApp; people lost their minds. Is it harder than you're making it sound to help people understand what you're doing with their data?

In some ways, I think that this product gives you a more immediate example of that data benefit than other products. I mean, I spent a lot of my career talking about the benefits of targeted advertising, and how if you're going to get an ad, better to get a targeted ad than one that's irrelevant. But the relative benefit to you as a customer, as an individual, for that use of your data doesn't really feel as meaningful as the types of experiences or improvements we're able to make.

And there's lots of data, particularly looking at introducing Alexa into new countries and languages and dialects, where the ability to use that voice data to dramatically improve our responses and our accuracy is something that is noticeable to people over time. I think people would recognize that trajectory.

That's fair. And it does seem like most people, when you explain it to them, will understand pieces of it like that.

This is why I love "Alexa, why did you do that?" Because Alexa doesn't always get it right the first time, and to be able to actually ask and get a response about what that reasoning was, and you can see it in real time — you can't do that with a lot of other experiences. That's a cool one that I hope more people use.

But we are faced with some real regulatory challenges around explainable AI. These technologies are getting more and more sophisticated, and when they work really well, sometimes we're delighted, and sometimes we're creeped out. It's that balancing act of, "Wait a minute, that was really useful … should I be worried?" Which is why trust is so important to develop, so that when you get to that moment, you can offer a customer benefit without it being intrusive, or invasive, or feeling somehow uncomfortable. Devices should learn!

You're a privacy person, so I have a current events question for you: We're in the middle right now of this privacy versus transparency debate, where either it's better to let people use encrypted services because they can't be watched, or it becomes a problem because bad people can do bad stuff in those encrypted services and nobody can find them. And obviously, this has shown up in lots of scary ways recently. Where do you fall in that debate?

I will have to speak to this on a personal level, but all the messaging apps I use for primary messaging are end-to-end encrypted. I think it's important, and I think that there's a role they play that is important. There are lots of people who think that people aren't really concerned about privacy anymore, that we've passed the moment where privacy is an issue. Just based on the number of people that I've seen crop up on Signal and Telegram in the last week, I can tell you that people really are paying attention. So if that's the indicator we should be looking at, then I would say privacy is not dead. People really do care. And it's something everybody should be paying attention to.
