
Amazon Halo’s principal medical officer talks privacy, shortcomings with competitors, and why it’s now or never for the device

The company's new health-monitoring device has received plenty of criticism since it was announced. Dr. Maulik Majmudar hopes science can win over the skeptics.


Halo, Amazon's entrance into health and wellness wearables, is ramping up to general availability.

Image: Amazon

Amazon Halo's Principal Medical Officer Dr. Maulik Majmudar says he hears your concerns about the company's newly unveiled Halo device — but not in a creepy way.

Halo, Amazon's entrance into the health and wellness wearables space, launched on Aug. 27 to the company's usual fanfare. But it immediately faced skepticism, centered primarily on the product's new Tone feature, which provides insights about how users sound to other people throughout the day by analyzing audio samples from their conversations.

Currently, Halo is only available through an early-access program. But as the company ramps up to general availability, Majmudar says his focus has been to "add clinical credibility to the features within Halo, making sure that the product experience is rooted in science and then validating all those features to make sure we can earn and maintain customer trust."

In an interview with Protocol, Majmudar described how Amazon has tried to prioritize privacy and avoid algorithmic bias in Halo's design, explained why the device needs to be a success from the get-go, and raised doubts about sophisticated cardiology features in competing devices.

This interview has been edited and condensed for clarity.

Wearables is a space that's getting crowded. Why did you choose now to launch the Halo?

Whenever we think about entering a new category at Amazon, there are two key questions we always ask ourselves. First and foremost: Does this matter to customers? Is there an unmet customer need here? And the second is: Can we as Amazon meaningfully differentiate and add innovation and value to the customer experience in the space?

In the case of health and wellness, it was very clear to us that there's an obvious customer need. If you look at the proliferation of digital health devices and services in the last decade or so, unfortunately there hasn't been an equally meaningful change in the health of the population. One data point that I always refer to is the prevalence of obesity. The latest data from the CDC from February 2020 suggested that the prevalence of obesity in the U.S. is up to 40% now, and it's been going up steadily for over a decade now.

The second part is the more critical one: Why Amazon, and why now? We really believe that we could leverage our expertise and experience in artificial intelligence, including machine learning and computer vision, to invent new metrics and new features for our customers to give them a much more holistic view of health than has traditionally been offered. The traditional focus has always been around physical health: activity, nutrition, sleep. Can we go beyond that to the social and emotional aspects of health? There are lots of products and services focusing on information, but very few of them actually get to the next step of insights and then actions. And one of the core components of Halo is giving customers tools to actually help them improve their own health and wellness.

Is the distribution of the device intended to be consumer-led, or are you exploring other channels like hospitals and clinics to have this be used as a medical tool in those environments?

Given [this is] Amazon … I think it makes a lot of sense to go direct-to-consumer. The purpose of the product and the features is to improve individual health and wellness. I strongly believe that the product's design and the features have incredible value for our enterprise customers, whether it be employers who are prioritizing employee health and wellness, whether it be hospitals or insurance companies [that] want to optimize their members' health and wellness. I believe that the features and the product and the service really tie to that, and would help in all those different corporate channels.

But I want to be clear on one point: Halo is not a medical device. It has a lot of uses in those environments, but at the end of the day, the device is not a medical device. Even these really valid, accurate features like Body and Tone are incredibly helpful for improving customers' lives, but they're not diagnostic.

There's incredible value in those enterprise customers; we will just see as we launch and go to [general availability] how we can have dialogues with those types of customers to make sure we can deploy Halo within those enterprises.

Analyzing how you sound to others with the Tone feature has been a focus of attention so far. From a medical perspective, do you see the Tone data having medical benefits beyond being an interesting self-evaluation?

Absolutely. I strongly believe that your tone of voice and how that tone of voice is perceived by others have a lot of implications for two specific things: social relationships and your interpersonal communication. There's a lot of really solid science on the consequences of poor social interactions and poor social relationships on not just emotional mental health, [but] actually physical health. There's a really fantastic meta-analysis by Julianne Holt-Lunstad out of Brigham Young University that looked at 25 years of research and showed that poor social relationships actually lead to premature death and chronic diseases.

The question is, how do you evolve over time from taking the Tone feature as a perception of tone of voice into getting more clinical benefit in those environments? And I think we're going to continue to do that over time.

Dr. Maulik Majmudar joined Amazon in 2018 following his tenure as the associate director of Massachusetts General Hospital's Healthcare Transformation Lab.

Photo: Amazon

With the understanding that Tone is evolving, can you walk me through the data that was used to train the insights?

Tone leverages machine learning and was trained on thousands of hours of beta trial data. Between the band — where there are microphones and sensors — and the app, we use machine learning models: audio algorithms that remove the background noise, process the speech, and then analyze the acoustic characteristics of that tone of voice. So we are analyzing the tempo, intensity, pitch and rhythm to then help customers understand how they sound to others. You can imagine that the whole chain of data is incredibly sensitive, and the type of information is sensitive, so there are a number of privacy controls and specific machine learning models we built to do this. First and foremost, the audio data never goes to the cloud. It is all processed locally on the phone and automatically deleted. So no one, including me, can hear the actual audio ever.

The output itself is [made up of] three things. We output something called positivity and [another called] energy. So positivity is how happy or sad you sound, and then energy is how excited or tired you sound. The third thing is we actually provide descriptors or words that describe how you sound to others. It can say "Maulik sounded confident," or compassionate or irritated. And we do this throughout the day. There are a couple of modes for Tone, but the traditional mode is that it passively and intermittently collects samples of speech throughout the day, about 30 to 60 seconds of speech every 10 minutes or so.

You get this entire day's view of how you communicated and how you sound to others, and then we highlight notable moments. And the privacy controls are in place to make it super easy to use for customers and also to drive transparency and trust in that feature.
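To make the description above concrete, here is a minimal sketch of what a day's worth of Tone output could look like as a data structure. It draws only on what Majmudar describes (on-device processing, positivity and energy scores, descriptors, notable moments); the field names, value ranges and schema are hypothetical, not Amazon's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical shape of Tone output, inferred from the interview. Only
# derived scores are kept; the audio itself is processed on-device and
# deleted. Field names and value ranges are illustrative, not Amazon's schema.
@dataclass
class ToneSample:
    start_time: str        # e.g. "2020-09-14T10:05:00"
    duration_seconds: int  # roughly 30 to 60 seconds of speech per sample
    positivity: float      # how happy or sad the speaker sounds (-1.0 to 1.0)
    energy: float          # how excited or tired the speaker sounds (-1.0 to 1.0)
    descriptors: List[str] # e.g. ["confident"] or ["irritated"]

@dataclass
class ToneDaySummary:
    samples: List[ToneSample] = field(default_factory=list)          # about one every 10 minutes
    notable_moments: List[ToneSample] = field(default_factory=list)  # highlights surfaced to the customer

day = ToneDaySummary(
    samples=[
        ToneSample("2020-09-14T10:05:00", 45, 0.6, 0.4, ["confident"]),
        ToneSample("2020-09-14T10:15:00", 30, -0.2, -0.5, ["tired"]),
    ],
)
print(len(day.samples), "samples collected so far today")
```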

One concern, specifically among women I've spoken to, is the idea of tone policing in a workplace environment. Were there controls put in place to handle the differences in the way different people communicate?

It's a very relevant question in this day and age, where algorithmic bias is a hot topic, and there have been studies coming out in clinical literature about concerns around that. This was top of mind for our machine learning scientists and the product team, privacy team, everybody else. With the thousands of hours of collected data that we talked about, we really ensured that [when] we had a blinded panel of annotators annotate and label these samples of speech, we didn't introduce any bias as part of that development and validation pathway.

It was very important to us that we analyzed our own models to make sure that for gender diversity, age or ethnicity, there isn't any bias introduced by the algorithms to say, "Hey, women always sound mad or angry." That is obviously a very fair concern.

But as you know, with all machine learning models, there's always room for improvement over time. And the goal is to get this out of early access and hopefully to general availability, and then continue to learn over time and keep iterating. But at least as far as all the testing has been done, we made sure we weren't introducing bias.

Getting back to privacy and security, you mentioned the data is all local, but I want to focus on third-party security specifically. Halo has third-party integration opportunities; how did privacy factor into that development?

We announced a few third-party partnerships through what's called account linking. The fundamental principle behind all of this was [twofold]. One was [that] privacy was foundational. Two was customer transparency. We wanted to provide customers full control of not just who they link accounts with, but also what data is actually being shared. If you go to the settings page of the app today, [we have] the opt-in feature; you have to link the accounts voluntarily. We surface to customers specifically which individual data points will be shared, and we don't share all of them by design. For each partner that we announced, we wanted to figure out what metrics are necessary for them to deliver the right customer experience, and we only share those data points.

[For] Cerner, it was only body fat percentage, nothing else. For WW, it was [the] activity score. We wanted to be very specific and deliberate about what we share and why we're sharing it and explain that to our customers. Beyond that, customers can request to delete the data at any point in time. So you sign up voluntarily, it's an opt-in feature, you know exactly what you're sharing at the time you sign up, you read the policies and procedures and accept them, and then you can request to delete the data later on. And, of course, [you can] delete the account link as well.
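As a rough illustration of that scoped, opt-in sharing model, the sketch below expresses per-partner data access as an explicit allowlist. Only the partner names and the metrics they receive come from the interview; the structure, function and metric identifiers are hypothetical, not Amazon's implementation.

```python
# Illustrative sketch of per-partner, opt-in data scopes. Only the partners
# and the metrics they receive come from the interview (Cerner: body fat
# percentage, WW: activity score); everything else here is hypothetical.
PARTNER_SCOPES = {
    "cerner": {"body_fat_percentage"},
    "ww": {"activity_score"},
}

def shareable_metrics(partner: str, user_opted_in: bool) -> set:
    """Return only the metrics a linked partner is allowed to receive."""
    if not user_opted_in:
        return set()  # nothing is shared without an explicit opt-in
    return PARTNER_SCOPES.get(partner, set())

print(shareable_metrics("cerner", user_opted_in=True))   # {'body_fat_percentage'}
print(shareable_metrics("ww", user_opted_in=False))      # set()
```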

Unveiled at the Apple event a few weeks ago, the new Apple Watch has wellness-focused features: increased ECG functionality, a blood oximeter. Was there any feature envy on your part? And should we expect a future iteration of Halo to have something similar?

As a cardiologist, you're always dreaming up metrics, and you're like, I wish we could measure this all the time. You're trying to innovate about what could be interesting. The challenge is, when you get out of the clinical environment and apply the same metrics that doctors use in the hospital setting to the general population, you open up a Pandora's box. This idea — to get nerdy for a second here — of disease prevalence and the false positive rate is playing out with COVID, so people are somewhat familiar with it now, but it's what happens in the medical world all the time.

Just [last week], a paper came out from the Mayo Clinic that looked at Apple's AFib feature that was FDA-cleared a few years ago and the amount of excessive health care testing and costs that it resulted in. You have a disease where prevalence is 1% or 2% at best in the general population. Even a product that has 99% accuracy is going to have thousands and thousands of false positives. So any time we think about a feature, we want to be very mindful about the value to customers balanced with the anxiety generation of having false positives.
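To make that arithmetic concrete, here is a back-of-the-envelope sketch. The prevalence and accuracy figures are illustrative round numbers drawn from Majmudar's framing, not the measured performance of Apple's feature or any other device.

```python
# Back-of-the-envelope screening math: illustrative numbers only,
# not the measured performance of any particular device.
population = 1_000_000   # hypothetical user base
prevalence = 0.01        # ~1% of users actually have the condition
sensitivity = 0.99       # fraction of true cases the feature flags
specificity = 0.99       # fraction of healthy users it correctly ignores

with_condition = population * prevalence
without_condition = population - with_condition

true_positives = with_condition * sensitivity             # 9,900
false_positives = without_condition * (1 - specificity)   # 9,900

ppv = true_positives / (true_positives + false_positives)
print(f"False positives: {false_positives:,.0f}")   # thousands of false alarms
print(f"Positive predictive value: {ppv:.0%}")      # only ~50% of alerts are real
```

Even with 99% sensitivity and specificity, roughly half of the alerts in this scenario are false alarms, which is the anxiety and downstream-testing problem he describes.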

Of course you want to deliver more valuable features to customers over time, but we want to be very mindful of what those features are, what the level of accuracy is, how clinicians and the health care system would digest that data and what the downstream actions will be. Because if [we're] just saying, "Hey, here's the feature, go use it," and then we don't really care what happens next, that's not great.

In the next six to 12 months, as usage and engagement data comes in, what are you hoping to learn about Halo?

The No. 1 thing we're all looking forward to is [learning] what aspects of the product and the service customers really engage with and doubling down there. The other one is, what are the features they're looking for that we may have overlooked or incorrectly assumed customers are not interested in? Once we get customer engagement and the use of the feature and the service on a regular basis, it will give us the opportunity to build more features and more metrics and deliver [them]. But if [we] lose the customer now, I think we don't have a chance.

We decided to launch as a membership service. One of the critical reasons we did that is that it actually makes us accountable to deliver value. Traditionally, you buy a device, you pay for it, and it's all done. But if [we] don't deliver value — as Amazon, as Halo — customers will stop purchasing the memberships. I think that puts an onus on us to make sure we continue to add value.
