
Amazon Halo’s principal medical officer talks privacy, shortcomings with competitors, and why it’s now or never for the device

The company's new health-monitoring device has received plenty of criticism since it was announced. Dr. Maulik Majmudar hopes science can win over the skeptics.


Halo, Amazon's entrance into health and wellness wearables, is ramping up to general availability.

Image: Amazon

Amazon Halo's Principal Medical Officer Dr. Maulik Majmudar says he hears your concerns about the company's newly unveiled Halo device — but not in a creepy way.

Halo, Amazon's entrance into the health and wellness wearables space, launched on Aug. 27 to the company's usual fanfare. But it immediately faced skepticism, centered primarily around the product's new Tone feature, which provides insights about how users sound to other people throughout the day by analyzing audio samples from their conversations.

Currently, Halo is only available through an early-access program. But as the company ramps up to general availability, Majmudar says his focus has been to "add clinical credibility to the features within Halo, making sure that the product experience is rooted in science and then validating all those features to make sure we can earn and maintain customer trust."

In an interview with Protocol, Majmudar described how Amazon has tried to prioritize privacy and avoid algorithmic bias in Halo's design, explained why the device needs to be a success from the get-go, and raised doubts about sophisticated cardiology features in competing devices.

This interview has been edited and condensed for clarity.

Wearables is a space that's getting crowded. Why did you choose now to launch the Halo?

Whenever we think about entering a new category at Amazon, there are two key questions we always ask ourselves. First and foremost: Does this matter to customers? Is there an unmet customer need here? And the second is: Can we as Amazon meaningfully differentiate and add innovation and value to the customer experience in the space?

In the case of health and wellness, it was very clear to us that there's an obvious customer need. If you look at the proliferation of digital health devices and services in the last decade or so, unfortunately there hasn't been an equally meaningful change in the health of the population. One data point that I always refer to is the prevalence of obesity. The latest data from the CDC from February 2020 suggested that the prevalence of obesity in the U.S. is up to 40% now, and it's been going up steadily for over a decade now.

The second part is my more critical one: Why Amazon, and why now? We really believe that we could leverage our expertise and experience in artificial intelligence, including machine learning and computer vision, to invent new metrics and new features for our customers to give them a much more holistic view of health than traditionally is done. The traditional focus has always been around physical health: activity, nutrition, sleep. Can we go beyond that to the social and emotional aspects of health? There are lots of products and services focusing on information, but very few of them actually get to the next step of insights and then actions. And one of the core components of Halo is giving customers tools to actually help them improve their own health and wellness.

Is the distribution of the device intended to be consumer-led, or are you exploring other channels like hospitals and clinics to have this be used as a medical tool in those environments?

Given [this is] Amazon … I think it makes a lot of sense to go direct-to-consumer. The purpose of the product and the features is to improve individual health and wellness. I strongly believe that the product's design and features have incredible value for our enterprise customers, whether it be employers who are prioritizing employee health and wellness, or hospitals and insurance companies [that] want to optimize their members' health and wellness. I believe the features, the product and the service really tie into that, and it would help in all those different corporate channels.

But I want to be clear on one point: Halo is not a medical device. It has a lot of uses in those environments, but at the end of the day, the device is not a medical device. Even these really valid, accurate features like Body and Tone are incredibly helpful for improving customers' lives, but they're not diagnostic.

There's incredible value in those enterprise customers; we will just see as we launch and go to [general availability] how we can have dialogues with those types of customers to make sure we can deploy Halo within those enterprises.

Analyzing how you sound to others with the Tone feature has been a focus of attention so far. From a medical perspective, do you see the Tone data having medical benefits beyond being an interesting self-evaluation?

Absolutely. I strongly believe that your tone of voice and the perception of that tone of voice to others has a lot of implications on two specific things: social relationships and your interpersonal communication. There's a lot of really solid science on the consequences of poor social interactions and poor social relationships on not just emotional mental health, [but] actually physical health. There's a really fantastic meta-analysis by Julianne Holt-Lunstad out of Brigham Young University that looked at 25 years of research and showed that poor social relationships actually lead to premature death and chronic diseases.

The question is, how do you evolve over time from taking the Tone feature as a perception of tone of voice into getting more clinical benefit in those environments? And I think we're going to continue to do that over time.

Dr. Maulik Majmudar joined Amazon in 2018 following his tenure as the associate director of Massachusetts General Hospital's Healthcare Transformation Lab. Photo: Amazon

With the understanding that Tone is evolving, can you walk me through the data that was used to train the insights?

Tone leverages machine learning and was trained on thousands of hours of beta-trial data. Between the band — where there are microphones and sensors — and the app, we use machine learning models: audio algorithms that remove the background noise, process the speech, and then analyze the acoustic characteristics of that tone of voice. So we are analyzing the tempo, intensity, pitch and rhythm to help customers understand how they sound to others. You can imagine that the whole chain of data is incredibly sensitive, and the type of information is sensitive, so there are a number of privacy controls and specific machine learning models we built to do this. First and foremost, the audio data never goes to the cloud. It is all processed locally on the phone and automatically deleted. So no one, including me, can ever hear the actual audio.

The output itself is [made up of] three things. We output something called positivity and [another called] energy. So positivity is how happy or sad you sound, and then energy is how excited or tired you sound. The third thing is we actually provide descriptors or words that describe how you sound to others. It can say "Maulik sounded confident," or compassionate or irritated. And we do this throughout the day. There are a couple of modes for tone, but the traditional mode is that it passively and intermittently collects samples of speech throughout the day, about 30 to 60 seconds of speech every 10 minutes or so.

You get this entire day's view of how you communicated and how you sound to others, and then we highlight notable moments. And the privacy controls are in place to make it super easy to use for customers and also to drive transparency and trust in that feature.
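As a rough sketch, the per-sample output Majmudar describes — two continuous scores plus a set of descriptor words — could be modeled like this. The class and field names here are illustrative assumptions, not Amazon's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ToneSample:
    """Acoustic characteristics extracted from one 30- to 60-second speech sample."""
    tempo: float
    intensity: float
    pitch: float
    rhythm: float

@dataclass
class ToneResult:
    """What the feature reports back to the customer for one sample."""
    positivity: float  # how happy or sad the speaker sounds
    energy: float      # how excited or tired the speaker sounds
    descriptors: list[str] = field(default_factory=list)  # e.g. ["confident"]

result = ToneResult(positivity=0.4, energy=-0.1, descriptors=["confident"])
```

The key design point is that only derived values like these would ever leave the phone; the raw audio that produced them is deleted after local processing.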

One concern, specifically among women I've spoken to, is the idea of tone policing in a workplace environment. Were there controls put in effect to handle the differences in the way different people communicate?

It's a very relevant question in this day and age, where algorithmic bias is a hot topic, and there have been studies coming out in clinical literature about concerns around that. This was top of mind for our machine learning scientists and the product team, privacy team, everybody else. When we look at the thousands of hours of data we talked about, we really ensured that [when] we had a blinded panel of annotators annotate and label those samples of speech, we didn't introduce any bias as part of that development and validation pathway.

It was very important to us that we analyzed our own models to make sure that across gender, age or ethnicity, there isn't any bias introduced by the algorithms to say, "Hey, women always sound mad or angry." That is obviously a very fair concern.

But as you know, with all machine learning models, there's always room for improvement over time. And the goal is to get this out of early access and hopefully to general availability, and then continue to learn over time and keep iterating. But at least as far as all the testing has been done, we made sure we weren't introducing bias.

Getting back to privacy and security, you mentioned the data is all local, but I want to focus on third-party security specifically. Halo has third-party integration opportunities; how did privacy factor into that development?

We announced a few third-party partnerships through what's called account linking. The fundamental principle behind all of this was [twofold]. One was [that] privacy was foundational. Two was customer transparency. We wanted to provide customers full control of not just who they link accounts with, but also what data is actually being shared. If you go to the settings page of the app today, [we have] the opt-in feature; you have to link the accounts voluntarily. We surface to customers specifically which individual data points will be shared, and we don't share all of them by design. For each partner that we announced, we wanted to figure out what metrics were necessary for them to deliver the right customer experience, and we only share those data points.

[For] Cerner, it was only body fat percentage, nothing else. For WW, it was [the] activity score. We wanted to be very specific and deliberate about what we share and why we're sharing it and explain that to our customers. Beyond that, customers can request to delete the data at any point in time. So you sign up voluntarily, it's an opt-in feature, you know exactly what you're sharing at the time you sign up, you read the policies and procedures and accept them, and then you can request to delete the data later on. And, of course, [you can] delete the account link as well.

Unveiled at the Apple event a few weeks ago, the new Apple Watch has wellness-focused features: increased ECG functionality, a blood oximeter. Was there any feature envy on your part? And should we expect a future iteration of Halo to have something similar?

As a cardiologist, you're always dreaming up metrics, and you're like, I wish we could measure this all the time. You're trying to innovate about what could be interesting. The challenge is, when you get out of the clinical environment and apply the same metrics that doctors use in the hospital setting to the general population, you open up a Pandora's box. This interplay — to get nerdy for a second here — between the prevalence of disease and the false positive rate is playing out with COVID, so people are somewhat familiar with it now, but it's what happens in the medical world all the time.

Just [last week], a paper came out from the Mayo Clinic that looked at Apple's AFib feature that was FDA-cleared a few years ago and the amount of excessive health care testing and costs that it resulted in. You have a disease where prevalence is 1% or 2% at best in the general population. Even a product that has 99% accuracy is going to have thousands and thousands of false positives. So any time we think about a feature, we want to be very mindful about the value to customers balanced with the anxiety generation of having false positives.
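The prevalence-versus-accuracy tradeoff Majmudar describes follows directly from Bayes' rule. A minimal worked sketch, using his numbers (roughly 1% prevalence and "99% accuracy," taken here as both sensitivity and specificity):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a positive test result is a true positive (Bayes' rule)."""
    true_positive_rate = prevalence * sensitivity
    false_positive_rate = (1 - prevalence) * (1 - specificity)
    return true_positive_rate / (true_positive_rate + false_positive_rate)

# A screening feature that is 99% sensitive and 99% specific,
# applied to a condition with 1% prevalence in the general population:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.99, specificity=0.99)
print(f"{ppv:.0%}")  # prints 50%: half of all positive alerts are false alarms
```

In a population of 100,000 users that works out to roughly 990 true positives and 990 false positives; scaled to millions of device owners, the false alarms number in the thousands, which is the anxiety and excess-testing dynamic he points to.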

Of course you want to deliver more valuable features to customers over time, but we want to be very mindful of what those features are, what the level of accuracy is, how clinicians and the health care system would digest that data, and what the downstream actions will be. Because if [we're] just saying, "Hey, here's the feature, go use it," and then we don't really care what happens next, that's not great.

In the next six to 12 months, as usage and engagement data comes in, what are you hoping to learn about Halo?

The No. 1 thing we're all looking forward to is [learning] what aspects of the product and the service customers really engage with and doubling down there. The other one is, what are the features they're looking for that we may have overlooked or incorrectly assumed customers are not interested in? Once we get customer engagement and the use of the feature and the service on a regular basis, it will give us the opportunity to build more features and more metrics and deliver [them]. But if [we] lose the customer now, I think we don't have a chance.

We decided to launch as a membership service. One of the critical reasons we did that is that it actually makes us accountable for delivering value. Traditionally, you buy a device, you pay for it, and it's all done. But if [we] don't deliver value — as Amazon, as Halo — customers will stop purchasing the memberships. I think the onus is on us to make sure we continue to add value.
