People

The risk for Amazon of relying on AI to decide someone's mood

Amazon's new wearable wants to check users' body fat and "tone" with AI, which historically struggles with women and people of color.

Amazon's new Halo app and wearable.

Photo: Amazon

On the surface, Amazon's new Halo wearable seems like a pretty straightforward offering, on par with the Apple Watch and Google's Fitbit. The water-resistant band tracks your activity and sleep quality, but the companion app also has a few features that go well beyond what competitors offer. Unlike the next Apple Watch or the newest Fitbit, those features could have potentially damaging ramifications if rolled out improperly.

The wearable will track what it calls your "tone," listening throughout the day to how you sound when you speak and offering suggestions on how to improve it. "For example, tone results may reveal that a difficult work call leads to less positivity in communication with a customer's family, an indication of the impact of stress on emotional well-being," the company said in a release.

A tweet from Ali Alkhatib, a research fellow at the Center for Applied Data Ethics, got me thinking about Amazon's history with automated systems. Earlier this year, Amazon announced a one-year pause on allowing police to use its facial-recognition software. That same software once incorrectly matched 28 members of Congress with mugshots, and the ACLU found that "the false matches were disproportionately of people of color." Amazon's Ring video doorbells have reportedly been accessed by the company's developers in the past, and Ring has partnered with over 600 police forces across the U.S., making a concerted effort to woo departments into promoting the cameras to their local communities.

Are we on the precipice of people being convicted of crimes on the basis of AI data showing they had an aggressive tone? Or, worse, of police using deadly force in the heat of the moment when confronting yet another young Black man?

Let's step back and look at the new types of data that Amazon's Halo is collecting. The first measurement looks at body fat: The app uses your phone's depth-sensor cameras and machine-learning algorithms to build a 3D model of your body, which is uploaded to the cloud, processed (and then deleted, Amazon says), and used to determine what percentage of your body is fat. Amazon says it's as accurate as any measurement a doctor could make. The app can even show what your body might look like if you gain or lose weight.

The second, and clearly the riskier of the two, is a measurement of the wearer's "tone." The wearable, much like an Amazon Echo, apparently listens to what you say and how you say it over the course of a day. Amazon says its "AI analyzes qualities of the customer's voice such as pitch, intensity, tempo, and rhythm to predict how others would perceive and describe the customer's tone of voice." It then categorizes those tones with labels like "happy," "confused," or "worried," and gives you suggestions on how to improve your interactions. It's like an overbearing parent telling their child they don't like their tone. The Halo cannot, as of yet, put you in time-out.
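Amazon hasn't detailed how the tone model works under the hood, but the general recipe for this kind of system is well understood: extract acoustic statistics from short stretches of speech, then feed them to a classifier trained on labeled examples. The Python sketch below is a hypothetical illustration of that recipe, not Amazon's implementation; the feature set, the synthetic stand-in audio, and the toy labels are all assumptions made for the example.

```python
# Hypothetical sketch of a tone classifier: NOT Amazon's implementation.
# It extracts rough pitch/intensity/rhythm statistics (the qualities Amazon
# names) and trains a toy classifier on made-up labels.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def extract_tone_features(y: np.ndarray, sr: int) -> np.ndarray:
    """Summarize a speech clip as pitch, intensity, and rhythm statistics."""
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)     # per-frame fundamental frequency
    rms = librosa.feature.rms(y=y)[0]                 # per-frame loudness
    onsets = librosa.onset.onset_detect(y=y, sr=sr)   # crude proxy for speech rhythm
    duration = len(y) / sr
    return np.array([
        np.nanmean(f0), np.nanstd(f0),                # pitch level and variability
        rms.mean(), rms.std(),                        # intensity level and variability
        len(onsets) / max(duration, 1e-6),            # onset rate, a stand-in for tempo
    ])

# Made-up training data: random noise in place of speech, random "tone" labels.
# A real system would need a large, demographically diverse labeled corpus,
# which is exactly where bias can creep in.
sr = 16_000
rng = np.random.default_rng(0)
clips = [0.1 * rng.standard_normal(sr * 2) for _ in range(20)]
labels = rng.choice(["happy", "confused", "worried"], size=20)
X = np.stack([extract_tone_features(clip, sr) for clip in clips])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))  # predicted tone labels for the first three clips
```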

These are complex systems that need diverse training data to pull off anything like this with reasonable accuracy. If Amazon relied on data built only from male bodies for the body-fat algorithm, or only from white voices for the tone system, the software likely would not work as well for everyone else. Bias creeps into AI when datasets, and the researchers who unwittingly rely on them, lack a holistic, diverse representation of the general population. An Amazon spokesperson told Protocol that both features were "trained on a large amount of data across demographics," from both internal and publicly available sources.

Women and people of color have historically borne the brunt of discriminatory AI systems, and Amazon's own AI research has struggled with this very problem in the past. It's not much of a stretch to imagine a system trained on white women's voices labeling a Black man's voice as more aggressive than it actually was.
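One way to check for that kind of skew, at least in principle, is to audit a model's error rate by demographic group and compare. The snippet below is a hedged illustration with invented numbers and group labels, not Amazon's data; it only shows the shape of the audit.

```python
# Hypothetical fairness audit: compare per-group error rates for a tone model.
# The groups, counts, and error numbers below are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":  ["white women", "white men", "Black women", "Black men"],
    "n":      [500, 500, 500, 500],    # clips evaluated per group
    "errors": [40, 45, 90, 110],       # misclassified "tone" labels per group
})
results["error_rate"] = results["errors"] / results["n"]
results["disparity_vs_best"] = results["error_rate"] / results["error_rate"].min()
print(results)  # a lopsided disparity column suggests the training data wasn't representative
```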

Climate

This carbon capture startup wants to clean up the worst polluters

The founder and CEO of point-source carbon capture company Carbon Clean discusses what the startup has learned, the future of carbon capture technology, and the role of companies like his in battling the climate crisis.

Carbon Clean CEO Aniruddha Sharma told Protocol that fossil fuels are necessary, at least in the near term, to lift the living standards of those who don’t have access to cars and electricity.

Photo: Carbon Clean

Carbon capture and storage has taken on increasing importance as companies with stubborn emissions look for new ways to meet their net zero goals. For hard-to-abate industries like cement and steel production, it’s one of the few options that exist to help them get there.

Yet it’s proven incredibly challenging to scale the technology, which captures carbon pollution at the source. U.K.-based company Carbon Clean is leading the charge to bring down costs. This year, it raised a $150 million series C round, which the startup said is the largest-ever funding round for a point-source carbon capture company.

Michelle Ma

Michelle Ma (@himichellema) is a reporter at Protocol covering climate. Previously, she was a news editor of live journalism and special coverage for The Wall Street Journal. Prior to that, she worked as a staff writer at Wirecutter. She can be reached at mma@protocol.com.

Sponsored Content

Great products are built on strong patents

Experts say robust intellectual property protection is essential to ensure the long-term R&D required to innovate and maintain America's technology leadership.

Every great tech product you rely on each day, from the smartphone in your pocket to your music streaming service and the navigation system in your car, shares one important thing: part of its innovative design is protected by intellectual property (IP) laws.

From 5G to artificial intelligence, IP protection offers a powerful incentive for researchers to create groundbreaking products, and government leaders say it is an essential part of maintaining U.S. technology leadership. To quote Secretary of Commerce Gina Raimondo: "Intellectual property protection is vital for American innovation and entrepreneurship."

James Daly
James Daly has a deep knowledge of creating brand voice identity, including understanding various audiences and targeting messaging accordingly. He enjoys commissioning, editing, writing, and business development, particularly in launching new ventures and building passionate audiences. Daly has led teams large and small to multiple awards and quantifiable success through a strategy built on teamwork, passion, fact-checking, intelligence, analytics, and audience growth while meeting budget goals and production deadlines in fast-paced environments. Daly is the Editorial Director of 2030 Media and a contributor at Wired.
Workplace

Why companies cut staff after raising millions

Are tech firms blowing millions in funding just weeks after getting it? Experts say it's more complicated than that.

Bolt, Trade Republic, HomeLight, and Stord all drew attention from funding announcements that happened just weeks or days before layoffs.

Photo: Pulp Photography/Getty Images

Fintech startup Bolt was one of the first tech companies to slash jobs, cutting 250 employees, or a third of its staff, in May. For some workers, the layoffs were a shock not only because Bolt's cuts were among the first, but also because they came just four months after Bolt had announced a $355 million series E funding round and achieved a peak valuation of $11 billion.

“Bolt employees were blindsided because the CEO was saying just weeks ago how everything is fine,” an anonymous user wrote on the message board Blind. “It has been an extremely rough day for 1/3 of Bolt employees,” another user posted. “Sadly, I was one of them who was let go after getting a pay raise just a couple of weeks ago.”

Nat Rubio-Licht

Nat Rubio-Licht is a Los Angeles-based news writer at Protocol. They graduated from Syracuse University with a degree in newspaper and online journalism in May 2020. Prior to joining the team, they worked at the Los Angeles Business Journal as a technology and aerospace reporter.

Climate

The fight to define the carbon offset market's future

The world’s largest carbon offset issuer is fighting a voluntary effort to standardize the industry. And the fate of the climate could hang in the balance.

It has become increasingly clear that scaling the credit market will first require clear standards and transparency.

Photo: Kevin Frayer/Getty Images

There’s a major fight brewing over what kind of standards will govern the carbon offset market.

A group of independent experts is looking to clean up the market's checkered record, and the biggest carbon credit issuer on the voluntary market is trying to influence its effort to define what counts as a quality credit. The outcome could make or break an industry increasingly central to tech companies meeting their net zero goals.

Lisa Martine Jenkins

Lisa Martine Jenkins is a senior reporter at Protocol covering climate. Lisa previously wrote for Morning Consult, Chemical Watch and the Associated Press. Lisa is currently based in Brooklyn, and is originally from the Bay Area. Find her on Twitter (@l_m_j_) or reach out via email (ljenkins@protocol.com).

Policy

White House AI Bill of Rights lacks specific guidance for AI rules

The document unveiled today by the White House Office of Science and Technology Policy is long on tech guidance, but short on restrictions for AI.

While the document provides extensive suggestions for how to incorporate AI rights in technical design, it does not include any recommendations for restrictions on the use of controversial forms of AI.

Photo: Ana Lanza/Unsplash

It was a year in the making, but people eagerly anticipating the White House Bill of Rights for AI will have to continue waiting for concrete recommendations for future AI policy or restrictions.

Instead, the document unveiled today by the White House Office of Science and Technology Policy is legally non-binding and intended to be used as a handbook and a “guide for society” that could someday inform government AI legislation or regulations.

The Blueprint for an AI Bill of Rights features a list of five guidelines for protecting people in relation to AI use.

Kate Kaye

Kate Kaye is an award-winning multimedia reporter digging deep and telling print, digital and audio stories. She covers AI and data for Protocol. Her reporting on AI and tech ethics issues has been published in OneZero, Fast Company, MIT Technology Review, CityLab, Ad Age and Digiday and heard on NPR. Kate is the creator of RedTailMedia.org and is the author of "Campaign '08: A Turning Point for Digital Media," a book about how the 2008 presidential campaigns used digital media and data.
