Mike Murphy
The risk for Amazon of relying on AI to decide someone's mood

Amazon's new wearable wants to check users' body fat and "tone" with AI, which historically struggles with women and people of color.

Amazon's new Halo app and wearable.

Photo: Amazon

On the surface, Amazon's new Halo wearable seems like a straightforward product, on par with the Apple Watch and Google's Fitbit. The water-resistant band tracks your activity and sleep quality, but the companion app also has a few features that go well beyond what competitors offer. Unlike anything on the next Apple Watch or the newest Fitbit, those features could have damaging ramifications if rolled out improperly.

The wearable will track what Amazon calls your "tone," listening throughout the day to how you sound when you speak and offering suggestions on how to improve it. "For example, tone results may reveal that a difficult work call leads to less positivity in communication with a customer's family, an indication of the impact of stress on emotional well-being," the company said in a release.

A tweet from Ali Alkhatib, a research fellow at the Center for Applied Data Ethics, got me thinking about Amazon's history with automated systems. Earlier this year, Amazon announced a one-year pause on allowing police to use its facial-recognition software. That same software once incorrectly matched 28 U.S. congresspeople with mugshots, and the ACLU found "the false matches were disproportionately of people of color." Amazon's Ring video doorbells have reportedly been accessed by the company's own developers. And Ring has partnered with over 600 police forces across the U.S., after a concerted effort to woo departments into promoting its cameras to their local communities.

Are we on the precipice of people being convicted of crimes on the basis of AI data showing they had an aggressive tone? Or worse, could that data encourage police to use deadly force in the heat of the moment when confronting yet another young Black man?

Let's step back and look at the new types of data that Amazon's Halo is collecting. The first measurement looks at body fat: The app uses your phone's depth-sensor cameras and machine-learning algorithms to build a 3D model of your body, which is uploaded to the cloud, processed (and then deleted, Amazon says), and used to determine what percentage of your body is fat. Amazon says it's as accurate as any measurement a doctor could make. The app can even show what your body might look like if you gain or lose weight.

The second, easily the riskier of the two, is a measurement of the wearer's "tone." The wearable, much like an Amazon Echo, is apparently listening to what you say and how you say it over the course of a day. Amazon says its "AI analyzes qualities of the customer's voice such as pitch, intensity, tempo, and rhythm to predict how others would perceive and describe the customer's tone of voice." It then categorizes those tones with labels like "happy," "confused," or "worried," and suggests ways to improve your interactions. It's like an overbearing parent telling their child that they don't like their tone. The Halo cannot, as of yet, put you in time-out.
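Amazon hasn't published how its tone model actually works, but the general shape it describes — extract acoustic features from speech, then map them to a label — can be sketched in a few lines of Python. Everything below is invented for illustration: the zero-crossing rate is only a crude stand-in for pitch, and the thresholds bear no relation to Amazon's system.

```python
import math

def extract_features(samples, sample_rate):
    # Intensity: root-mean-square energy of the signal.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Pitch proxy: zero-crossing rate, a crude stand-in for fundamental frequency.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    pitch_hz = crossings * sample_rate / (2 * len(samples))
    return {"intensity": rms, "pitch_hz": pitch_hz}

def label_tone(features):
    # Toy thresholds, purely illustrative -- not Amazon's model.
    if features["intensity"] > 0.5 and features["pitch_hz"] > 250:
        return "agitated"
    if features["intensity"] < 0.1:
        return "calm"
    return "neutral"

# Example: a quiet 120 Hz sine tone reads as "calm".
rate = 8000
quiet = [0.05 * math.sin(2 * math.pi * 120 * t / rate) for t in range(rate)]
print(label_tone(extract_features(quiet, rate)))  # prints "calm"
```

The point of the sketch is how little the classifier "knows": it sees a handful of numbers, not meaning, and the labels are whatever its designers decided those numbers imply.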

These are complex systems that require diverse datasets to pull off anything like this with real accuracy. If Amazon relied on data that only used male bodies for the body-fat algorithm, or white voices for the tone system, the software would likely not work as well for everyone else. Bias creeps into AI when datasets, and the unwitting researchers who assemble them, fail to represent the general population. An Amazon spokesperson told Protocol that both features were "trained on a large amount of data across demographics," from both internal and publicly available sources.

Women and people of color have historically borne the brunt of discrimination by AI systems, and Amazon's own AI research has struggled with this very problem. It's not much of a stretch to imagine a system trained on white women's voices flagging a Black man's voice as more aggressive than it actually was.