Microsoft will remove controversial automated tools that predict a person’s age, gender and emotional state from Azure Face API, its artificial intelligence service that analyzes faces in images, according to a report published by The New York Times on Tuesday.
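For context, those capabilities were offered as optional face attributes that developers could request when detecting faces. The snippet below is a minimal sketch of that kind of call, assuming the legacy azure-cognitiveservices-vision-face Python SDK; the endpoint, key and image URL are placeholders, not values from the article.

```python
# Minimal sketch of a Face API detection request (legacy Python SDK).
# ENDPOINT, KEY and IMAGE_URL are hypothetical placeholders.
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
KEY = "<your-face-api-key>"                                        # placeholder
IMAGE_URL = "https://example.com/photo.jpg"                        # placeholder

face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Request the attribute predictions being retired: age, gender and emotion.
faces = face_client.face.detect_with_url(
    url=IMAGE_URL,
    return_face_attributes=["age", "gender", "emotion"],
)

for face in faces:
    attrs = face.face_attributes
    print(attrs.age, attrs.gender, attrs.emotion)
```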
The technology giant said the AI features, which have been criticized as potentially biased and unreliable, will no longer be available to new users beginning this week and will be phased out for existing users within the year, the newspaper reported.
Microsoft will also restrict use of its facial recognition tool under a new “Responsible AI Standard,” a company document that sets out requirements and tighter controls for its AI systems following a two-year review. Those requirements, according to The New York Times, were designed to prevent Microsoft’s AI systems from having a detrimental effect on society by ensuring they provide “valid solutions for the problems they are designed to solve” and “a similar quality of service for identified demographic groups, including marginalized groups.”
A team headed by Natasha Crampton, Microsoft’s chief responsible AI officer, will review any new technologies that could be used to make decisions about a person’s access to employment, education, health care, financial services or a “life opportunity” before they are released, The New York Times reported. Some companies have started to market AI tools that claim they can assess a person's emotional state, which has set off alarm bells among privacy advocates.
“The potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems,” Crampton said in a blog post on Tuesday.
“The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust,” she said. “It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date … The Standard details concrete goals or outcomes that teams developing AI systems must strive to secure.”