Is there a place for emotion AI?
Hello and welcome to Protocol Enterprise! Today: why Microsoft and Google think controversial emotion-detection AI software has a place in accessibility work, the U.S. government sets new goals for ransomware reporting, and the golden age of the deepfake might be here sooner than we'd like.
Google and Microsoft: Emotion AI is risky, but we’re using it anyway
Despite its own splashy announcement that it will retire its emotion recognition technology because of the “risks” it creates, Microsoft will retain the capability in an app used by people with vision loss. Google also incorporates controversial emotion recognition features in some of its products.
- Microsoft said it would retire its facial analysis technology because of privacy concerns, a lack of consensus around how emotions are defined and questionable links between facial expressions and emotional states.
- “We worked alongside people from the blind and low vision community who provided key feedback that the emotion recognition feature is important to them, in order to close the equity gap between them and [the] experience of sighted individuals,” said Microsoft in a statement sent to Protocol.
- Microsoft said its decision to continue using emotion recognition in Seeing AI will help advance its accessibility mission, but although people have asked for years for an Android version of the app, it remains available only on Apple devices.
Google offers similar computer vision technology in its Cloud Vision API, which includes “pre-trained Vision API models to detect emotion” and rates the likelihood that a face in an image is expressing anger, joy, sorrow or surprise (a rough sketch of that API call follows the list below).
- The company also includes a feature in its ML Kit tool for mobile apps that classifies facial characteristics, such as whether someone's eyes are open or whether they are smiling.
- Google’s ethics team decided against expanding Cloud Vision’s emotion detection, limiting it to the four emotions.
- The ethics team that helped make that decision included Margaret Mitchell, who helped develop Microsoft’s Seeing AI in 2014 and is now chief ethics scientist at Hugging Face. She was fired from Google last year amid a string of high-profile ethics team departures.
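For a sense of what those four emotion ratings look like in practice, here's a minimal sketch of querying Cloud Vision's face detection with Google's Python client library. The image path is a placeholder, and the call assumes a Google Cloud project with the Vision API enabled and credentials already configured; it isn't a claim about how Seeing AI or any Google product uses the feature internally.

```python
# Minimal sketch: Cloud Vision face detection, which returns
# likelihood ratings for the four emotions mentioned above.
# Assumes the google-cloud-vision package is installed and
# GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "face.jpg" is a placeholder path for any image containing a face.
with open("face.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

for face in response.face_annotations:
    # Each rating is a bucketed Likelihood enum (VERY_UNLIKELY,
    # UNLIKELY, POSSIBLE, LIKELY, VERY_LIKELY), not a raw score.
    print(
        f"anger={face.anger_likelihood.name}",
        f"joy={face.joy_likelihood.name}",
        f"sorrow={face.sorrow_likelihood.name}",
        f"surprise={face.surprise_likelihood.name}",
    )
```

Notably, the API only exposes those coarse likelihood buckets for emotion, which reflects the limit Google's ethics team placed on expanding the feature.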
Researchers are pushing to advance emotion AI.
- At the Computer Vision and Pattern Recognition (CVPR) conference held in New Orleans in June, for example, accepted research papers included work on facial expression recognition and facial landmark detection.
- But as emotion AI is built into everyday tech products — from virtual meeting platforms and online classroom tools to in-car software to detect driver distraction or road rage — even its proponents are cautious.
- “I definitely see its uses, and I also envision many of its misuses. I think the mission of a company is really critical,” said Nirit Pisano, chief psychology officer at Cognovi Labs, which provides emotion AI technology to advertisers and pharmaceutical makers.
Check out my in-depth story about Microsoft’s Seeing AI and Google’s emotion recognition tech.
A friendly call from the feds
Here's a telling bit of information: When the DOJ released its new strategic plan for the next four years on Friday, it highlighted two priority areas in the category of keeping Americans safe. The first was reducing gun-related violent crime. Combating ransomware came in at No. 2.
In other words, the ransomware problem is really bad (we knew that) and the DOJ thinks there's a lot more it could be doing (we maybe didn't realize that part). Specifically, the department wants to get to a point where a timely response by its agencies to ransomware attacks becomes the norm: Its goal is for 65% of reported incidents to have, at minimum, a case opened within 72 hours by the fall of 2023.
But the emphasis, perhaps, should be on "reported." A CISA official was recently quoted as saying that the government is only hearing about a "tiny fraction" of ransomware attacks. Which is exactly why Congress earlier this year passed a bill requiring critical infrastructure operators to disclose major incidents to CISA within 72 hours. But the requirement may not take effect until more than three years from now — an eternity-and-a-half in cyber time.
Regardless, we do now have a pledge from the DOJ to respond more quickly to the ransomware attacks it does know about (the department didn't supply current figures for comparison). Whether the promise of a prompt call from the feds will lead to more ransomware reporting or less, it's hard to know. What we do know is that, unhappily for everyone except Russia and North Korea, ransomware is not going away anytime soon.
— Kyle Alspach (email | twitter)
Seeing is (not) believing
Right now, deepfakes still aren't very good. The gestures aren't synchronized, or the person's speech just sounds a bit … off. But it might not be too long before deepfakes are a lot more convincing — and possibly a greater threat from a cybersecurity perspective, security researcher Cameron Camp told me.
In fact, there's reason to suspect the timetable may be speeding up a bit, Camp says. One of the big challenges with creating believable-but-fake video or audio is the need for lots of CPUs and GPUs.
But the recent plunge in value for cryptocurrencies such as bitcoin means some crypto miners probably have "a lot of GPUs sitting around and not too much to do with them." It's likely that some have been reallocating their GPUs to deepfakes, which could be more lucrative for cybercrime purposes, according to Camp, who works for cybersecurity vendor ESET.
The threat posed by a fake voicemail from a CEO, for instance, could be a serious one (business email compromise scams are already successful way too often). And that's just one of the possible dangers of convincing deepfakes. When asked how soon he thinks we might get there, Camp told me he thinks it might only be another year or two. With deepfakes currently, "you can spot the difference," he says — but a future where you can't trust your eyes or ears may be here sooner than we'd like.
— Kyle Alspach (email | twitter)
Around the enterprise
Microsoft Azure Vice President Tom Keane said he’s leaving the company, after a report that described a pattern of verbal abuse in recent years.
The U.S. Air Force is so fed up with the delays around the DOD’s JEDI cloud contract and the forthcoming JWCC that it plans to go ahead with a multicloud infrastructure platform called “Cloud One.”
Thanks for reading — see you tomorrow!