You’re showing too much AI bias
Hello, and welcome to Protocol Enterprise! Today: Why the AI ethics debate misses so much AI work that doesn’t involve humans, Heroku trickles out the bad news, and this week in enterprise tech moves.
Spin up
There’s a reason so many financial services companies still depend on mainframes: It’s really difficult, and takes a lot of time, to move critical applications to modern tech. However, according to research from Accenture, 82% of banking executives “plan to move more than half of their mainframe workloads to the cloud … and the vast majority plan to do so in the next two to five years.”
The nuanced debate over AI ethics
“You won't see many people with my background talking about ethics,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company.
A computer scientist who worked as a database and SQL developer and held data science- and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn’t always gung-ho to talk AI ethics. Then she decided to write a book about it.
“There has arguably never been a more exciting time in AI,” she wrote in her book “Trustworthy AI.” “Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight.”
Protocol spoke with Ammanath about why ethical AI practices should be part of every employee’s training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.
This interview has been edited for clarity. A longer version can be found here.
There’s no shortage of guidance, advice and lists of principles for ensuring AI is ethical or responsible. Why did you want to join the fray?
I really didn't want to.
You really didn't want to?
I didn't want to, but it reached the point where I couldn't stay out of it — just like I didn't want to join the ethics and bias conversation four years ago, but I got pulled into all these discussions.
By training, I'm a computer scientist. In my prior lives, I built AI products, then [took them to] market. A lot of work at GE and HP. So I have very much been focused on all the cool things and the value AI can bring to humans.
I realized that a lot of what was getting out was just one side of the story. When you think about AI ethics, the first thing that comes to mind is fairness and bias. And, yes, those are important, but that's not the only thing. Fairness and bias don't even apply in every possible scenario. Some of the work that I've done in the past was very much around predicting jet engine failure, or predicting how much power a wind turbine will generate, optimizing your IT servers and doing document management. And those are scenarios where it's not so much about fairness and bias, but it is more about, say, reliability — the robustness, the safety and security aspect of it.
Do you think that some of this ethics conversation is steering people to think AI is just inherently bad, and they should avoid it altogether and be scared of it?
That's the message that's going across: that it's a terrible thing for humanity. And I don't think it's all bad; neither is it all good. There are risks with it, and we need to address them. I want to bring more of that balanced perspective: a pragmatic, optimistic perspective.
I've talked to companies that do deep learning for detecting defects in manufacturing, for example. Those are some really practical things.
We hear about bias in the context of health care a lot, right? Just think of two scenarios where AI is used. One is in patient diagnosis: AI being used to predict health and diagnose a disease earlier. In that scenario, bias is a terrible, terrible thing. But if you're using AI to predict when an MRI machine or an X-ray machine might fail, so that you can proactively send an engineer to go fix it, then, bias? Not so much. So anywhere no human data is being used, bias usually doesn't come into play.
The other one you hear a lot about is facial recognition. If it's biased in a law enforcement scenario, where you're flagging people as criminals, bias is a terrible thing. We want it to be absolutely fair. But if you're using facial recognition [in] traffic lights to identify potential human trafficking victims, in that scenario, do we still want to use it because it's 60% better than just humans trying to do it by themselves, even though it is biased? It has to be weighed, and it's more nuanced. It is not one-size-fits-all.
Some companies that create AI technologies, especially if they are controversial, promote “AI for good” — pro bono donations of data or tech for COVID-19, cancer or climate research, for example. It can be seen as a red herring, like, “Look, we're doing good work, don't pay attention to the fact that we are working on controversial AI or have military contracts.” What are your thoughts about that kind of approach?
I think of AI at companies or organizations at the high level [as] two categories. There are the ones who are building the AI tools, Big Tech. They are building AI tools, they're pushing the limits, building those core capabilities. And then there are a lot more companies that are just using those tools in their specific context: the facial recognition example. The company that uses it in law enforcement is probably different from the company that uses it for human trafficking victim identification.
We tend to think it is just a Big Tech problem. It's just a problem of the company that's building the tool. But the companies that are using it are equally responsible, and there are things they can do and consider and weigh, because at the end of the day, the tool and its ethical implications, the risks, are going to depend on how it's used.
A MESSAGE FROM WORKPLACE FROM META

100% of C-suite staff surveyed by Workplace from Meta said that frontline workers were a strategic priority for their business in 2022, but nearly two in three said that retaining their frontline staff, who bear the brunt of workplace stresses most acutely, had only become a priority since the pandemic hit.
PaaSword blunders
Heroku disclosed on Thursday that customer passwords were stolen during a cyberattack that took place a month ago, acknowledging that an incident that also involved the code-hosting service GitHub was worse than initially indicated.
Heroku initially revealed on April 15 that a threat actor had likely accessed Heroku's GitHub account using a stolen authorization token, or OAuth token, and downloaded certain private Heroku repositories on April 9. The download included "some" Heroku source code, according to the disclosure.
In an update posted Wednesday evening, Heroku said the attacker actually gained access to a Heroku database on April 7, and downloaded GitHub integration OAuth tokens belonging to customers at the time. Heroku, owned by Salesforce, is a widely used platform for building, running and operating applications, and touts on its website that it has been used to develop 13 million apps.
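The core risk with a stolen OAuth token is that it is a bearer credential: GitHub's API grants whoever presents it the same access as the integration it was issued to, with no password or second factor in the loop. Here is a minimal, purely illustrative sketch of that dynamic, using a hypothetical placeholder token rather than anything from Heroku's actual integration:

```python
# Illustrative only: why a stolen GitHub OAuth token is dangerous.
# GitHub's REST API treats the token as a bearer credential, so anyone
# who holds it gets the same access as its legitimate owner.
import requests

STOLEN_TOKEN = "gho_placeholder_not_a_real_token"  # hypothetical value

resp = requests.get(
    "https://api.github.com/user/repos",
    headers={
        "Authorization": f"Bearer {STOLEN_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"visibility": "private", "per_page": 100},
)

# With a valid token, this enumerates private repositories: no password
# prompt, no second factor, nothing to distinguish the caller from the
# integration the token was issued to.
for repo in resp.json():
    print(repo["full_name"])
```

That is also part of why this kind of theft is quiet: the API traffic looks legitimate, and revoking or rotating the tokens is the only real remedy.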
"Access to the environment was gained by leveraging a compromised token for a Heroku machine account," Heroku said in the update. Most concerning for customers: Heroku said the investigation into the incident found that the compromised token was used by the attacker to steal hashed and salted passwords for user accounts belonging to customers.
"For this reason, Salesforce is ensuring all Heroku user passwords are reset and potentially affected credentials are refreshed. We have rotated internal Heroku credentials and put additional detections in place," Heroku said.
The update did not specify how many customers or user accounts may have been impacted, or explain why details about stolen customer passwords are only being disclosed now.
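Hashing and salting raise the cost of a breach without eliminating it: a unique random salt per user defeats precomputed lookup tables, but an attacker holding the stolen salts and hashes can still mount offline guessing attacks, which is why a forced reset is the standard response. A minimal sketch of the general technique follows, with hypothetical names; this is not Heroku's actual implementation:

```python
# Illustrative sketch of salted password hashing (hypothetical code,
# not Heroku's). A per-user random salt means identical passwords
# produce different hashes, but stolen (salt, hash) pairs can still
# be attacked offline, one guess at a time.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a per-user salt."""
    salt = salt or os.urandom(16)  # 16 random bytes, unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the digest with the stored salt; compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("password123", salt, stored)
```

The other half of the defense is the deliberately slow, iterated hash (PBKDF2 here; bcrypt, scrypt and Argon2 are common alternatives), which makes each offline guess expensive, though not impossible.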
Enterprise moves
Over the past week, another tech executive switched to venture, Intel and DocuSign added new talent to drive growth and Target named a new CIO.
Brett Craig was promoted to CIO at Target. Craig was previously Target’s SVP of Digital.
Sameer Dholakia is now a partner at Bessemer Venture Partners. Dholakia was formerly the CEO of SendGrid, and continued his tenure through the company’s acquisition by Twilio.
Matt Poirier was named SVP of Corporate Development at Intel. Poirier was most recently SVP of Strategy and Corporate Development for AMD.
Steve Shute is now president of Worldwide Field Operations at DocuSign. Shute previously led sales at SAP and IBM.
Jesse Hulsing joined LaunchDarkly as its first CFO. Hulsing was formerly SVP of Finance for Slack.
— Aisha Counts (email | twitter)

A MESSAGE FROM WORKPLACE FROM META

Businesses are starting to turn to workplace communication tools. Such tools enable frontline workers to feel more connected to the rest of their business, to raise concerns and to provide feedback on potential pain points or areas for improvement. By bridging that divide, companies can unlock new savings and efficiencies, and build a business that can last for the long run.
Around the enterprise
AWS plans to build five new data centers in Eastern Oregon, presumably to increase capacity at its U.S. West region hosted in the state.
Datadog revenue surged 83% during the first quarter, and the monitoring and observability company spent some of the proceeds on Hdiv Security, a Spanish startup.
The Biden administration urged businesses to start preparing for the arrival of quantum computers that could break their data encryption strategies, although at the rate we’re going, they’ve got some time.
Thanks for reading — see you tomorrow!