“You won't see many people with my background talking about ethics,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company.
A computer scientist who worked as a database and SQL developer and held data science and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn’t always gung-ho to talk AI ethics. Then she decided to write a book about it.
“There has arguably never been a more exciting time in AI,” she wrote in her book, “Trustworthy AI.” “Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight. What passes for public scrutiny is too often just seductive, clickbait headlines that fret over AI bias and point to a discrete use case. There’s a lot of noise on AI ethics and trust, and it does not move us closer to clarity or consensus on how we keep trust in AI commensurate with its power.”
Ammanath calls the book, which attempts to move beyond hand-wringing over AI’s problems toward practical ways to develop AI with ethical considerations in mind, “a synthesis of especially the last 10 years of my professional experience.”
Protocol spoke with Ammanath about why ethical AI practices should be part of every employee’s training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.
This interview has been edited for clarity.
There’s no shortage of guidance, advice and lists of principles for ensuring AI is ethical or responsible. Why did you want to join the fray?
I really didn't want to.
You really didn't want to?
I didn't want to, but it reached a point where I couldn't stay out of it — just like I didn't want to join the ethics and bias conversation four years ago, but I got pulled into all these discussions.
By training, I'm a computer scientist. In my prior lives, I built AI products, then [took them to] market. A lot of work at GE and HP. So, I have very much been focused on all the cool things and the value AI can bring to humans.
I realized that a lot of what was getting out was just one side of the story. When you think about AI ethics, the first thing that comes to mind is fairness and bias. And, yes, those are important, but that's not the only thing. Fairness and bias don't even apply in every possible scenario. Some of the work that I've done in the past was very much around predicting jet engine failure, or predicting how much power a wind turbine will generate, optimizing your IT servers and doing document management. And those are scenarios where it's not so much about fairness and bias, but it is more about, say, reliability — the robustness, the safety and security aspect of it.
Do you think that some of this ethics conversation is steering people to think AI is just inherently bad and they should just avoid it altogether and be scared of it?
That's the message that's going across: that it's a terrible thing for humanity. And I don't think it's all bad; neither is it all good. There are risks with it, and we need to address them. I want to bring more of that balanced perspective, a pragmatic, optimistic perspective.
I've talked to companies that do deep learning for detecting defects in manufacturing, for example. Those are some really practical things.
We hear about bias in the context of health care a lot, right? Just think of two scenarios where AI is used. One is in patient diagnosis — AI being used to predict health and diagnose a disease earlier. In that scenario, bias is a terrible, terrible thing. But if you're using AI to predict when an MRI machine might fail, or an X-ray machine might fail, so that you can proactively send an engineer to go fix it, then, bias? Not so much. So anywhere there is no human data being used, bias usually doesn't come into play.
The other one you hear a lot about is facial recognition. If it's biased in a law enforcement scenario, where you're flagging people as criminals, bias is a terrible thing. We want it to be absolutely fair. But if you're using facial recognition [in] traffic lights to identify potential human trafficking victims, in that scenario, do we still want to use it because it's 60% better than just humans trying to do it by themselves, even though it is biased? It's more of a weighing, and it's more nuanced. It is not one-size-fits-all.
Some companies that create AI technologies, especially if they are controversial, promote “AI for good” — pro bono donations of data or tech for COVID, cancer or climate research, for example. It can be seen as a red herring, like, “Look, we're doing good work, don't pay attention to the fact that we are working on controversial AI or have military contracts.” What are your thoughts about that kind of approach?
At a high level, I think of companies or organizations [working with] AI in two categories. There are the ones who are building the AI tools — Big Tech. They are building AI tools, pushing the limits, building those core capabilities. And then there are a lot more companies that are just using those tools in their specific context — the facial recognition example. The company that uses it in law enforcement is probably different from the company that uses it for human trafficking victim identification.
We tend to think it is just a Big Tech problem. It's just a problem of the company that's building the tool. But the companies that are using it are equally responsible, and there are things they can do and consider and weigh, because at the end of the day, the tool and its ethical implications, the risks, are going to depend on how it's used.
How are you applying some of the concepts that you discuss in your book in a concrete way at Deloitte?
The first step, which we’re doing at Deloitte, [is] training — making sure every employee in the organization understands the ethical principles that the company believes in. Every company has integrity training, day one. So just add an extension to it covering the company's AI ethics principles. That does assume that the key stakeholders — the C-suite, board members — have agreed upon the principles.
The next step: every employee — not just your IT team, not just the data scientists, [but] every employee … should know what questions to ask and whom to call if he or she is not getting the right answers. It could be that intern in your marketing department who's evaluating an AI tool for recruiting. So every employee should be empowered, because in some form or another, they're using AI in their daily work.
Every project I've worked on, there's always a column or section which talks about the ROI. What's the value this project is going to bring, whether it's cost savings or new revenue. Add just one additional step which says, what are the ways this could go wrong? Who is it going to leave behind? What are the vulnerabilities?
In today's world, there is no thinking of the bad things that can happen. I'm an engineer by trade. Trust me, I like to focus on all the cool things technology can do. It's not in my DNA to think of, what are the ways this could go wrong? That's the bare minimum you can do.
Deloitte got a $106 million contract with the U.S. Defense Information Systems Agency in 2020 to build the Pentagon's AI development platform. Are there Pentagon projects that you're working on in any capacity? Is “what could go wrong?” a question that is asked in those kinds of environments?
I don't do client work. What I'm doing is internal transformation, so that team serving a client would reach out to me. In fact, I think I have one in my mailbox about another government client [asking] what are the ways that the team should be thinking about [these issues] and what should we be asking our client. The other one is making sure we are including an element of thinking about the ethical implications, the risks, the ways it could go wrong as part of the project.
I have to wonder if that actually gets translated. Sure, they're asking you, but do they actually bring that to the customer — especially when the customer is the U.S. government or the Pentagon? What's the real impact?
I know the conversation is happening. The team is offering scenarios that could go wrong. They can inform the client that we are going to be putting in the guardrails. It's not a single playbook. It really depends on the exact scenario, the solution that you're building. So even for me, it becomes very hard to be just very prescriptive. It's more about initiating that conversation, putting that in the back of your mind, proposing it and weaving it in as much as you can.