Enterprise

Stop worrying about AI bias all the time, says Deloitte’s AI ethics lead

AI that discriminates against people is a big problem, but Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech, says AI ethics is about a lot more than bias.

Photo: Deloitte

“You won't see many people with my background talking about ethics,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company.

A computer scientist who worked as a database and SQL developer and held data science and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn’t always gung-ho to talk AI ethics. Then she decided to write a book about it.

“There has arguably never been a more exciting time in AI,” she wrote in her book, “Trustworthy AI.” “Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight. What passes for public scrutiny is too often just seductive, clickbait headlines that fret over AI bias and point to a discrete use case. There’s a lot of noise on AI ethics and trust, and it does not move us closer to clarity or consensus on how we keep trust in AI commensurate with its power.”

Ammanath calls the book, which attempts to move beyond hand-wringing over AI’s problems toward practical ways to develop AI with ethical considerations in mind, “a synthesis of especially the last 10 years of my professional experience.”

Protocol spoke with Ammanath about why ethical AI practices should be part of every employee’s training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.

This interview has been edited for clarity.

There’s no shortage of guidance, advice and lists of principles for ensuring AI is ethical or responsible. Why did you want to join the fray?

I really didn't want to.

You really didn't want to?

I didn't want to, but it reached a point where I had to — just like I didn't want to join the ethics and bias conversation four years ago, but I got pulled into all these discussions.

By training, I'm a computer scientist. In my prior lives, I built AI products, then [took them to] market. A lot of work at GE and HP. So, I have very much been focused on all the cool things and the value AI can bring to humans.

I realized that a lot of what was getting out was just one side of the story. When you think about AI ethics, the first thing that comes to mind is fairness and bias. And, yes, those are important, but that's not the only thing. Fairness and bias don't even apply in every possible scenario. Some of the work that I've done in the past was very much around predicting jet engine failure, predicting how much power a wind turbine will generate, optimizing your IT servers and doing document management. And those are scenarios where it's not so much about fairness and bias, but it is more about, say, reliability — the robustness, the safety and security aspect of it.

Do you think some of this ethics conversation is steering people to think AI is inherently bad, and that they should avoid it altogether and be scared of it?

That's the message that's going across: that it's a terrible thing for humanity. And I don't think it's all bad; nor is it all good. There are risks with it, and we need to address them. I want to bring more of that balanced perspective, a pragmatic, optimistic perspective.

I've talked to companies that do deep learning for detecting defects in manufacturing, for example. Those are some really practical things.

We hear about bias in the context of health care a lot, right? Just think of two scenarios where AI is used. One is in patient diagnosis — AI being used to predict health and diagnose a disease earlier. In that scenario, bias is a terrible, terrible thing. But if you're using AI to predict when an MRI machine or an X-ray machine might fail, so that you can proactively send an engineer to go fix it, then bias? Not so much. So anywhere no human data is being used, bias usually doesn't come into play.

The other one you hear a lot about is facial recognition. If it's biased in a law enforcement scenario, where you're flagging people as criminals, bias is a terrible thing. We want it to be absolutely fair. But if you're using facial recognition [at] traffic lights to identify potential human trafficking victims, in that scenario, do we still want to use it, even though it is biased, because it's 60% better than humans trying to do it by themselves? It's a more weighted, more nuanced decision. It is not one-size-fits-all.

Some companies that create AI technologies, especially if they are controversial, promote “AI for good” — pro bono donations of data or tech for COVID, cancer or climate research, for example. It can be seen as a red herring, like, “Look, we're doing good work, don't pay attention to the fact that we are working on controversial AI or have military contracts.” What are your thoughts about that kind of approach?

At a high level, I think of AI at companies or organizations [as falling into] two categories. There are the ones who are building the AI tools: Big Tech. They're pushing the limits on it, building those core capabilities. And then there are a lot more companies that are just using those tools in their specific context — the facial recognition example. The company that uses it in law enforcement is probably different from the company that uses it for human trafficking victim identification.

We tend to think it is just a Big Tech problem. It's just a problem of the company that's building the tool. But the companies that are using it are equally responsible, and there are things they can do and consider and weigh, because at the end of the day, the tool and its ethical implications, the risks, are going to depend on how it's used.

How are you applying some of the concepts that you discuss in your book in a concrete way at Deloitte?

The first step, which we’re doing at Deloitte, [is] training — making sure every employee in the organization understands the ethical principles that the company believes in. Every company has integrity training, day one. So just add an extension to it covering the company's AI ethics principles. That does assume that the key stakeholders, the C-suite and board members, have agreed upon the principles.

The next step is making sure every employee — not just your IT team, not just the data scientists, [but] every employee — knows what questions to ask and whom to call if he or she is not getting the right answers. It could be that intern in your marketing department who's evaluating an AI tool for recruiting. So every employee should be empowered, because in some form or another, they're using AI in their daily work.

On every project I've worked on, there's always a column or section that talks about the ROI: What's the value this project is going to bring, whether it's cost savings or new revenue? Add just one additional step that asks: What are the ways this could go wrong? Who is it going to leave behind? What are the vulnerabilities?

In today's world, there is no thinking about the bad things that can happen. I'm an engineer by trade. Trust me, I like to focus on all the cool things technology can do. It's not in my DNA to ask, what are the ways this could go wrong? But asking that is the bare minimum you can do.

Deloitte got a $106 million contract with the U.S. Defense Information Systems Agency in 2020 to build the Pentagon's AI development platform. Are there Pentagon projects that you're working on in any capacity? Is “what could go wrong?” a question that is asked in those kinds of environments?

I don't do client work. What I'm doing is internal transformation, so a team serving a client would reach out to me. In fact, I think I have one in my mailbox about another government client [asking] about the ways the team should be thinking about [these issues] and what we should be asking our client. The other part is making sure we are including an element of thinking about the ethical implications, the risks and the ways it could go wrong as part of the project.

I have to wonder if that actually gets translated. Sure, they're asking you, but do they actually bring that to the customer — especially when the customer is the U.S. government or the Pentagon? What's the real impact?

I know the conversation is happening. The team is offering scenarios that could go wrong. They can inform the client that we are going to be putting in the guardrails. It's not a single playbook; it really depends on the exact scenario and the solution that you're building. So even for me, it becomes very hard to be very prescriptive. It's more about initiating that conversation, keeping it in the back of your mind, proposing it and weaving it in as much as you can.
