
Stop worrying about AI bias all the time, says Deloitte’s AI ethics lead

AI that discriminates against people is a big problem, but Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech, says AI ethics is about a lot more than bias.

Photo: Deloitte

“You won't see many people with my background talking about ethics,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company.

A computer scientist who worked as a database and SQL developer and held data science and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn’t always gung-ho to talk AI ethics. Then she decided to write a book about it.

“There has arguably never been a more exciting time in AI,” she wrote in her book, “Trustworthy AI.” “Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight. What passes for public scrutiny is too often just seductive, clickbait headlines that fret over AI bias and point to a discrete use case. There’s a lot of noise on AI ethics and trust, and it does not move us closer to clarity or consensus on how we keep trust in AI commensurate with its power.”

Ammanath calls the book, which attempts to move beyond hand-wringing over AI’s problems toward practical ways to develop AI with ethical considerations in mind, “a synthesis of especially the last 10 years of my professional experience.”

Protocol spoke with Ammanath about why ethical AI practices should be part of every employee’s training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.

This interview has been edited for clarity.

There’s no shortage of guidance, advice and lists of principles for ensuring AI is ethical or responsible. Why did you want to join the fray?

I really didn't want to.

You really didn't want to?

I didn't want to, but it reached a point — just like I didn't want to join the ethics and bias conversation four years ago, but I got pulled into all these discussions.

By training, I'm a computer scientist. In my prior lives, I built AI products, then [took them to] market. A lot of work at GE and HP. So, I have very much been focused on all the cool things and the value AI can bring to humans.

I realized that a lot of what was getting out was just one side of the story. When you think about AI ethics, the first thing that comes to mind is fairness and bias. And, yes, those are important, but that's not the only thing. Fairness and bias don't even apply in every possible scenario. Some of the work that I've done in the past was very much around predicting jet engine failure, or predicting how much power a wind turbine will generate, optimizing your IT servers and doing document management. And those are scenarios where it's not so much about fairness and bias, but it is more about, say, reliability — the robustness, the safety and security aspect of it.

Do you think that some of this ethics conversation is steering people to think AI is just inherently bad and they should just avoid it altogether and be scared of it?

That's the message that's going across: that it's a terrible thing for humanity. And I don't think it's all bad; neither is it all good. There are risks with it, and we need to address them. I want to bring more of that balanced perspective, a pragmatic, optimistic perspective.

I've talked to companies that do deep learning for detecting defects in manufacturing, for example. Those are some really practical things.

We hear about bias in the context of health care a lot, right? Just think of two scenarios where AI is used. One is in patient diagnosis — AI being used to predict health and diagnose a disease earlier. In that scenario, bias is a terrible, terrible thing. But if you're using AI to predict when an MRI machine or X-ray machine might fail, so that you can proactively send an engineer to go fix it, then, bias? Not so much. So anywhere there is no human data being used, usually bias doesn't come into play.

The other one you hear a lot about is facial recognition. If it's biased in a law enforcement scenario, where you're flagging people as criminals, bias is a terrible thing. We want it to be absolutely fair. But if you're using facial recognition [in] traffic lights to identify potential human trafficking victims, in that scenario, do we still want to use it because it's 60% better than just humans trying to do it by themselves, even though it is biased? It's more weighted, and it's more nuanced. It is not a one-size-fits-all.

Some companies that create AI technologies, especially if they are controversial, promote “AI for good” — pro bono donations of data or tech for COVID, cancer or climate research, for example. It can be seen as a red herring, like, “Look, we're doing good work, don't pay attention to the fact that we are working on controversial AI or have military contracts.” What are your thoughts about that kind of approach?

I think of AI at companies or organizations at the high level [as] two categories. There are the ones who are building the AI tools, Big Tech. They are building AI tools, they're pushing the levels on it, building those core capabilities. And then there are a lot more companies that are just using those tools in their specific context — the facial recognition example. The company that uses it in law enforcement is probably different from the company that uses it for human trafficking victim identification.

We tend to think it is just a Big Tech problem. It's just a problem of the company that's building the tool. But the companies that are using it are equally responsible, and there are things they can do and consider and weigh, because at the end of the day, the tool and its ethical implications, the risks, are going to depend on how it's used.

How are you applying some of the concepts that you discuss in your book in a concrete way at Deloitte?

The first step you can do, which we’re doing at Deloitte, [is] training — making sure every employee in the organization understands the ethical principles that the company believes in. Every company has integrity training, day one. So just add an extension to it on what the AI ethics principles are. That does assume that the key stakeholders, the C-suite, board members, have agreed upon the principles.

The next step is making sure every employee — not just your IT team, not just the data scientists, [but] every employee … knows what questions to ask and whom to call if they're not getting the right answers. It could be that intern in your marketing department who's evaluating an AI tool for recruiting. So every employee should be empowered, because in some form or another, they're using AI in their daily work.

Every project I've worked on, there's always a column or section which talks about the ROI. What's the value this project is going to bring, whether it's cost savings or new revenue. Add just one additional step which says, what are the ways this could go wrong? Who is it going to leave behind? What are the vulnerabilities?

In today's world, there is no thinking of the bad things that can happen. I'm an engineer by trade. Trust me, I like to focus on all the cool things technology can do. It's not in my DNA to think of, what are the ways this could go wrong? That's the bare minimum you can do.

Deloitte got a $106 million contract with the U.S. Defense Information Systems Agency in 2020 to build the Pentagon's AI development platform. Are there Pentagon projects that you're working on in any capacity? Is “what could go wrong?” a question that is asked in those kinds of environments?

I don't do client work. What I'm doing is internal transformation, so that team serving a client would reach out to me. In fact, I think I have one in my mailbox about another government client [asking] what are the ways that the team should be thinking about [these issues] and what should we be asking our client. The other one is making sure we are including an element of thinking about the ethical implications, the risks, the ways it could go wrong as part of the project.

I have to wonder if that actually gets translated. Sure, they're asking you, but do they actually bring that to the customer — especially when the customer is the U.S. government or the Pentagon? What's the real impact?

I know the conversation is happening. The team is offering scenarios that could go wrong. They can inform the client that we are going to be putting in the guardrails. It's not a single playbook. It really depends on the exact scenario, the solution that you're building. So even for me, it becomes very hard to be just very prescriptive. It's more about initiating that conversation, putting that in the back of your mind, proposing that and weaving it in as much as you can.
