Enterprise

Stop worrying about AI bias all the time, says Deloitte’s AI ethics lead

AI that discriminates against people is a big problem, but Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech, says AI ethics is about a lot more than bias.


Photo: Deloitte

“You won't see many people with my background talking about ethics,” said Beena Ammanath, executive director of the Global Deloitte AI Institute and head of Trustworthy AI and Ethical Tech at the global consulting company.

A computer scientist who worked as a database and SQL developer and held data science and AI-related technology roles at Bank of America, GE and Hewlett Packard before joining Deloitte in 2019, Ammanath wasn’t always gung-ho to talk AI ethics. Then she decided to write a book about it.

“There has arguably never been a more exciting time in AI,” she wrote in her book, “Trustworthy AI.” “Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight. What passes for public scrutiny is too often just seductive clickbait headlines that fret over AI bias and point to a discrete use case. There’s a lot of noise on AI ethics and trust, and it does not move us closer to clarity or consensus on how we keep trust in AI commensurate with its power.”

Ammanath calls the book, which attempts to move beyond hand-wringing over AI’s problems toward practical ways to develop AI with ethical considerations in mind, “a synthesis of especially the last 10 years of my professional experience.”

Protocol spoke with Ammanath about why ethical AI practices should be part of every employee’s training, the limitations of providing internal guidance inside a sprawling consultancy and why she finally gave in and joined the AI ethics conversation.

This interview has been edited for clarity.

There’s no shortage of guidance, advice and lists of principles for ensuring AI is ethical or responsible. Why did you want to join the fray?

I really didn't want to.

You really didn't want to?

I didn't want to, but it reached a point — just like I didn't want to join the ethics and bias conversation four years ago, but I got pulled into all these discussions.

By training, I'm a computer scientist. In my prior lives, I built AI products, then [took them to] market. A lot of work at GE and HP. So, I have very much been focused on all the cool things and the value AI can bring to humans.

I realized that a lot of what was getting out was just one side of the story. When you think about AI ethics, the first thing that comes to mind is fairness and bias. And, yes, those are important, but they're not the only thing. Fairness and bias don't even apply in every possible scenario. Some of the work that I've done in the past was very much around predicting jet engine failure, or predicting how much power a wind turbine will generate, optimizing your IT servers and doing document management. And those are scenarios where it's not so much about fairness and bias, but it is more about, say, reliability — the robustness, the safety and security aspects of it.

Do you think that some of this ethics conversation is steering people to think AI is just inherently bad and they should just avoid it altogether and be scared of it?

That's the message that's going across: that it's a terrible thing for humanity. And I don't think it's all bad; neither is it all good. There are risks with it, and we need to address them. I want to bring more of that balanced perspective, a pragmatic, optimistic perspective.

I've talked to companies that do deep learning for detecting defects in manufacturing, for example. Those are some really practical things.

We hear about bias in the context of health care a lot, right? Just think of two scenarios where AI is used. One is in patient diagnosis — AI being used to predict health and diagnose a disease earlier. In that scenario, bias is a terrible, terrible thing. But if you're using AI to predict when an MRI machine might fail, or X-ray machine might fail, so that you can proactively send an engineer to go fix it, then, bias? Not so much. So anywhere where there is no human data being used, usually, bias doesn't come into play.

The other one you hear a lot about is facial recognition. If it's biased in a law enforcement scenario, where you're flagging people as criminals, bias is a terrible thing. We want it to be absolutely fair. But if you're using facial recognition [in] traffic lights to identify potential human trafficking victims, in that scenario, do we still want to use it because it's 60% better than just humans trying to do it by themselves, even though it is biased? The decision is more weighted, more nuanced. It is not one-size-fits-all.

Some companies that create AI technologies, especially if they are controversial, promote “AI for good” — pro bono donations of data or tech for COVID, cancer or climate research, for example. It can be seen as a red herring, like, “Look, we're doing good work, don't pay attention to the fact that we are working on controversial AI or have military contracts.” What are your thoughts about that kind of approach?

I think of AI at companies or organizations at a high level [as] two categories. There are the ones who are building the AI tools: Big Tech. They are building AI tools, pushing the limits, building those core capabilities. And then there are a lot more companies that are just using those tools in their specific context — the facial recognition example. The company that uses it in law enforcement is probably different than the company that uses it for human trafficking victim identification.

We tend to think it is just a Big Tech problem. It's just a problem of the company that's building the tool. But the companies that are using it are equally responsible, and there are things they can do and consider and weigh, because at the end of the day, the tool and its ethical implications, the risks, are going to depend on how it's used.

How are you applying some of the concepts that you discuss in your book in a concrete way at Deloitte?

The first step you can do, which we’re doing at Deloitte, [is] training — making sure every employee in the organization understands the ethical principles the company believes in. Every company has integrity training, day one. So just add an extension to it on the company's AI ethics principles. That does assume that the key stakeholders, the C-suite and board members, have agreed upon the principles.

The next step is making sure every employee — not just your IT team, not just the data scientists, [but] every employee … should know what questions to ask and whom to call if he or she is not getting the right answers. It is that intern in your marketing department who's evaluating an AI tool for recruiting. So every employee should be empowered, because in some form or the other, they're using AI in their daily work.

Every project I've worked on, there's always a column or section which talks about the ROI. What's the value this project is going to bring, whether it's cost savings or new revenue. Add just one additional step which says, what are the ways this could go wrong? Who is it going to leave behind? What are the vulnerabilities?

In today's world, there is no step for thinking of the bad things that can happen. I'm an engineer by trade. Trust me, I like to focus on all the cool things technology can do. It's not in my DNA to ask, what are the ways this could go wrong? That's the bare minimum you can do.

Deloitte got a $106 million contract with the U.S. Defense Information Systems Agency in 2020 to build the Pentagon's AI development platform. Are there Pentagon projects that you're working on in any capacity? Is “what could go wrong?” a question that is asked in those kinds of environments?

I don't do client work. What I'm doing is internal transformation, so that team serving a client would reach out to me. In fact, I think I have one in my mailbox about another government client [asking] what are the ways that the team should be thinking about [these issues] and what should we be asking our client. The other one is making sure we are including an element of thinking about the ethical implications, the risks, the ways it could go wrong as part of the project.

I have to wonder if that actually gets translated. Sure, they're asking you, but do they actually bring that to the customer — especially when the customer is the U.S. government or the Pentagon? What's the real impact?

I know the conversation is happening. The team is offering scenarios that could go wrong. They can inform the client that we are going to be putting in the guardrails. It's not a single playbook. It really depends on the exact scenario, the solution that you're building. So even for me, it becomes very hard to be very prescriptive. It's more about initiating that conversation, putting that in the back of your mind, proposing it and weaving it in as much as you can.

Fintech

Gavin Newsom shows crypto some California love

“A more flexible approach is needed,” Gov. Newsom said in rejecting a bill that would require crypto companies to get a state license.


Photo: Jerod Harris/Getty Images for Vox Media

The Digital Financial Assets Law seemed like a legislative slam dunk in California for critics of the crypto industry.

But strong bipartisan support — it passed 71-0 in the state assembly and 31-6 in the Senate — wasn’t enough to convince Gov. Gavin Newsom that requiring crypto companies to register with the state’s Department of Financial Protection and Innovation is the smart path for California.

Benjamin Pimentel

Benjamin Pimentel (@benpimentel) covers crypto and fintech from San Francisco. He has reported on many of the biggest tech stories over the past 20 years for the San Francisco Chronicle, Dow Jones MarketWatch and Business Insider, from the dot-com crash, the rise of cloud computing, social networking and AI to the impact of the Great Recession and the COVID crisis on Silicon Valley and beyond. He can be reached at bpimentel@protocol.com or via Google Voice at (925) 307-9342.

Workplace

Slack’s rallying cry at Dreamforce: No more meetings

It’s not all cartoon bears and therapy pigs — work conferences are a good place to talk about the future of work.

“We want people to be able to work in whatever way works for them with flexible schedules, in meetings and out of meetings,” Slack chief product officer Tamar Yehoshua told Protocol at Dreamforce 2022.

Photo: Marlena Sloss/Bloomberg via Getty Images

Dreamforce is primarily Salesforce’s show. But Slack wasn’t to be left out, especially as the primary connector between Salesforce and the mainstream working world.

The average knowledge worker spends more time using a communication tool like Slack than a CRM like Salesforce, positioning it as the best Salesforce product to concern itself with the future of work. In between meeting a therapy pig and meditating by the Dreamforce waterfall, Protocol sat down with several Slack execs and conference-goers to chat about the shifting future.

Lizzy Lawrence

Lizzy Lawrence (@LizzyLaw_) is a reporter at Protocol, covering tools and productivity in the workplace. She's a recent graduate of the University of Michigan, where she studied sociology and international studies. She served as editor in chief of The Michigan Daily, her school's independent newspaper. She's based in D.C., and can be reached at llawrence@protocol.com.

LA is a growing tech hub. But not everyone may fit.

LA has a housing crisis similar to Silicon Valley’s. And single-family-zoning laws are mostly to blame.

As the number of tech companies in the region grows, so does the number of tech workers, whose high salaries put them at an advantage in both LA's renting and buying markets.

Photo: Nat Rubio-Licht/Protocol

LA’s tech scene is on the rise. The number of unicorn companies in Los Angeles is growing, and the city has become the third-largest startup ecosystem nationally behind the Bay Area and New York with more than 4,000 VC-backed startups in industries ranging from aerospace to creators. As the number of tech companies in the region grows, so does the number of tech workers. The city is quickly becoming more and more like Silicon Valley — a new startup and a dozen tech workers on every corner and companies like Google, Netflix, and Twitter setting up offices there.

But with growth comes growing pains. Los Angeles, especially the burgeoning Silicon Beach area — which includes Santa Monica, Venice, and Marina del Rey — shares something in common with its namesake Silicon Valley: a severe lack of housing.

Nat Rubio-Licht

Nat Rubio-Licht is a Los Angeles-based news writer at Protocol. They graduated from Syracuse University with a degree in newspaper and online journalism in May 2020. Prior to joining the team, they worked at the Los Angeles Business Journal as a technology and aerospace reporter.

Policy

SFPD can now surveil a private camera network funded by Ripple chair

The San Francisco Board of Supervisors approved a policy that the ACLU and EFF argue will further criminalize marginalized groups.

SFPD will be able to temporarily tap into private surveillance networks in certain circumstances.

Photo: Justin Sullivan/Getty Images

Ripple chairman and co-founder Chris Larsen has been funding a network of security cameras throughout San Francisco for a decade. Now, the city has given its police department the green light to monitor the feeds from those cameras — and any other private surveillance devices in the city — in real time, whether or not a crime has been committed.

This week, San Francisco’s Board of Supervisors approved a controversial plan to allow SFPD to temporarily tap into private surveillance networks during life-threatening emergencies, large events, and in the course of criminal investigations, including investigations of misdemeanors. The decision came despite fervent opposition from groups, including the ACLU of Northern California and the Electronic Frontier Foundation, which say the police department’s new authority will be misused against protesters and marginalized groups in a city that has been a bastion for both.

Issie Lapowsky

Issie Lapowsky (@issielapowsky) is Protocol's chief correspondent, covering the intersection of technology, politics, and national affairs. She also oversees Protocol's fellowship program. Previously, she was a senior writer at Wired, where she covered the 2016 election and the Facebook beat in its aftermath. Prior to that, Issie worked as a staff writer for Inc. magazine, writing about small business and entrepreneurship. She has also worked as an on-air contributor for CBS News and taught a graduate-level course at New York University's Center for Publishing on how tech giants have affected publishing.
