Salesforce’s AI software had it way wrong.
When the company’s engineers were incorporating tools that use natural-language processing and machine learning to analyze people’s sentiment toward products or brands in social media or reviews, they discovered a problem. The system automatically labeled content featuring words such as gay, Black or Muslim as negative, even when the surrounding statement was positive, as in “I’m gay and proud.”
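That failure is a well-documented pattern in sentiment models, and it can be surfaced with a simple counterfactual probe: score sentences that are identical except for one swapped-in term, and flag large gaps. The sketch below is illustrative only; the scoring function is a toy stand-in, since Salesforce’s actual models and tests are not public.

```python
# A counterfactual probe: sentences that differ only in a single term should
# receive similar sentiment scores. The scorer below is a toy lexicon-based
# stand-in for a real model (hypothetical, not Salesforce's).

def score_sentiment(text: str) -> float:
    """Toy scorer: positive-minus-negative lexicon hits, scaled by length."""
    positive = {"proud", "love", "great", "happy"}
    negative = {"hate", "awful", "terrible"}
    words = text.lower().rstrip(".!").split()
    return sum((w in positive) - (w in negative) for w in words) / len(words)

TEMPLATE = "I'm {term} and proud."
IDENTITY_TERMS = ["gay", "Black", "Muslim"]
NEUTRAL_TERM = "tall"  # a term the score should not react to

baseline = score_sentiment(TEMPLATE.format(term=NEUTRAL_TERM))
for term in IDENTITY_TERMS:
    score = score_sentiment(TEMPLATE.format(term=term))
    # A biased model shows large negative gaps for identity terms alone.
    print(f"{term:>8}: score={score:+.2f}  gap={score - baseline:+.2f}")
```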
Kathy Baxter and her Ethical AI team, part of the Ethics by Design group inside Salesforce, stepped in to assess the algorithmic tool. But despite the potential for discrimination, some product management decision-makers at the company wanted to roll out the product in beta anyway to get customer feedback.
In the end, the ethics group won. “We said no — we need to wait on this,” said Baxter, Salesforce’s ethical AI principal architect. “And so the team held back.”
It was a rare example of actual power for tech ethics watchdogs over product decisions that could affect countless people. Baxter’s ethical design group then worked with engineers to mitigate bias in the product’s machine-learning models, testing how those changes took effect before the tool finally launched as part of the company’s Einstein software in 2019. It is now a component of Salesforce’s Tableau CRM software.
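Testing how such changes take effect can be reduced to a regression check before launch: an aggregate gap metric that must shrink after mitigation. Everything in this sketch, including the scorers and the threshold, is a hypothetical stand-in; the article does not describe Salesforce’s actual test harness.

```python
# An illustrative launch gate: the worst-case sentiment gap across identity
# terms must shrink after mitigation and fall below a threshold.
# Scorers, terms and threshold are all hypothetical stand-ins.

PENALIZED = {"gay", "black", "muslim"}

def score_biased(text: str) -> float:
    """Stand-in 'before' model that penalizes identity terms."""
    return 0.8 - 0.6 * any(w in PENALIZED for w in text.lower().split())

def score_mitigated(text: str) -> float:
    """Stand-in 'after' model with the penalty removed."""
    return 0.8

def max_identity_gap(score_fn, terms, template="I'm {term} and proud.",
                     neutral="tall"):
    """Largest absolute gap between any identity term and a neutral one."""
    baseline = score_fn(template.format(term=neutral))
    return max(abs(score_fn(template.format(term=t)) - baseline) for t in terms)

terms = ["gay", "Black", "Muslim"]
gap_before = max_identity_gap(score_biased, terms)     # 0.60: clearly biased
gap_after = max_identity_gap(score_mitigated, terms)   # 0.00: gap removed
assert gap_after < gap_before and gap_after < 0.05, "hold the launch"
print(f"gap before={gap_before:.2f}, after={gap_after:.2f}: OK to ship")
```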
However, there are limits to what her team has the power to change. There are third-party technologies living on the Salesforce platform that the team simply cannot inspect. And as the company’s software application universe continues to expand, watching out for problematic AI will only become more challenging.
Baxter joined the company in 2015 as a user experience researcher before moving into her ethics role. Her team has since been folded into a larger Office of Ethical and Humane Use of Technology that sits inside the company’s product division and includes people with backgrounds ranging from philosophy and international studies to more traditional tech disciplines like computer and data science.
“In 2016, when Marc Benioff, our CEO, announced that we were going to become an AI-first company, and we started acquiring a bunch of AI companies, my concern was, how do we ensure that we are going to use this technology in a way that's responsible?” Baxter said.
Problem solving
Some of the companies that are subject to the most intense scrutiny of their AI practices say they have embedded ethics into product development. Facebook parent Meta has an interdisciplinary responsible AI group that works with its product teams to address fairness in its AI-enabled products. Microsoft’s responsible AI office seeds governance processes across the company, while another group there is dedicated to incorporating the company’s responsible AI rules into engineering work.
Google famously shuttered its AI ethics committee following criticism over its choices for committee members. The company also angered the AI ethics community after firing one of the industry's most respected AI ethics researchers. But today it still has a Responsible AI and Human-Centered Technology group that works with product and engineering teams.
In general, however, putting AI principles into practice remains uncommon, and many companies are looking to government to create regulations to guide that process.
Some ethics roles sit inside a legal compliance team or are relegated to a monthly governance or privacy check-in. However, Baxter’s team has a surprising amount of day-to-day interaction and input into decisions made throughout the organization.
When Salesforce was developing templates in its Einstein Discovery software for predictive analytics, also now part of Tableau CRM, her team convinced engineers to remove data fields showing zip codes. That neighborhood-level data can be a proxy for race, and therefore risked enabling racial bias in how the system made predictions.
“Although from a legal perspective, it probably would have been okay, from an ethical perspective, we said no. And so the team had to remove those fields,” she said.
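The reasoning behind pulling those fields can be shown on synthetic data: where residential segregation is strong, the zip code column alone reconstructs race, so any model trained on it inherits the proxy. The data and column names below are invented for illustration.

```python
# Synthetic illustration of a proxy variable: if zip code alone predicts race,
# a model trained on zip code can discriminate by race without ever seeing it.
import pandas as pd

df = pd.DataFrame({
    "zip_code": ["94105", "94105", "94105", "60619", "60619", "60619"],
    "race":     ["white", "white", "Asian", "Black", "Black", "Black"],
    "will_buy": [1, 0, 1, 0, 1, 0],  # the outcome a template would predict
})

# Proxy check: how often the majority race within each zip code matches a
# record's race. Values near 1.0 mean the field effectively encodes race.
majority_race = df.groupby("zip_code")["race"].agg(lambda s: s.mode().iloc[0])
proxy_accuracy = (df["zip_code"].map(majority_race) == df["race"]).mean()
print(f"zip code alone recovers race with accuracy {proxy_accuracy:.2f}")

# The remedy the team chose: drop the field before any model sees it.
features = df.drop(columns=["zip_code", "race"])
```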
Baxter even assists in sales interactions when, for instance, a customer wants to do something with the company’s software that doesn’t seem quite kosher, she said. “We will actually sit in on calls with our sales engineers,” said Baxter, who tries to steer customers toward alternatives to misguided approaches.
“It’s important to me to never come out of the gate and say no – that's against our acceptable-use policy,” she said. “Instead, tell me what the problem is you're trying to solve, because sometimes what they're trying to solve, their idea for the solution is just not the best one.” So, she said, she tries to work with the customer to find “a better solution.”
In general, Salesforce has what Baxter calls “red lines” for how its AI products can be used. For instance, its acceptable-use policy prohibits customers from using its computer vision, natural-language processing or predictive-analytics tools to estimate someone’s race or ethnic origin, religious or political beliefs, gender, sexual orientation, or health or financial status, or to predict their likelihood to commit a crime.
Beyond their control
But Salesforce isn’t just what is built by Salesforce.
Like many cloud platforms, Salesforce is a dense forest of software applications and tools. The company’s AppExchange hosts applications built by other companies that do anything from giving automated financial planning advice to managing patients with addictions.
Salesforce gently nudges tech partners toward more ethical practices. For instance, Baxter authored a paper published last year that laid out steps organizations can take to address ethical questions during AI development and implementation. The company also offers a Responsible Creation of Artificial Intelligence course for Salesforce users. But that educational module includes just four short classes lasting 45 minutes in total.
Indeed, Baxter’s team can only go so far to ensure that everything happening in the vast Salesforce universe is fair, non-discriminatory and privacy-preserving.
“There are real legal restrictions,” said Baxter, pointing to the company’s status as a data processor rather than a data controller under European data privacy regulations. “From a contractual standpoint as well, we can't look at our customers’ data or their models, or futz with them or tell them, no, you're not allowed to do that.”
Baxter said the pandemic put those limitations in stark relief. “We recognized that at the very beginning of the pandemic, we were probably going to see a whole lot of apps popping up on AppExchange. And there could be real concerns as far as: What data would they collect? How would they handle that data? Would it be inclusive?” said Baxter.
Salesforce put up some special barriers and created guidelines for application partners in the Vaccine Cloud it launched a year ago, which provides technologies to help public health agencies and businesses manage COVID-19 vaccine programs, employee health and in-person events. Baxter said the company required app providers in that health and vaccine category to have conversations with Salesforce about how they handle data, how their technologies are built and how they work.
Today, the Vaccine Cloud lets customers access all sorts of partner technologies used for things like digital vaccine credentialing for international travel or employee vaccination and testing management. “In that particular case, we recognize that it is very important to ensure that those apps are inclusive and as helpful and privacy-preserving as possible,” said Baxter.
For now, along with Salesforce’s internal product development policies and stopgaps, Baxter said education is the company’s primary tool for fostering more ethical practices among partners in its sprawling app marketplace. She also said that in 2022, the company’s account reps will be expected to offer customers additional AI ethics education and services.
In coming years, Baxter said, more software providers will have internal ethics staff, especially as regulators in Europe and elsewhere demand more transparent and accountable AI. For now, having someone on staff dedicated to ethics in the development of products or services is “a nice-to-have, it’s a pretty rare thing,” Baxter said. “But in a few years, it will be madness not to have at least one of these in your company.”
This story was updated to clarify the organizational structure of Salesforce's ethics team and to correct the year the product flagged by the team was released.