Salesforce empowered an AI ethics team to keep it honest. Dealing with third-party apps is trickier.

Kathy Baxter and her product design ethics team at Salesforce have a lot of sway over when tech is ready to ship and when it’s not. But there are lots of technologies living on the Salesforce platform beyond their control.

Salesforce’s AI software had it way wrong.

When the company’s engineers were incorporating tools that use natural-language processing and machine learning to analyze people’s sentiments toward products or brands in social media or reviews, they discovered a problem: The system automatically labeled content featuring words such as gay, Black or Muslim as negative, even when the surrounding statement was positive, as in “I’m gay and proud.”
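That failure mode is straightforward to probe with off-the-shelf tools. The sketch below is a hypothetical illustration, not Salesforce’s actual test: it runs a generic pretrained sentiment classifier over template sentences that differ only in a single identity term, so score gaps tied to the term itself surface the kind of bias the engineers found. The model and templates are assumptions for demonstration.

```python
# Hypothetical bias probe, not Salesforce's internal test. Assumes the
# Hugging Face transformers library; its default sentiment model stands in
# for whatever classifier is under evaluation.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Counterfactual templates: identical framing, only the identity term varies.
templates = [
    "I'm {} and proud.",
    "As a {} person, I loved this product.",
]
terms = ["gay", "Black", "Muslim", "tall", "left-handed"]

for template in templates:
    for term in terms:
        text = template.format(term)
        result = classifier(text)[0]
        print(f"{text!r}: {result['label']} ({result['score']:.2f})")

# A fair model should score these near-identical sentences similarly;
# large label or score gaps tied to identity terms signal the bias above.
```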

Kathy Baxter and her Ethical AI team, part of the Ethics by Design group inside Salesforce, stepped in to assess the algorithmic tool. But despite the potential for discrimination, some product management decision-makers at the company still wanted to roll out the product in beta to get customer feedback.

In the end, the ethics group won. “We said no — we need to wait on this,” said Baxter, Salesforce’s ethical AI principal architect. “And so the team held back.”

It was a rare example of actual power for tech ethics watchdogs over product decisions that could affect countless people. Ultimately, Baxter’s ethical design group worked with engineers to conduct bias mitigation on the product’s machine-learning models, then tested and monitored how those changes took effect before finally launching the tool in 2019 as part of the company’s Einstein software, which is now a component of its Tableau CRM software.
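Salesforce hasn’t detailed which mitigation techniques the team applied. One common approach, though, is counterfactual data augmentation: duplicating training examples with identity terms swapped so that a label no longer correlates with the term itself. A minimal sketch, with hypothetical data and term pairs:

```python
# Minimal sketch of counterfactual data augmentation, one common bias
# mitigation (not necessarily the technique Salesforce used). Examples
# containing an identity term are duplicated with the term swapped, so
# labels stop correlating with the terms themselves.
IDENTITY_PAIRS = [("gay", "straight"), ("Black", "white"), ("Muslim", "Christian")]

def augment(examples):
    """examples: list of (text, label) tuples; returns the augmented list."""
    augmented = list(examples)
    for text, label in examples:
        for a, b in IDENTITY_PAIRS:
            if a in text.split():
                augmented.append((text.replace(a, b), label))
            elif b in text.split():
                augmented.append((text.replace(b, a), label))
    return augmented

train = [("I'm gay and proud.", "positive"),
         ("The service was terrible.", "negative")]
print(augment(train))
# Adds ("I'm straight and proud.", "positive") alongside the originals.
```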

However, there are limits to what her team has the power to change. There are third-party technologies living on the Salesforce platform that they simply cannot inspect. And as the company’s software application universe continues to expand, watching out for problematic AI will only become more challenging.

Since Baxter started her ethics role after joining the company in 2015 as a user experience researcher, her team has been folded into a larger Office of Ethical and Humane Use of Technology, which sits inside the product division and includes people with backgrounds ranging from philosophy and international studies to more traditional tech disciplines like computer and data science.

“In 2016, when Marc Benioff, our CEO, announced that we were going to become an AI-first company, and we started acquiring a bunch of AI companies, my concern was, how do we ensure that we are going to use this technology in a way that's responsible?” Baxter said.

Problem solving

Some of the companies that are subject to the most intense scrutiny of their AI practices say they have embedded ethics into product development. Facebook parent Meta has an interdisciplinary responsible AI group that works with its product teams to address fairness in its AI-enabled products. Microsoft’s responsible AI office seeds governance processes across the company, while another group there is dedicated to incorporating the company’s responsible AI rules into engineering work.

Google famously shuttered its AI ethics committee following criticism over its choices for committee members. The company also angered the AI ethics community after firing one of the industry's most respected AI ethics researchers. But today it still has a Responsible AI and Human-Centered Technology group that works with product and engineering teams.

In general, however, putting AI principles into practice remains uncommon, and many companies are looking to governments to create regulations that guide the process.

Some ethics roles sit inside a legal compliance team or are relegated to a monthly governance or privacy check-in. However, Baxter’s team has a surprising amount of day-to-day interaction and input into decisions made throughout the organization.

When Salesforce was developing templates in its Einstein Discovery software for predictive analytics, also now part of Tableau CRM, her team convinced engineers to remove data fields showing zip codes. That neighborhood-level data can be a proxy for race, and therefore risked enabling racial bias in how the system made predictions.

“Although from a legal perspective, it probably would have been okay, from an ethical perspective, we said no. And so the team had to remove those fields,” she said.
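In code, that kind of guardrail can be as simple as stripping proxy columns before a model ever sees the training data. The sketch below is illustrative rather than Einstein Discovery’s implementation; the column names and denylist are assumptions:

```python
# Hypothetical sketch: drop fields that can act as proxies for protected
# attributes (e.g., zip codes as a proxy for race) before training.
import pandas as pd

PROXY_FIELDS = {"zip_code", "postal_code", "neighborhood"}  # illustrative denylist

def strip_proxy_fields(df: pd.DataFrame) -> pd.DataFrame:
    """Return the training data without known proxy columns."""
    present = [col for col in df.columns if col.lower() in PROXY_FIELDS]
    return df.drop(columns=present)

df = pd.DataFrame({
    "zip_code": ["94105", "60615"],
    "purchase_count": [12, 3],
    "churned": [0, 1],
})
print(strip_proxy_fields(df).columns.tolist())  # ['purchase_count', 'churned']
```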

Baxter even assists in sales interactions, she said, such as when a customer wants to do something with the company’s software that doesn’t seem quite kosher. “We will actually sit in on calls with our sales engineers,” said Baxter, adding that she tries to find alternatives to misguided approaches.

“It’s important to me to never come out of the gate and say no – that's against our acceptable-use policy,” she said. “Instead, tell me what the problem is you're trying to solve, because sometimes what they're trying to solve, their idea for the solution is just not the best one.” So, she said, she tries to work with the customer to find “a better solution.”

In general, Salesforce has what Baxter calls “red lines” for how its AI products can be used. For instance, its acceptable-use policy prohibits customers from using its computer vision, natural-language processing or predictive-analytics tools to estimate someone’s race or ethnic origin, religious or political beliefs, gender, sexual orientation, health or financial status or to predict likelihood to commit a crime.
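Red lines like these can also be backed by an automated check at model-configuration time. The following is a hypothetical illustration of such a check, not Salesforce’s enforcement mechanism; the attribute names and config shape are assumptions:

```python
# Hypothetical guard enforcing "red line" prediction targets, in the spirit
# of the acceptable-use policy described above (not Salesforce's actual code).
PROHIBITED_TARGETS = {
    "race", "ethnic_origin", "religion", "political_beliefs", "gender",
    "sexual_orientation", "health_status", "financial_status",
    "criminal_propensity",
}

def validate_model_config(config: dict) -> None:
    """Reject model configs whose prediction target is a prohibited attribute."""
    target = config.get("target", "").lower()
    if target in PROHIBITED_TARGETS:
        raise ValueError(f"Target '{target}' violates the acceptable-use policy.")

validate_model_config({"target": "churn_risk"})  # passes silently
try:
    validate_model_config({"target": "sexual_orientation"})
except ValueError as err:
    print(err)  # Target 'sexual_orientation' violates the acceptable-use policy.
```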

Beyond their control

But Salesforce isn’t just what is built by Salesforce.

Like many cloud platforms, Salesforce is a dense forest of software applications and tools. The company’s AppExchange hosts applications built by other companies that do anything from giving automated financial planning advice to managing patients with addictions.

Salesforce gently nudges tech partners toward more ethical practices. For instance, Baxter authored a paper published last year that lays out steps organizations can take to address ethical questions during the AI development and implementation phases. The company also offers a Responsible Creation of Artificial Intelligence course for Salesforce users, though that educational module includes just four short classes lasting 45 minutes in total.

Indeed, Baxter’s team can only go so far to ensure that everything happening in the vast Salesforce universe is fair, non-discriminatory and privacy-preserving.

“There are real legal restrictions,” said Baxter, pointing to the company’s status as a data processor rather than a data controller under European data privacy regulations. “From a contractual standpoint as well, we can't look at our customers’ data or their models, or futz with them or tell them, no, you're not allowed to do that.”

Baxter said the pandemic put those limitations in stark relief. “We recognized that at the very beginning of the pandemic, we were probably going to see a whole lot of apps popping up on AppExchange. And there could be real concerns as far as: What data would they collect? How would they handle that data? Would it be inclusive?” said Baxter.

Salesforce put up some special barriers and created guidelines for application partners in its Vaccine Cloud, launched a year ago to provide technologies that help public health agencies and businesses manage COVID-19 vaccine programs, employee health and in-person events. Baxter said the company required app providers in that health and vaccine category to have conversations with Salesforce about how they handle data and other aspects of how their technologies are built and how they work.

Today, the Vaccine Cloud lets customers access all sorts of partner technologies used for things like digital vaccine credentialing for international travel or employee vaccination and testing management. “In that particular case, we recognize that it is very important to ensure that those apps are inclusive and as helpful and privacy-preserving as possible,” said Baxter.

For now, along with Salesforce’s internal product development policies and stopgaps, Baxter said education is the company’s primary tool for fostering more ethical practices among partners in its sprawling app marketplace. She also said that, starting in 2022, the company’s account reps are expected to offer customers additional AI ethics education and services.

In coming years, Baxter said, more software providers will have internal ethics staff, especially as regulators in Europe and elsewhere demand more transparent and accountable AI. For now, having someone on staff dedicated to ethics in the development of products or services is “a nice-to-have, it’s a pretty rare thing,” Baxter said. “But in a few years, it will be madness not to have at least one of these in your company.”

This story was updated to clarify the organizational structure of Salesforce's ethics team and to correct the year the product flagged by the team was released.
