It’s never too early to get your AI ethics right

The Ethical AI Governance Group wants to give startups a framework for avoiding scandals and blunders while deploying new technology.

Anik Bose

Anik Bose is executive director of the Ethical AI Governance Group.

We all know when AI crosses an ethical line. Automated lending systems charging higher rates for people of color? Bad. News-feed algorithms feeding diet pill ads to teenagers with eating disorders? Yeah, that’s not so good either.

What’s less easy is understanding what these examples have in common, and drawing lessons that apply to early-stage companies. There are plenty of broad statements of AI ethics principles, but few tools for putting them into practice, especially ones tuned to the harsh realities of startups tight on money and time.

That challenge extends to VCs too, who increasingly must assess whether founders have thought through how customers, partners and regulators might react to the ways they’re using artificial intelligence.

Even when founders have the best intentions, it’s easy to cut corners. But without a clear ethics framework, the consequences can include regulatory delays, a longer road to profitability and even real-world harms that can do long-term damage to a company’s reputation.

To solve this problem, a group of consultants, venture capitalists and AI executives created the Ethical AI Governance Group last September. In March, the group went public and published a survey-style “continuum” for investors to use in advising the startups in their portfolios.

The continuum offers clear guidance for startups at various growth stages, recommending, for example, that startups put specific people in charge of AI governance and data privacy strategy. EAIGG leadership argues that using the continuum will protect VC portfolios from value-destroying scandals.

Anik Bose, general partner at Benhamou Global Ventures, is the executive director and founder of EAIGG. He spoke with Protocol about how startups can align their processes with their values and why he’s making sure companies in his firm’s portfolio follow the continuum’s advice.

This interview was edited for brevity and clarity.

How did you know that now was the right time to begin standardizing AI ethics? Hasn’t it always been important?

AI is a double-edged sword. On one side, it has tremendous promise across industries: manufacturing, health care, consumer products, insurance, banking, you name it. People are betting on that promise: Private investment in AI is booming. If you look at the patent filings in AI, they’re skyrocketing. And if you look at the top skills sought after by employers today, No. 1 is a Ph.D. in AI.

Along with that promise comes fear of AI. The first fear, which is very visceral, is robots replacing humans, like the Terminator. The second is fear of the concentration of AI assets. If you look at the FAANG companies, there’s this fear that these guys will prevent the democratization of AI, because they have all the resources, all the people and are basically doing all the acquisitions in the space.

Then you look at AI policy today, and it's the Wild West. There's very little or no regulation in the U.S., it’s just now coming in Europe, and there's a lack of general awareness about things like social exclusion, privacy intrusion and discrimination.

Given all that, we believe that now is the right time to operationalize AI ethics. You can't really wait for regulation to show up and tell you what to do.

Why do AI ethics matter from a business perspective?

It’s about customer trust and market adoption. As an early-stage startup, you’re making evangelical sales to large enterprises. If they don’t trust you or your product, you’re in deep doo-doo. If your AI model is doing things that it is not supposed to do, you’re done.

Secondly, regulation is coming. If you start addressing this now, while you’re still a startup, you’ll be much better prepared when the guillotine drops.

The other two reasons are equally important, though people often don’t get this: Attracting and retaining top talent is the No. 1 issue for startups. More and more people want to make sure that the startups they work for have a deeper purpose beyond making money. They want to make the earth a better place. You're not going to recruit talent if you're building products in a mercenary way and not dealing with these issues.

Last, once you get to the place where you want to be acquired — like, let's say Microsoft or Google approaches you — I can tell you that during diligence on M&A, they're going to look at your ethics framework. If there’s some liability, not only will you not get acquired, but the valuation of your company might drop by a factor of 10.

Why is it important to have a single person in charge of AI ethics, rather than just making sure all employees are on board with the company’s values?

We fundamentally believe that accountability is best established by assigning clear-cut responsibility. Someone has to own it. We have learned through our experience in startups that the No. 1 reason for poor implementation of really anything is a lack of clear accountability. So we fundamentally believe that unless you assign AI governance to a person, it’s not going to get done.

Think about it: The title “chief information security officer” did not exist in enterprises in the 1990s. Today, every enterprise has one. Is that one person responsible for the actions of the entire company? No, but they're ensuring the processes are in place. They're ensuring the tools are being used. At the end of the day, the board or the CEO can go to one person and say, “Where are we on this?”

What should that person’s title be within the organization?

In the early days it’s going to be the VP of product management, the chief product officer or the founder driving the product, because they’re the ones actually building with AI. They are the guys who can figure out, “Are the right data sets being used?” or “Is there model drift?”

Later, when you’re generating $20 million, $50 million in revenue, you might have multiple products, and you might be using data in different ways. At that point, it makes sense to have someone in charge of just ethics, like an ethical AI officer or an adviser. You see a lot of late-stage startups today with a chief ethics officer. We believe that’s going to become more common.

What are the next steps toward getting tech startups on board with AI ethics?

If you take a step back, education is a big part of the conversation we are having. Part of why we founded EAIGG was to open source best practices, so everyone can learn from each other. The continuum is just one tool, but we also hosted a panel discussion about what financial services companies are doing in terms of AI governance and what their best practices are. We had another panel discussion with IBM, where they talked about AI Fairness 360, a tool they’ve open sourced and that we promote for use with AI models.

I think the continuum is a powerful tool for startups, but what we want with EAIGG is both to do more research to create other tools and to push companies to open source tools they’re already using today. I’m sure Google has got a lot of best practices that not many people know about, for example.

Last, we’re going to compile tools to help people get ready for regulation. We believe Europe will lead the way with regulation, as it did with GDPR, and that the U.S. will follow. When regulation comes on a broader scale, and people get fined $5 million, $10 million, $50 million, I can tell you that people will start paying attention to AI ethics.
