
It’s never too early to get your AI ethics right

The Ethical AI Governance Group wants to give startups a framework for avoiding scandals and blunders while deploying new technology.

Anik Bose

Anik Bose is executive director of the Ethical AI Governance Group.

Image: BGV; Protocol

We all know when AI crosses an ethical line. Automated lending systems charging higher rates for people of color? Bad. News-feed algorithms feeding diet pill ads to teenagers with eating disorders? Yeah, that’s not so good either.

What’s harder is understanding what these examples have in common, and drawing lessons that apply to early-stage companies. There are plenty of broad statements of AI ethics principles, but few tools for putting them into practice, especially ones tuned to the harsh realities of startups tight on money and time.

That challenge extends to VCs, too, who increasingly must assess whether founders have thought through how customers, partners and regulators might react to the ways they’re using artificial intelligence.

Even when founders have the best intentions, it’s easy to cut corners. But without a clear ethics framework, the consequences can include regulatory delays, a longer road to profitability and even real-world harms that can do long-term damage to a company’s reputation.

To solve this problem, a group of consultants, venture capitalists and executives in AI created the Ethical AI Governance Group last September. In March, it went public and published a survey-style “continuum” for investors to use in advising the startups in their portfolios.

The continuum offers clear guidance for startups at various growth stages, recommending, for example, that startups put people in charge of AI governance and data privacy strategy. EAIGG leadership argues that following the continuum will protect VC portfolios from value-destroying scandals.

Anik Bose, general partner at Benhamou Global Ventures, is the executive director and founder of EAIGG. He spoke with Protocol about how startups can align their processes with their values and why he’s making sure companies in his firm’s portfolio follow the continuum’s advice.

This interview was edited for brevity and clarity.

How did you know that now was the right time to begin standardizing AI ethics? Hasn’t it always been important?

AI is a double-edged sword. On one hand, it has tremendous promise across industries: manufacturing, health care, consumer products, insurance, banking, you name it. People are betting on that promise: Private investments in AI are booming, and patent filings in AI are skyrocketing. And if you look at the top skills sought by employers today, No. 1 is a Ph.D. in AI.

Along with that promise comes fear of AI. The first fear, which is very visceral, is robots replacing humans, like the Terminator. The second is the fear of the concentration of AI assets. If you look at the FAANG companies, there's this fear that these guys will prevent the democratization of AI, because they have all the resources, all the people and are doing basically all the acquisitions in the space.

Then you look at AI policy today, and it's the Wild West. There's little or no regulation in the U.S., regulation is only now arriving in Europe, and there's a lack of general awareness about things like social exclusion, privacy intrusion and discrimination.

Given all that, we believe that now is the right time to operationalize AI ethics. You can't really wait for regulation to show up and tell you what to do.

Why do AI ethics matter from a business perspective?

It’s about customer trust and market adoption. Early-stage startups are out making evangelical sales to large enterprises. If those customers don't trust you or your product, you're in deep doo-doo. If your AI model is doing things it's not supposed to do, you're done.

Second, regulation is coming. If you start addressing this now, while you're still a startup, you'll be much better prepared when the guillotine drops.

The other two reasons are equally important, though people often don’t get this: Attracting and retaining top talent is the No. 1 issue for startups. More and more people want to make sure that the startups they work for have a deeper purpose beyond making money. They want to make the earth a better place. You're not going to recruit talent if you're building products in a mercenary way and not dealing with these issues.

Last, once you get to the place where you want to be acquired — like, let's say Microsoft or Google approaches you — I can tell you that during diligence on M&A, they're going to look at your ethics framework. If there’s some liability, not only will you not get acquired, but the valuation of your company might drop by a factor of 10.

Why is it important to have a single person in charge of AI ethics, rather than just making sure all employees are on board with the company’s values?

We fundamentally believe that accountability is best established by assigning clear-cut responsibility. Someone has to own it. We have learned through our experience with startups that the No. 1 reason for poor implementation of almost anything is a lack of clear accountability. So unless you assign AI governance to a specific person, it's not going to get done.

Think about it: The title “chief information security officer” did not exist in enterprises in the 1990s. Today, every enterprise has one. Is that one person responsible for the actions of the entire company? No, but they're ensuring the processes are in place. They're ensuring the tools are being used. At the end of the day, the board or the CEO can go to one person and say, “Where are we on this?”

What should that person’s title be within the organization?

In the early days it's going to be the VP of product management, the chief product officer or the founder driving the product, because they're the ones actually building with AI. They're the ones who can figure out: Are the right data sets being used? Is there model drift?
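A drift check like the one Bose describes doesn't have to be heavyweight. As a minimal sketch (the two-sample Kolmogorov-Smirnov test, the threshold and the toy data below are illustrative assumptions, not an EAIGG recommendation), a product team might compare a model's recent prediction scores against a training-time baseline:

```python
# Minimal drift check: flag when live prediction scores no longer look
# like the scores seen at training time. The alpha threshold and the
# synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(baseline_scores, live_scores, alpha=0.05):
    """Return True if the live score distribution differs significantly
    from the training-time baseline (two-sample KS test)."""
    _, p_value = ks_2samp(baseline_scores, live_scores)
    return p_value < alpha

# Example: validation scores from training vs. a week of production traffic.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, size=5000)  # stand-in for validation scores
live = rng.beta(2, 3, size=5000)      # stand-in for shifted production scores

print(scores_have_drifted(baseline, live))  # True: distributions diverged
```

A failing check is only a signal to investigate; whether the drift is harmless seasonality or a real data problem still takes the kind of product-level judgment Bose describes.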

Later, when you're generating $20 million or $50 million in revenue, you might have multiple products, and you might be using data in different ways. At that point, it makes sense to have someone in charge of just ethics, like an ethical AI officer or an adviser. A lot of late-stage startups today have a chief ethics officer. We believe that's going to become more common.

What are the next steps toward getting tech startups on board with AI ethics?

If you take a step back, education is a big part of the conversation we are having. Part of why we founded EAIGG was to open source best practices, so everyone can learn from each other. The continuum is just one tool. We also hosted a panel discussion about what financial services companies are doing in terms of AI governance and what their best practices are, and another panel with IBM, where they talked about Fairness 360, a bias-detection tool kit they've open sourced that we promote for use with AI models.
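For teams that want to try that route, here is a minimal sketch of measuring bias in a training set with IBM's open-source AI Fairness 360 (pip install aif360). The toy lending data, column names and group definitions are illustrative assumptions, not something from the interview:

```python
# Minimal sketch: dataset bias metrics with IBM's open-source AI Fairness 360.
# The toy lending data and group definitions below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy lending data: label 1 = loan approved, 0 = denied.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "sex":      [1, 0, 1, 1, 0, 0, 1, 0],  # 1 = privileged group
    "income":   [60, 35, 80, 55, 30, 40, 75, 38],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact well below 1.0 (a common rule of thumb is 0.8) suggests
# the unprivileged group receives favorable outcomes far less often.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```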

I think the continuum is a powerful tool for startups, but what we want with EAIGG is both to do more research that produces other tools and to push companies to open source the tools they're already using today. I'm sure Google has a lot of best practices that not many people know about, for example.

Last, we’re also going to compile tools to help people get on board with regulation. We believe Europe will lead the way with regulation, like they did with GDPR, and that the U.S. will follow. When regulation comes on a broader scale, and people get fined $5 million, $10 million, $50 million — I can tell you that people will start paying attention to AI ethics.
