Enterprise

Why Capital One's Mike Eason loves a visit to its big data lake

Protocol caught up with Capital One’s Mike Eason to talk about the credit card giant’s data setup, a new in-house data platform for building machine learning models, and why the company wants to automate how it explains its AI.

Capital One SVP of CIO Enterprise Data and Machine Learning Mike Eason.

Photo: Capital One

Skittish, reluctant, hesitant — downright scared. Financial services companies have been all that and more when it comes to migrating their heavily regulated, data-heavy businesses from legacy systems to the cloud.

But while some banks and credit card providers are still just dipping their toes, Capital One has been “all in” on the public cloud since 2015, according to Mike Eason, the company’s senior vice president of CIO Enterprise Data and Machine Learning. By 2020, Capital One had completed its full migration to AWS, even declaring, “We left our data centers behind” in a special website section.

“You see a bunch of our competitors following in our footsteps here,” Eason said, noting that choosing AWS as its primary cloud provider gives Capital One advantages. “We get to influence their roadmap,” he said.

But the company does have other data services partners, including Snowflake and Databricks, and it mixes and matches AWS services such as SageMaker with open-source components to customize the tech it uses.

These days Eason and his team of 1,800 engineers and technology staff are busy developing a self-service data pipeline and platform with tools for in-house staff to access data to build and train machine-learning models. “Rather than having a whole bunch of different platforms, how do we invest in one that everyone can take advantage of?” Eason said.

The company’s data lake is the destination for all of the data that flows into that system. “We’ve got a big lake that’s in the cloud,” Eason said, like an excited kid bragging about summer vacation.

Protocol caught up with Eason this week to talk about why the data lake is making a difference, why the company wants to automate how it explains its AI models, and its efforts to expand Capital One’s company-wide team of 11,000 engineers from the inside.

This interview was edited for clarity.

Capital One has a data lake. Why is there a need for that? What’s unique about what you can do in a cloud data lake environment?

There’s a couple of different things. One is just, from a macro standpoint, the cost of data and compute is dramatically reduced. When we were on-prem, we were using the Teradatas of the world and others, and the cost of compute and storage then was dramatically different than it is today.

And then you're also constrained by what you can fit within the four walls of your data center. Here [in cloud computing], it’s the elastic nature of it.

We're a big credit-card provider, and during the holidays, we can spin up more compute and more space and everything to handle the different loads as everyone's doing their holiday shopping, and so that aspect of the cloud has just been phenomenally important to us, and just a game changer.

From a lake standpoint, the amount of data that we can capture and utilize in our models is just tremendously different — like exponentially different. The lake provides that one copy of everything for us, and is the one place where all the data will be.

And so we use a combination of the lake and Snowflake, with Snowflake handling more of the structured, traditional warehouse data.

What types of data points or data sources would be flowing into the lake versus a more structured environment?

Well, it would go into both. The lake is everything; it’s the receipt and the copy of all data from the company. So we've built a data pipeline to publish our data. And as an end user, you can then decide: I want to publish the data, so I’m gonna go to the lake, but I also want to publish these attributes or this data to Snowflake.

Or – and this is something we just recently built – I might want to put data into a low-latency operational database that our operational systems or our models can hit.

So it's one pipeline that publishes to many different locations. It’s a simpler, more self-service platform for end users to publish data. The lake is the copy of everything. And then there might be a subset of things in Snowflake for reporting, doing some general analysis, and munging data together.

And then there’s the low-latency environment for more back-end, really quick models, making a fraud decision in the moment, when you're using the data to determine if Kate’s transaction is going to go through.
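
Capital One hasn’t published this pipeline’s internals, but the pattern Eason describes, one publish path fanning out to a lake, a warehouse and a low-latency store, can be sketched in a few lines of Python. The sink class, the `PublishRequest` shape and the `publish` function below are hypothetical stand-ins, not the company’s actual interfaces.

```python
"""Minimal sketch of the one-pipeline, many-sinks pattern (hypothetical names).

The lake always receives the full record; publishers opt subsets of
attributes into the other destinations.
"""
from dataclasses import dataclass, field


class Sink:
    """In-memory stand-in for a destination (S3 lake, Snowflake, key-value store)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.records: list[dict] = []

    def write(self, record: dict) -> None:
        self.records.append(record)


@dataclass
class PublishRequest:
    record: dict
    warehouse_attrs: list[str] = field(default_factory=list)    # subset for warehouse reporting
    low_latency_attrs: list[str] = field(default_factory=list)  # subset for operational/model reads


def publish(req: PublishRequest, lake: Sink, warehouse: Sink, low_latency: Sink) -> None:
    lake.write(req.record)  # "The lake is the copy of everything."
    if req.warehouse_attrs:
        warehouse.write({k: req.record[k] for k in req.warehouse_attrs})
    if req.low_latency_attrs:
        low_latency.write({k: req.record[k] for k in req.low_latency_attrs})


lake, warehouse, fast = Sink("lake"), Sink("warehouse"), Sink("low_latency")
txn = {"card_id": "123", "amount": 42.50, "merchant": "coffee", "geo": "VA"}
publish(PublishRequest(txn, warehouse_attrs=["amount", "merchant"],
                       low_latency_attrs=["card_id", "geo"]), lake, warehouse, fast)
```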

What’s an example of a low-latency use for a data lake?

Fraud is a great case of that. You're swiping the card, and we have less than 100 milliseconds to determine whether this is a fraudulent transaction or not. And you want as much data and as many data points as possible to make that decision.
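
Capital One hasn’t described its fraud-serving stack, so the following is only a sketch of the general technique: enforcing a hard latency budget on a model call and falling back to a default decision when scoring runs long. The function names, the threshold and the decline-on-timeout policy are all assumptions made for illustration.

```python
"""Hypothetical sketch of a latency-budgeted fraud decision (not Capital One's code)."""
from concurrent.futures import ThreadPoolExecutor, TimeoutError

FRAUD_BUDGET_SECONDS = 0.1  # "less than 100 milliseconds"

_executor = ThreadPoolExecutor(max_workers=8)


def score_transaction(features: dict) -> float:
    """Placeholder for a real model call; returns a fraud probability."""
    return 0.02


def authorize(features: dict, threshold: float = 0.5) -> bool:
    """Approve if the model answers within budget and scores below threshold."""
    future = _executor.submit(score_transaction, features)
    try:
        return future.result(timeout=FRAUD_BUDGET_SECONDS) < threshold
    except TimeoutError:
        # Policy choice (assumed here): decline when scoring misses the budget.
        return False


print(authorize({"card_id": "123", "amount": 42.50, "geo": "VA"}))  # True
```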

There’s increasing pressure on companies to audit, explain and monitor algorithmic and automated systems and provide reports on how they work or how they made decisions. What’s Capital One’s approach to this and how has that evolved?

This has been discussed in our world since the start of Capital One. We've used models since day one to predict credit loss and predict who we should give credit to based on the background of their financial performance and credit scores and all that. And then, like I said, making a call on a transaction. So obviously, they've progressed over the years [in] that you've got more data, more data points, more history. But we have a whole group that is focused on model governance, and so all of our models and the decisions are all vetted and monitored really closely.

As you get into a space of having more data and being able to make more decisions more quickly, how do you ensure that you can grow that model-governance piece along with it? And this gets into explainable AI and ML, which is a top focus within our whole ML strategy: ensuring that as we start to take advantage of more data, we're doing that in a very non-biased, explainable way. Today, even though I think our models are very advanced, nothing is totally self-learning, hands-off AI. It’s still all modeled and governed, and there are human eyes on it; that's going through the whole explainability piece.

What we’re focusing on internally is how to automate all the documentation around this and prove out the bias aspect and the other pieces, but do it in a way that it's all handled much more automatically. Especially as you're using more and more attributes in the model, it's incredibly time-consuming to try to do all this manually. So how do you show all this explainability in a much more automated way? That's a big focus of the research we're doing with colleges right now. And there's a bunch of small companies that are focusing on this.
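
Eason didn’t name the tooling involved, but as a rough illustration of the automation he’s describing, here is a small Python sketch that trains a toy model and emits a machine-generated explainability summary. It uses scikit-learn’s permutation importance as a stand-in explainability method; the model, the feature names and the “model card” fields are all invented for the example.

```python
"""Sketch: auto-generating part of a model's explainability documentation.

Everything here is illustrative; permutation importance stands in for
whatever explainability methods a governance team would actually use.
"""
import json

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: four hypothetical credit features and a binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
feature_names = ["income", "utilization", "tenure_months", "txn_velocity"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Machine-generated importance scores on held-out data.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
card = {
    "model": "credit_risk_demo",
    "holdout_accuracy": float(model.score(X_te, y_te)),
    "feature_importance": {
        name: round(float(score), 4)
        for name, score in zip(feature_names, result.importances_mean)
    },
}
print(json.dumps(card, indent=2))  # would be filed with model-governance docs
```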

Are you exploring partnerships with vendors that either help monitor for problems like model drift or help automate things like documenting explainability?

Yeah, we've done a bunch of this discovery with some of the players in the space. Even though there's a lot of companies out there that are focusing on it, it's still kind of greenfield. And so we're working with a bunch of universities as well, from a research standpoint specifically in this space. So we haven't picked one company that is gonna do the explainability for us. There's still many that we're investigating. And I think it's gonna be for different situations, different models. It won't be one size fits all.
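
The monitoring side can also be sketched. One statistic many drift monitors compute is the population stability index, which compares a feature’s live distribution against its training-time distribution. The implementation below is a generic illustration, not any particular vendor’s method.

```python
"""Sketch: a basic drift check via the population stability index (PSI).

Generic illustration only; commercial monitors use richer methods
(quantile binning, per-feature and per-segment checks, alerting).
"""
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compare a live distribution against its training-time reference."""
    lo = min(reference.min(), current.min())
    hi = max(reference.max(), current.max())
    edges = np.linspace(lo, hi, bins + 1)  # equal-width bins for simplicity
    expected = np.histogram(reference, bins=edges)[0] / len(reference)
    actual = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) in empty bins.
    expected = np.clip(expected, 1e-6, None)
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))


rng = np.random.default_rng(1)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.3, 1.0, 10_000)   # shifted: simulates drift
print(f"PSI = {psi(train_scores, live_scores):.3f}")  # > 0.2 is a common drift flag
```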

Speaking of universities, it’s not easy to find people with engineering and machine-learning expertise.

Yeah, there’s a whole talent piece to this. We created the MLE job family to attract more talent and build specialization in the ML space, which has gone well for us. [Capital One categorizes employees and jobs into “families.”] So trying to hire and retain and develop is a huge focus for us. We have a big tech college internal to Capital One that we've invested millions in. I happen to be the accountable executive for our college, and a big focus this year is a whole ML training program to develop people within the company to have this expertise, both from an engineering standpoint and from a data science standpoint, and for general business leaders: What do you need to know about ML, and where's the leverage?

Where are you getting people for that program? What types of roles do they have in Capital One?

We have a tech college; the folks there develop the curriculum, and they go to outside partners to get some of that curriculum, or we might just develop it ourselves. Engineers take time to actually create a module that others can learn from. And so it's a combination of things.

What about on the flip side — the students, the people who are doing that training? They already have some sort of job at Capital One. What kinds of jobs might they have?

More of your general software engineers. They might be doing front-end or back-end development, databases, UI or mobile. And I'm assuming that over time, we'll probably do more of that in the ML space. Right now, it's a general software-engineering-type program. But as ML becomes more commonplace, there will be modules specific to that.
