Skittish, reluctant, hesitant — downright scared. Financial services companies have been all that and more when it comes to migrating their heavily regulated, data-heavy businesses from legacy systems to the cloud.
But while some banks and credit card providers are still just dipping their toes, Capital One has been “all in” on the public cloud since 2015, according to Mike Eason, the company’s senior vice president of CIO Enterprise Data and Machine Learning. By 2020, Capital One had completed its full migration to AWS, even declaring, “We left our data centers behind” in a special website section.
“You see a bunch of our competitors following in our footsteps here,” Eason said, noting that choosing AWS as its primary cloud provider gives Capital One advantages. “We get to influence their roadmap,” he said.
But the company does have other data services partners, including Snowflake and Databricks, and it mixes and matches AWS services such as SageMaker with open-source components to customize the tech it uses.
These days Eason and his team of 1,800 engineers and technology staff are busy developing a self-service data pipeline and platform with tools for in-house staff to access data to build and train machine-learning models. “Rather than having a whole bunch of different platforms, how do we invest in one that everyone can take advantage of?” Eason said.
The company’s data lake is the destination for all of the data that flows into that system. “We’ve got a big lake that’s in the cloud,” Eason said, like an excited kid bragging about summer vacation.
Protocol caught up with Eason this week to talk about why the data lake is making a difference, why the company wants to automate how it explains its AI models and its efforts to expand Capital One’s company-wide team of 11,000 engineers from the inside.
This interview was edited for clarity.
Capital One has a data lake. Why is there a need for that? What’s unique about what you can do in a cloud data lake environment?
There are a couple of different things. One is just from a macro standpoint: the cost of data and compute is dramatically reduced. When we were on-prem, we were using the Teradatas of the world and others, and the cost of compute and space was dramatically different than it is today.
And then you're also just contained in what you can put in the four walls of your data center. And here [in cloud computing], it’s the elastic nature of it.
We're a big credit-card provider, and during the holidays, we can spin up more compute and more space and everything to handle the different loads as everyone's doing their holiday shopping, and so that aspect of the cloud has just been phenomenally important to us, and just a game changer.
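Eason didn’t walk through the mechanics, but in AWS terms that kind of holiday elasticity typically looks like an auto-scaling policy. Below is a minimal, hypothetical sketch using boto3’s Application Auto Scaling client; the service name, cluster, and capacity numbers are illustrative placeholders, not Capital One’s actual configuration.

```python
# A hedged sketch of elastic scaling for holiday load, using AWS
# Application Auto Scaling via boto3. The resource IDs and capacity
# numbers below are hypothetical, not Capital One's real setup.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register a (hypothetical) transaction-scoring ECS service as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/transaction-scoring",  # hypothetical
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=10,    # everyday baseline
    MaxCapacity=100,   # holiday-shopping headroom
)

# Track CPU utilization so capacity follows the load automatically,
# spinning up during the holiday rush and back down afterward.
autoscaling.put_scaling_policy(
    PolicyName="holiday-load-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/transaction-scoring",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # aim for ~60% average CPU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```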
From a lake standpoint, the amount of data that we can capture and utilize in our models is just tremendously different — like exponentially different. The lake provides that one copy of everything for us, and is the one place where all the data will be.
And so we use a combination of the lake and Snowflake for some more of the structured, traditional warehouse data.
What types of data points or data sources would be flowing into the lake versus a more structured environment?
Well, it would go into both. The lake is everything. It’s the receipt and the copy of all data from the company. So we’ve built a data pipeline to publish our data. And as an end user, you can then determine: I want to publish this data, so I’m gonna send it to the lake, but I also want to publish these attributes or this data to Snowflake.
Or – and this is something we just recently built – I might want to put data into a low-latency operational type of database that our operational systems can hit, or our models can hit.
So it’s one pipeline that gets to publish to many different locations. It’s a simpler, more self-service kind of platform for end users to publish data. The lake is the copy of everything. And then there might be a subset of needs in Snowflake for reporting, doing some general analysis, munging data together.
And then there’s the low-latency environment for more back-end, really quick models, making a fraud decision in the moment, when you're using the data to determine if Kate’s transaction is going to go through.
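Capital One hasn’t published the pipeline’s API, so the following is only a toy sketch of the publish-once, fan-out-to-many pattern Eason describes; the Record and Pipeline classes and the sink names are invented for illustration.

```python
# A toy sketch of "publish once, fan out to many destinations."
# All names here are hypothetical; the real platform's API is not public.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Record:
    key: str
    payload: dict[str, Any]
    # Which downstream stores this record should land in, chosen by the publisher.
    destinations: set[str] = field(default_factory=lambda: {"lake"})

class Pipeline:
    def __init__(self) -> None:
        # Stand-ins for the data lake, the warehouse (e.g. Snowflake), and the
        # low-latency operational store. In production these would be real
        # sinks: object storage, a warehouse stage, a key-value database.
        self.sinks = {
            "lake": [],          # the lake keeps a copy of everything
            "warehouse": [],     # structured subset for reporting and analysis
            "operational": {},   # key-value store for sub-100ms lookups
        }

    def publish(self, record: Record) -> None:
        # Every record lands in the lake, regardless of what the publisher chose.
        self.sinks["lake"].append(record.payload)
        if "warehouse" in record.destinations:
            self.sinks["warehouse"].append(record.payload)
        if "operational" in record.destinations:
            self.sinks["operational"][record.key] = record.payload

pipeline = Pipeline()
pipeline.publish(Record(
    key="txn-123",
    payload={"amount": 42.50, "merchant": "coffee shop"},
    destinations={"lake", "operational"},  # also needed for real-time decisions
))
```

The design choice the sketch illustrates: the lake is unconditional, while every other destination is an opt-in subset, so there is always exactly one complete copy.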
What’s an example of a low-latency use for a data lake?
Fraud is a great case of that. You're swiping the card, we have less than 100 milliseconds of determining if this is a fraudulent transaction or not. And you want as much data and as [many] data points to be able to make that decision.
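As a rough illustration of what a hard 100-millisecond budget implies for serving code, here is a hedged Python sketch: the stand-in model, the threshold, and the approve-on-timeout fallback are all assumptions, not Capital One’s actual fraud logic.

```python
# A minimal sketch of making a fraud call inside a hard latency budget.
# The scoring function and fallback policy below are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FuturesTimeout

LATENCY_BUDGET_S = 0.1  # "less than 100 milliseconds" per Eason

def score_transaction(features: dict) -> float:
    """Stand-in for a real model call; returns a fraud probability."""
    return 0.02 if features.get("amount", 0) < 500 else 0.40

def decide(features: dict, executor: ThreadPoolExecutor) -> str:
    future = executor.submit(score_transaction, features)
    try:
        # Block only as long as the budget allows.
        fraud_prob = future.result(timeout=LATENCY_BUDGET_S)
    except FuturesTimeout:
        # If the model can't answer in time, fail open rather than hold up
        # the card swipe. (Failing closed is an equally valid policy choice.)
        return "approve"
    return "decline" if fraud_prob > 0.3 else "approve"

with ThreadPoolExecutor(max_workers=4) as pool:
    print(decide({"amount": 42.50, "merchant": "coffee shop"}, pool))
```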
There’s increasing pressure on companies to audit, explain and monitor algorithmic and automated systems and provide reports on how they work or how they made decisions. What’s Capital One’s approach to this and how has that evolved?
This has been discussed in our world since the start of Capital One. We've used models since day one to predict credit loss and predict who we should give credit to based on the background of their financial performance and credit scores and all that. And then, like I said, making a call on a transaction. So obviously, they've progressed over the years [in] that you've got more data, more data points, more history. But we have a whole group that is focused on model governance, and so all of our models and the decisions are all vetted and monitored really closely.
As you get into a space of now having more data and being able to make more decisions more quickly, how do you ensure that you can grow that model governance piece along with it? And this gets into explainable AI and ML, which is a top focus within our whole ML strategy: ensuring that as we start to take advantage of more data, we're doing that in a very unbiased, explainable way. Today, even though I think our models are quite advanced, nothing is totally self-learning AI, like hands off. It's still all modeled and governed, and there are human eyes on it; that's going through the whole explainability piece.
What we’re focusing on internally is how you automate all the documentation around this and prove out the bias aspect and the other pieces, rather than doing it by hand. Especially as you're using more and more attributes in a model, it's incredibly time-consuming to try to do all of this manually. So how do you show all this explainability in a much more automated way? That's a big focus of a lot of the research we're doing with colleges right now. And there are a bunch of small companies focusing on this.
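One generic way to automate that kind of documentation is to generate feature-attribution reports directly from the model. The sketch below uses scikit-learn’s permutation importance on synthetic data; it’s an illustration of the idea, not Capital One’s governance tooling.

```python
# A hedged sketch of auto-generating explainability documentation using
# scikit-learn's permutation importance. Data and feature names are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Stand-in training data; real feature names would come from a model registry.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's score degrades -- a simple, model-agnostic attribution signal.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Emit a documentation stub automatically instead of hand-writing it per model.
report_lines = ["Model explainability report (auto-generated)"]
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda row: -row[1],
):
    report_lines.append(f"- {name}: importance {mean:.3f} (+/- {std:.3f})")
print("\n".join(report_lines))
```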
Are you exploring partnering with vendors that either help do monitoring to watch for problems like model drift or to help automate things like documenting explainability?
Yeah, we've done a bunch of discovery with some of the players in the space. Even though there are a lot of companies out there focusing on it, it's still kind of greenfield. And so we're working with a bunch of universities as well, from a research standpoint, specifically in this space. So we haven't picked one company that is gonna do the explainability for us. There are still many that we're investigating. And I think it's gonna be different for different situations, different models. It won't be one-size-fits-all.
Speaking of universities, it’s not easy to find people with engineering and machine-learning expertise.
Yeah, there’s a whole talent piece to this. We created the MLE job family to attract more talent and have specialization in the ML space, which has gone well for us. [Capital One categorizes employees and jobs into “families.”] And so trying to hire and retain and develop is a huge focus for us. We have a big tech college internal to Capital One that we've invested millions in. I happen to be the accountable executive for our college, and a big focus this year is a whole ML training program to develop people within the company to have this expertise, both from an engineering standpoint and from a data science standpoint, and for general business leaders: what do you need to know about ML, and where's the leverage?
Where are you getting people for that program? What types of roles do they have in Capital One?
We have a tech college; the folks there develop the curriculum, and they go outside to partners to get some of that curriculum, or we might just develop it ourselves. Engineers take time to actually create a module that others can learn from. And so it's a combination of things.
What about on the flip side — the students, the people who are doing that training? They already have some sort of job at Capital One. What kinds of jobs might they have?
More of your general software engineer. They might be doing front-end or back-end development, database or UI or mobile. And I'm assuming that over time, we'll probably do that more in the ML space. Right now, it's a general software-engineering-type program. But as ML becomes more commonplace, there will be modules specific to that.