Why AT&T moved its core tech — but not everything — to Microsoft Azure

Like many telcos building 5G networks, AT&T thinks the cloud is finally ready to meet that challenge. But AT&T's Jeremy Legg also thinks the cloud is too expensive for some apps.

AT&T Chief Technology Officer Jeremy Legg discusses the move to Microsoft Azure.

Photo: AT&T

Telecommunications companies, traditionally concerned with stability and control, have been slower than companies in other industries to embrace cloud computing. Those days are long gone.

Last week AT&T announced plans to move its 5G network technology to Microsoft Azure. It not only signed a deal with the cloud provider to host its mobile workloads, but also transferred its core Network Cloud technology to Microsoft's budding Azure for Operators division. The two companies have been working together since 2019, but this announcement was the telco equivalent of landing in America and burning your ships: There's no going back once the core intellectual property has left the building.

For AT&T, it was simply time to acknowledge that big cloud providers like Microsoft finally offer the hardware, software and networking expertise required to run its networks, said Jeremy Legg, chief technology officer for AT&T Communications, in an interview with Protocol. That just wasn't the case when mobile carriers started rolling out 4G networks more than a decade ago. And AT&T isn't alone in reaching this conclusion: Verizon signed a partnership deal with AWS in 2019, and Google Cloud is working with European carriers Orange and Telefónica.

But don't consider AT&T a total convert to the cloud just yet. Like a lot of companies of a certain age, AT&T wants to shed two-thirds of its sprawling data-center operation, but it also believes that certain applications will always make more sense to run in-house, for both cost and performance reasons.

Right before the holiday weekend, Legg talked about the Microsoft deal, AT&T Communications' broader cloud strategy and how the company is applying AI in hopes of making its customer service less painful.

This interview was edited for length and clarity.

Can you tell me a little bit about the Microsoft deal and how it came together?

We've been migrating to [the] cloud for some time, but we originally left a lot of the packet core networking stuff inside of our own data centers. But as you forecast out over time, the same macro forces that make it reasonable and cost-effective ... to move IT workloads to the cloud increasingly apply to network workloads.

Originally, [the] public cloud wasn't really set up to do network workloads at the level that we're talking about. But now the public cloud is capable of doing those things in combination with central compute as well as edge compute. So you look at, from a pure cost basis, how much it's going to cost us to expand our on-premises footprint over time, and you forecast that model out given the increase in consumption across these platforms, and it begins to make sense to move it. And then as you think about it from a software and software development standpoint, it increasingly makes sense to partner with one of the hyperscalers to make sure that you're innovating at the level that you need to.
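As a rough illustration of the forecast Legg describes, a toy cost model can compare expanding an on-premises footprint against cloud consumption pricing as demand grows. Every number and growth rate below is hypothetical, chosen only to show the shape of the comparison, not AT&T's actual economics:

```python
# Toy forecast: on-premises build-out cost vs. cloud consumption cost
# as workload demand grows year over year. All figures are invented.

def onprem_cost(year, base_capacity=100, growth=1.3, unit_capex=50_000, opex_rate=0.2):
    """On-prem capacity must be built out to match demand: capex plus ongoing opex."""
    demand = base_capacity * growth ** year
    capex = demand * unit_capex      # build-out cost for this year's capacity
    opex = capex * opex_rate         # annual operations as a fraction of capex
    return capex + opex

def cloud_cost(year, base_capacity=100, growth=1.3, unit_price=60_000, discount=0.9):
    """Cloud charges per unit consumed; unit prices tend to fall as providers scale."""
    demand = base_capacity * growth ** year
    return demand * unit_price * discount ** year

for year in range(6):
    print(f"year {year}: on-prem ${onprem_cost(year):,.0f}  cloud ${cloud_cost(year):,.0f}")
```

Under these invented assumptions the two options start out equal, and the cloud pulls ahead as consumption grows and unit prices fall, which is exactly the crossover such a forecast is meant to surface.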

The deal that was just announced was a little bit more than a standard moving-the-workloads-over-to-the-cloud deal, in that Microsoft acquired AT&T's network platform technology as well. Can you explain a little bit more about what exactly that means and why that became part of this deal?

The intellectual property aspect of that, and the employee aspect of that, is that we're actually moving and incorporating parts of our core infrastructure into Azure. They will become responsible for the development and upgrading of our software packet cores and how we move wireless packets around. And we have a lot of packet cores.

They bought Affirmed, which is a packet core provider. We think that, in general, it's a better idea to have a company that wants to build the best possible packet core to serve an industry than us just trying to build one for ourselves.

Where does AT&T see the potential of edge computing? What are the types of things that you think will run best at the edge, both now and in the future?

Well, I think this is going to be a long road, not a short road, as it relates to edge compute. What it really boils down to is products.

Network packet cores have to run closer to the consumer in order to move the packets in the most efficient way possible. But when you begin to also think about the types of applications or services or products that you build at the edge, the architectures change quite a bit from things that were traditionally driven off of central compute, or cloud, or something that's in a traditional on-premises data center.

When you go to the edge, there's a lot of edges. You start talking about hundreds of edges around the country, let alone if you started thinking globally.

We talk about connected cars; well, they're moving. They've got to be able to go from one edge location to another edge location depending on where [they're] going. That architecture is very different from doing something through central compute.
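A minimal sketch of that difference: instead of routing everything to one central endpoint, a moving client has to be steered to the nearest of many edge sites as its position changes. The site names and coordinates below are invented for illustration:

```python
# Hypothetical nearest-edge selection for a moving client (e.g. a connected car).
# Site names and coordinates are made up; a real deployment would have
# hundreds of edge locations and richer routing criteria than distance alone.
import math

EDGE_SITES = {
    "dallas":  (32.78, -96.80),
    "atlanta": (33.75, -84.39),
    "denver":  (39.74, -104.99),
}

def nearest_edge(lat, lon):
    """Pick the edge site with the smallest great-circle (haversine) distance."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # distance in km
    return min(EDGE_SITES, key=lambda site: haversine((lat, lon), EDGE_SITES[site]))

# A car driving east from Texas toward Georgia hands off between sites.
print(nearest_edge(32.7, -96.9))   # near Dallas
print(nearest_edge(33.7, -84.5))   # near Atlanta
```

The interesting architectural work is everything this sketch leaves out: handing session state from one edge to the next without dropping the connection, which has no analogue in a single central deployment.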

There are also things from a privacy and security standpoint that I think are important as people are working from home. This gets into things like extending corporate networks into the home: How do you do that as a network provider, essentially making the home an endpoint on a corporate network?

What are your relationships with other cloud providers, and what is your long-term commitment to operating your own data centers?

Historically, AT&T has been a "host it and build it yourself" company, and we're in the midst of transitioning from that model to a public cloud model. We have [around] 30 physical data centers sprinkled across the company that we're trying to consolidate down to single digits.

That's being done in a number of ways. One is to simply reduce the number of applications that we have; we have more than 7,000 applications sprinkled across the company, and we want to eliminate as many of those as we can, particularly where they're redundant or legacy. And then we want to move certain strategic applications that don't necessarily pencil out to the cloud into our own data centers, but move the bulk of the balance up into the public cloud itself.

We've been doing that with Microsoft for some time. But we also have relationships with Amazon as well as with [Google Cloud Platform], where we use certain sets of capabilities in both of those clouds where it makes sense. So you can think about [machine learning] and AI layers, you can think about specific applications that they built on their service layers that we do take advantage of in addition to Azure.

Then what you also have is an increasing desire, on the part of AT&T but also [other] companies like us, to move some of that central cloud, service-layer compute and storage capability closer to the edge. Many of these companies have these kinds of models (AWS has had Outposts for some time, for example), and so we're in the midst of crafting relationships with those providers at the edge.

This gets into a lot of technology governance, particularly in large organizations, where you really have to control what goes into one cloud versus another cloud, but also recognize that there are certain capabilities in each of these clouds that are best in class; it would be silly of us to not leverage those.

You mentioned a few minutes ago that you are keeping some strategic applications in your own data centers. Can you give me some sense of what you consider strategic, what types of things you really want to make sure are running on infrastructure that you directly control?

We have certain things from the public sector standpoint in our data centers. And then we also have situations where certain types of workloads don't make sense to run in the cloud.

If you're running super-high compute 24/7, it's probably cheaper to run on-premises. The beauty of the cloud is you can spin things up and tear them down, and you're only paying on a consumption basis. But if you're consuming compute and storage and the whole nine yards 100% of the time, there's a cost equation that begins to come into play.

You'd have to run those models and look at certain applications and ask, "OK, does it make sense to re-architect this application and move it to the cloud? Do you actually save any money, or do you gain capabilities?"
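Running that kind of model can be as simple as comparing consumption-based pricing against the fixed cost of an always-on machine. A minimal sketch, with hypothetical prices:

```python
# Toy break-even calculation: at what duty cycle does always-on on-premises
# capacity beat cloud consumption pricing? Both hourly rates are invented.

CLOUD_PER_HOUR = 3.00    # consumption price for a comparable instance ($/hr)
ONPREM_PER_HOUR = 1.10   # amortized hardware + power + ops ($/hr, runs 24/7)

def monthly_cost_cloud(utilization, hours=730):
    """Cloud charges only for the hours you actually run."""
    return CLOUD_PER_HOUR * hours * utilization

def monthly_cost_onprem(utilization, hours=730):
    """On-prem costs the same whether the box is busy or idle."""
    return ONPREM_PER_HOUR * hours

def breakeven_utilization():
    """Below this duty cycle, cloud is cheaper; above it, on-prem wins."""
    return ONPREM_PER_HOUR / CLOUD_PER_HOUR

print(f"break-even at {breakeven_utilization():.0%} utilization")
for u in (0.1, 0.5, 1.0):
    cheaper = "cloud" if monthly_cost_cloud(u) < monthly_cost_onprem(u) else "on-prem"
    print(f"{u:.0%} duty cycle -> {cheaper} is cheaper")
```

With these made-up numbers the break-even lands around 37% utilization: spiky workloads favor consumption pricing, while the 24/7 case Legg describes favors the always-on on-premises box.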

What emerging enterprise technology do you think is the most interesting or exciting? You're not allowed to say edge computing for that category, because we've been talking about edge a lot.

[Laughs.] Well, I'd put ML and AI that leads to automation in there. I mean, it really is becoming real.

Take the way a customer interacts with a customer service agent: automating that through AI and ML so that, for certain use cases, they're interacting with a computer rather than a person, and that system is smart enough to solve the customer's problem. That's pretty incredible stuff. If you'd asked five years ago whether we could do that, not many people would have said you could.

Those things are becoming real, and we've deployed some of this. When you're talking about operating at the scale that we do, finding intelligent solutions that enable automation can be pretty game-changing. When we can serve a customer and keep them off the phone waiting to talk to an agent for five minutes and solve that problem with an AI/ML application, I think our customers are going to be happier.
