Enterprise

Why AT&T moved its core tech — but not everything — to Microsoft Azure

Like many telcos building 5G networks, AT&T believes the cloud is finally ready to support that challenge. But AT&T's Jeremy Legg says it's still too expensive for some apps.


AT&T Chief Technology Officer Jeremy Legg discusses the move to Microsoft Azure.

Photo: AT&T

Telecommunications companies, traditionally concerned with stability and control, have been slower than companies in other industries to embrace cloud computing. Those days are long gone.

Last week AT&T announced plans to move its 5G network technology to Microsoft Azure. It not only signed a deal with the cloud provider to host its mobile workloads, but also transferred its core Network Cloud technology to Microsoft's budding Azure for Operators division. The two companies have been working together since 2019, but this announcement was the telco equivalent of landing in America and burning your ships: There's no going back once the core intellectual property has left the building.

For AT&T, it was simply time to acknowledge that big cloud providers like Microsoft finally offer the hardware, software and networking expertise required to run its network, said Jeremy Legg, chief technology officer for AT&T Communications, in an interview with Protocol. That just wasn't the case when mobile carriers started rolling out 4G networks more than a decade ago. And AT&T isn't alone in reaching that conclusion: Verizon signed a partnership deal with AWS in 2019, and Google Cloud is working with European carriers Orange and Telefónica.

But don't consider AT&T a total convert to the cloud just yet. Like a lot of companies of a certain age, AT&T wants to shed two-thirds of its sprawling data-center operation, but it believes certain applications will always make more sense to run in-house, for both cost and performance reasons.

Right before the holiday weekend, Legg talked about the Microsoft deal, AT&T Communications' broader cloud strategy and how the company is applying AI in hopes of making its customer service less painful.

This interview was edited for length and clarity.

Can you tell me a little bit about the Microsoft deal and how it came together?

We've been migrating to [the] cloud for some time, but we originally left a lot of the packet core networking stuff inside of our own data centers. But as you forecast out over time, the same macro forces that make it reasonable and cost-effective ... to move IT workloads to the cloud increasingly apply to network workloads.

Originally, [the] public cloud wasn't really set up to do network workloads at the level that we're talking about. But now the public cloud is capable of doing those things in combination with central compute as well as edge compute. So as you look at that, and you look at, from a pure cost basis, how much is it going to cost us to expand our on-premises footprints over time, and forecast that model out given the increase in consumption across these platforms, it begins to make sense to move it. And then as you also think about it from a software and software development standpoint, it increasingly makes sense to partner with one of the hyperscalers to make sure that you're innovating at the level that you need to.
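
Legg's reasoning here is essentially a forecasting exercise: project what expanding on-premises capacity would cost over time against consumption-based cloud pricing as demand grows. A minimal sketch of that kind of comparison, using entirely made-up growth rates, unit costs and block sizes rather than AT&T's actual economics, might look like this:

```python
# Hypothetical forecast comparing on-prem build-out with consumption-priced cloud.
# Every number here (growth rate, unit costs, block size) is an illustrative
# assumption, not AT&T's actual cost model.

YEARS = 5
GROWTH = 0.30                 # assumed 30% annual growth in network workload
START_DEMAND = 1_000.0        # arbitrary capacity units consumed in year 1

ON_PREM_UNIT_COST = 80        # assumed annual cost per unit of *installed* capacity
CLOUD_UNIT_COST = 100         # assumed annual cost per unit actually *consumed*
ON_PREM_BLOCK = 500.0         # on-prem capacity gets added in coarse blocks

demand = START_DEMAND
capacity = 0.0
on_prem_total = cloud_total = 0.0

for year in range(1, YEARS + 1):
    # On-prem must be provisioned ahead of demand, in whole blocks, so you
    # pay for headroom you are not yet using.
    while capacity < demand:
        capacity += ON_PREM_BLOCK
    on_prem_total += capacity * ON_PREM_UNIT_COST
    # Cloud bills only what is consumed that year.
    cloud_total += demand * CLOUD_UNIT_COST
    print(f"Year {year}: demand={demand:,.0f}  "
          f"on-prem cumulative=${on_prem_total:,.0f}  "
          f"cloud cumulative=${cloud_total:,.0f}")
    demand *= 1 + GROWTH
```

The specific numbers matter less than the shape of the curves: on-premises spend buys capacity in coarse blocks ahead of demand, while cloud spend tracks what is actually consumed.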

The deal that was just announced was a little bit more than a standard moving-the-workloads-over-to-the-cloud deal, in that Microsoft acquired AT&T's network platform technology as well. Can you explain a little bit more about what exactly that means and why that became part of this deal?

The intellectual property and employee aspects of that involve actually moving and incorporating parts of our core infrastructure into Azure. They will become responsible for the development and upgrading of our software packet cores and how we move wireless packets around. And we have a lot of packet cores.

They bought Affirmed, which is a packet core provider. We think that, in general, it's a better idea to have a company that wants to build the best possible packet core to serve an industry than us just trying to build one for ourselves.

Where does AT&T see the potential of edge computing? What are the types of things that you think will run best at the edge, both now and in the future?

Well, I think this is going to be a long road, not a short road, as it relates to edge compute. What it really boils down to is products.

Network packet cores have to run closer to the consumer in order to move the packets in the most efficient way possible. But when you begin to also think about the types of applications or services or products that you build at the edge, the architectures change quite a bit from things that were traditionally driven off of central compute, or cloud, or something that's in a traditional on-premises data center.

When you go to the edge, there are a lot of edges. You start talking about hundreds of edges around the country, let alone if you started thinking globally.

We talk about connected cars; well, they're moving. They've got to be able to go from one edge location to another edge location depending on where [they're] going. That architecture is very different from doing something through central compute.
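
The connected-car example hints at what changes architecturally: with hundreds of edge sites, a moving client has to be routed to whichever edge is currently closest and re-routed as it travels, rather than always talking to one central location. A toy sketch of that selection logic, with hypothetical site coordinates that are not AT&T's actual edge topology, might look like this:

```python
import math

# Hypothetical edge sites (name, latitude, longitude). Purely illustrative.
EDGE_SITES = [
    ("dallas", 32.78, -96.80),
    ("atlanta", 33.75, -84.39),
    ("chicago", 41.88, -87.63),
    ("denver", 39.74, -104.99),
]

def nearest_edge(lat: float, lon: float) -> str:
    """Return the edge site closest to the client's current position."""
    def distance(site):
        _, slat, slon = site
        # Rough flat-earth approximation; good enough for picking a region.
        return math.hypot(lat - slat, (lon - slon) * math.cos(math.radians(lat)))
    return min(EDGE_SITES, key=distance)[0]

# A moving client (e.g., a connected car) re-evaluates its serving edge as it
# travels, unlike a fixed client pinned to one central data center.
route = [(32.9, -96.7), (35.5, -92.0), (38.6, -90.2), (41.9, -87.6)]
for lat, lon in route:
    print(f"({lat:.1f}, {lon:.1f}) -> served from {nearest_edge(lat, lon)}")
```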

There are also things from a privacy and security standpoint that I think are important as people are working from home. This gets into things like extending corporate networks into the home, and how do you do that as a network provider, to essentially create a home as an endpoint on a corporate network?

What are your relationships with other cloud providers, and what is your long-term commitment to operating your own data centers?

Historically, AT&T has been a "host it and build it yourself" company, and we're in the midst of transitioning from that model to a public cloud model. We have [around] 30 physical data centers sprinkled across the company that we're trying to consolidate down to single digits.

That's being done in a number of ways. One is to just simply reduce the number of applications that we have; we have more than 7,000 applications sprinkled across the company and we want to eliminate as many of those as we can, particularly where they're redundant or legacy. And then we want to move certain strategic applications into our own data centers that don't necessarily pencil out to the cloud, but move the bulk of the balance up into the public cloud itself.

We've been doing that with Microsoft for some time. But we also have relationships with Amazon as well as with [Google Cloud Platform], where we use certain sets of capabilities in both of those clouds where it makes sense. So you can think about [machine learning] and AI layers, you can think about specific applications that they built on their service layers that we do take advantage of in addition to Azure.

Then what you also have is an increasing desire on the part of AT&T, but also [other] companies like us, to move some of that central cloud, service-layer compute and storage capability closer to the edge. Many of these companies have these kinds of models (AWS has had Outposts for some time, for example), and so we're in the midst of crafting relationships with those providers at the edge.

This gets into a lot of technology governance, particularly in large organizations, where you really have to control what goes into one cloud versus another cloud, but also recognize that there are certain capabilities in each of these clouds that are best in class; it would be silly of us to not leverage those.

You mentioned a few minutes ago that you are keeping some strategic applications in your own data centers. Can you give me some sense of what you consider strategic, what types of things you really want to make sure are running on infrastructure that you directly control?

We have certain things from the public sector standpoint in our data centers. And then we also have situations where certain types of workloads don't make sense to run in the cloud.

If you're running super-high compute 24/7, it's probably cheaper to run on-premises. The beauty of the cloud is you can spin things up and tear them down, and you're only paying on a consumption basis. But if you're consuming compute and storage and the whole nine yards 100% of the time, there's a cost equation that begins to come into play.
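
The cost equation Legg describes reduces to a break-even utilization: below some fraction of always-on usage, paying per consumption wins; above it, owning the hardware wins. A back-of-the-envelope sketch, with assumed prices that are not AT&T's actual rates, might be:

```python
# Back-of-the-envelope break-even utilization for cloud vs. on-prem.
# Prices are illustrative assumptions, not AT&T's actual rates.

CLOUD_COST_PER_HOUR = 1.50        # assumed on-demand price for a comparable instance
ON_PREM_COST_PER_MONTH = 600.0    # assumed amortized hardware + power + ops per month
HOURS_PER_MONTH = 730

# Cloud cost scales with how many hours the workload actually runs;
# on-prem cost is flat whether the box is busy or idle.
break_even_utilization = ON_PREM_COST_PER_MONTH / (CLOUD_COST_PER_HOUR * HOURS_PER_MONTH)
print(f"Break-even utilization: {break_even_utilization:.0%}")

for utilization in (0.10, 0.25, 0.55, 1.00):
    cloud_monthly = CLOUD_COST_PER_HOUR * HOURS_PER_MONTH * utilization
    cheaper = "cloud" if cloud_monthly < ON_PREM_COST_PER_MONTH else "on-prem"
    print(f"{utilization:.0%} busy: cloud=${cloud_monthly:,.0f}/mo vs "
          f"on-prem=${ON_PREM_COST_PER_MONTH:,.0f}/mo -> {cheaper} is cheaper")
```

At that hypothetical break-even point, an always-busy workload lands firmly on the on-premises side of the line, which is the calculus Legg describes for the applications AT&T keeps in-house.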

You'd have to run those models and look at certain applications and ask: Do you actually save any money or do you gain capabilities? If not, it doesn't make sense to re-architect that application and move it to the cloud.

What emerging enterprise technology do you think is the most interesting or exciting? You're not allowed to say edge computing for that category, because we've been talking about edge a lot.

[Laughs.] Well, I'd put ML and AI that leads to automation in there. I mean, it really is becoming real.

Take the way a customer interacts with a customer service agent: actually automating that through AI and ML so that for certain use cases they're interacting with a computer, not necessarily a person, and that system is smart enough to solve the customer's problem. That's pretty incredible stuff. If you had asked five years ago whether we'd be able to do that, not many people would have said you could.

Those things are becoming real, and we've deployed some of this. When you're talking about operating at the scale that we do, finding intelligent solutions that enable automation can be pretty game-changing. When we can serve a customer and keep them off the phone waiting to talk to an agent for five minutes and solve that problem with an AI/ML application, I think our customers are going to be happier.
