Taylor Swift tickets are hard to get. Edge computing could fix that.

The CEO of StackPath thinks it will be a few years before decentralized edge computing makes sense for more applications, but there are a few in-demand services that could benefit from that approach today.

A Taylor Swift concert in 2015

After pandemic restrictions lift, Taylor Swift concerts could be hard to get into. Might edge computing be just the ticket?

Photo: Chaz McGregor/Unsplash

StackPath CEO Kip Turco has an easy reference when asked about the potential benefits of edge computing platforms: Taylor Swift.

Whenever Swift returns to live performances as the pandemic abates, demand for tickets to her shows could be unprecedented. Hundreds of thousands of people will be competing for seats against bots and professional ticket-buying operations sure to swarm Ticketmaster's services, some of which run in AWS regional data centers.

"Milliseconds matter," Turco told Protocol, when it comes to a situation like the release of Taylor Swift tickets. A fan request to purchase tickets will have to travel to that regional data center and back, and even the smartest minds in enterprise tech haven't figured out how to increase the speed of light.
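As a rough illustration (not from the interview), the physical floor Turco is alluding to can be sketched with back-of-the-envelope numbers: light in fiber travels at roughly two-thirds the speed of light in a vacuum, so distance alone sets a minimum round-trip time before any server does any work. The distances below are hypothetical examples.

```python
# Rough sketch: the speed-of-light floor on round-trip time over fiber,
# assuming light in glass travels at about two-thirds of c.

SPEED_OF_LIGHT_KM_S = 299_792                     # vacuum, km/s
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # typical refractive index ~1.5

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip in milliseconds, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# A fan ~1,500 km from a regional cloud data center vs. ~50 km from an edge POP:
print(f"regional data center: {min_rtt_ms(1500):.1f} ms")
print(f"nearby edge POP:      {min_rtt_ms(50):.2f} ms")
```

Real-world latency is far worse than this floor once routing, queuing and processing are added, which is exactly why moving the compute closer is the only lever left.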

But edge computing, or decentralized computing, could change the way businesses offering time-sensitive services build and deploy those apps. By moving a modest amount of computing power closer to end users and connecting it with a highly efficient network, Turco thinks StackPath's services could help level the playing field, especially once the so-far underwhelming rollout of 5G wireless networks gains real traction.

StackPath, which has raised nearly $400 million since it was founded in 2015 to build out its content-delivery network, thinks it will be a few years before the market is ready for this technology. But lots of companies are planning for a world in which computing resources start to move away from centralized cloud data centers and closer to the edge of the network, and Turco shared his thoughts on how this market will evolve in a recent interview.

This interview has been edited and condensed for clarity.

StackPath CEO Kip Turco sees edge computing as the future. Photo: StackPath

What's the most clear example of why edge computing is needed today?

We view the potential of an edge compute marketplace as a 2024, '25, '26, '27 evolution. Some of that is tied to the timeline of things we're doing internally, but more of it, I think, is based on the marketplace.
The drivers that will open up that marketplace are things like 5G; everyone's advertising 5G, but you know as well as I do that the train has left the station but it's not even halfway there. So it's the evolution of 5G, it's the continued development of IoT products, and the AI associated with those IoT products closer to the end user.

And then the fourth thing, which I think really helped accelerate it this year, is COVID; not COVID itself, but the distributed work environment it created, which has changed how everyone's thinking about applications, the network and their proximity to end users.

What kind of computing resources can you expect at the edge?

We think about it in terms of, generally speaking, data in motion or data at rest. If you think of data at rest, that's like a typical primary [server], whether it's in a hyperscaler like Google, Microsoft, or Amazon, or your own core data center. That's where you're keeping critical company data.

Processing and storing data in motion, which I believe is what we're helping to solve with our edge use case, means the data that needs to be quickly accessed, stored, analyzed and then dumped: data you don't need to send back.

Here in Brookline near Boston, there's not a straight street in the whole damn town. And you have all the weather conditions, you have traffic that changes on an hourly basis, and then you roll forward something like an autonomous car. To drive around and make a determination like "Hey, there's an accident up here," or "Hey, it's going to start to snow, we need to go left or right," and get a reliable, millisecond-quick decision, those are the data sets that we see moving toward [the edge].

It sounds like most of those are not going to be sort of your typical corporate applications.

From my perspective, the main corporate applications that would need to reside out there are ones that are time- or latency-sensitive. So for example, one of the applications that we're working on to be deployed at the edge is a VDI [virtual desktop infrastructure] solution. This company made a VDI investment but has struggled to get its employees to use it. Now they've rewritten that application so it's a bit lighter, and we're deploying it closer to their employees or end users, and to marketplaces, to see if it works better for them.

What kind of network infrastructure investment do you need to have in order to really support this?

I believe when you roll it all forward, the more critical component will be the network, rather than the actual processing or storage. We were initially a network-driven CDN company, so for us, connectivity was the deciding factor in choosing all our locations.
The hyperscalers are attacking edge compute by pushing out from their huge centralized clouds closer to their end users. Someone like us or Fastly is doing the same thing, but by virtue of what we did with CDN, we're starting from the internet edge. We're attacking the edge compute market from different directions.

We're not trying to be everywhere on the globe; we're trying to be in the top 50 to 100 places you need to be to have a premier global network. And from there, we'll distribute to the edge as our customers need or want us to, instead of trying to build out 3,000 POPs [points of presence] globally, throwing dots all over a map and hoping we're in the right place.

So certainly you know the hyperscalers have invested a lot in networking capabilities. When you think about their private transit networks, they're some of the best in the world. That would appear to give them a fair amount of capacity and capability to be able to extend these types of services and ideas pretty far out onto the edge as well.

There's no doubt that they have expansive networking capabilities. But at a high level, I think the majority of those run from those hyperscaler pods to other massive [internet exchanges] or data centers. The networks that we've built, or that someone like Fastly has built, are more expansive from the end user, whether that's the TV sitting in the house, the mobile phone in your hand or the car you're driving, out toward the internet edge.

I'm not trying to compete with Google, Amazon, or Microsoft. I am trying to coexist to make sure that our platform or edge POPs are interoperable with all of those different providers or possible competitors.

From an application architectural standpoint, is there a lot of work that needs to be done to sort of make those applications edge-sensitive?

I'd say it's a fair amount of work. A majority of it revolves around decoupling their applications: taking what was written for data at rest in a mammoth centralized data center and creating a lighter-weight, data-in-motion version that's pushed out from the centralized location and just gradually updates the core app. I don't think it's super easy, but I don't think it's super complex.

I'm thinking about the companies who are rewriting applications now for cloud, so if you make that effort to do that for cloud, then you have to make an additional effort to do it for edge...

I don't think it's going to be nearly as hard as writing an application for the cloud.

What are people overlooking about edge and decentralized computing?

To me, it's like we're back 15 years ago at the start of the whole cloud thing: the market has moved, and now it's about how and when we get there. I think it's going to be similar in that we have an evolution in technology that's going to enable folks to think about their applications differently and run them more efficiently in the future.

Even something crazy simple: my son, who is huge into baseball. I remember [a few years ago], we'd be coming back from baseball practice and I'd be like, "Let's get McDonald's on the way home," which when I was a kid was, hands down, yes, I'm all over it. And he's like, "No, we've got to go home."

He's looking to jump on the Xbox to play Fortnite. The reason none of these kids play something like Fortnite on a mobile phone is that the experience sucks, because the throughput is bad. As 5G opens up, that's going to push companies to rethink their delivery model for the economic benefit of it.

Two years ago, my daughters were like, "Taylor Swift is coming to Gillette [Stadium], we're going to all stay up and we need everyone's device in the home, you can't go out of the home because it's not quick enough to sign up to get Taylor Swift tickets at 12:01 a.m." And I'm sitting there thinking to myself we've got zero chance, because they're all being served out of the Amazon data center probably on the East Coast, either in New York or in Montreal, and the people with the lowest ping time — milliseconds matter — are going to be able to get it.

Unless you're sitting right next to that data center, you're not getting in. With something like edge delivery, in both of those scenarios you actually could create a fair marketplace: when people put something out that is either super expensive or super scarce, they can run that auction at a set point in time on my infrastructure, or infrastructure like mine, where all the locations are out at the edge. That would provide a much more equal playing field for people.

