Enterprise

How IBM lost the cloud

Insiders say that marketing missteps and duplicated development efforts doomed IBM Cloud from the start. Eight years after IBM set out to launch its own public cloud, the effort's future is in dire straits.

Beset by marketing missteps and conflicting development priorities, IBM Cloud was doomed from the start. Image: Christopher T. Fong / Protocol

The words stunned IBM's cloud executives in November 2013. Former CEO Ginni Rometty had just told them that Watson, IBM's dubious crown jewel, should run on the company's own Power chips inside SoftLayer, IBM's recently acquired cloud-computing division.

There was one big problem: SoftLayer, like all major cloud efforts at that point, only used x86 chips from Intel and AMD.

What came next can only be described as a scramble, according to sources who worked for IBM at the time. After throwing together a barely working demo for IBM's Pulse conference in February 2014, where Rometty publicly announced the news, executives quickly convened in Texas, home to SoftLayer. They realized fulfilling Rometty's pledge would be daunting: They would have to rewrite parts of the Watson code base for the cloud, and quickly find, and then configure, enough Power servers to run alongside the all-x86 SoftLayer environment.

So began IBM's experiments with cloud computing, imperiled from the start by a maniacal focus on selling Watson at the height of its public awareness and doting obedience to a customer base that still didn't trust the cloud.

IBM was once — and still is, for people whose main sources of information about technology are television ads during sporting events — an American innovation icon, a company that literally created what we now think of as information technology. Its fortunes have risen and fallen with broader trends in computing, but around the time of that meeting in late 2013, its business and technology reputation began a steady decline that it has yet to avert.

Today, Rometty is gone, replaced by Arvind Krishna, the first technologist to hold the top seat at IBM since the 1970s. But IBM finds itself almost entirely dependent on its $34 billion purchase of Red Hat in order to stay relevant among modern IT buyers, and IBM executives don't really talk about its own public cloud division these days.

"They've given up on the idea of, 'we're going to be a major contender in the public cloud space,'" said Tracy Woo of Forrester Research. "Everyone is trying to win with edge [computing] in some way, and trying to create the most compelling story."

IBM CEO Arvind Krishna. Photo: Brian Ach/Getty Images for Wired

The opportunity was there for IBM: Longtime rival Microsoft successfully executed a pivot to cloud computing following the appointment of Satya Nadella in 2014, and while Azure was several years old at that point, it had only started offering Linux virtual machines, the lingua franca of the cloud era, the year before IBM's SoftLayer acquisition.

Now, IBM considers itself a "hybrid cloud" company, according to its executive talking points and commercials. But IBM's use of such tech buzzwords is a familiar strategy to those who have followed the company over the last decade: It's trying to convince longtime customers to stick with the partner that brought them to the dance despite there being a plethora of interesting alternatives.

"IBM is all-in on hybrid cloud and AI, determining years ago that our clients' only feasible path to rapid digital transformation is through a hybrid cloud strategy. Public cloud is an integral piece of that strategy," IBM said in a statement.

So how did IBM miss the cloud? Interviews with more than a dozen current and former IBM executives and employees painted a picture of a company caught moving in two directions: a group that correctly understood how the cloud was going to play an enormous role in the future of enterprise computing, matched up against a sales-driven culture that prioritized the custom needs of its large customers over the work required to catch up with AWS.

The SoftLayer bulletin

It was an AWS deal with the CIA that made IBM think differently about the cloud.

In 2013, the now-dominant cloud provider won a contract to build the next-generation enterprise-tech infrastructure for the country's spies. That forced IBM to acknowledge that the cloud era had not only arrived, but also that it was losing, according to multiple sources who worked for the company at the time. And as it lodged an ultimately unsuccessful protest against the decision to award AWS the contract, IBM announced it had acquired SoftLayer in June 2013.

The takeover was problematic almost from the start, according to multiple sources who worked for IBM at the time. At the outset, IBM was content to let SoftLayer continue to grow with a decent degree of autonomy, but the two companies looked at the world from different vantage points.

SoftLayer was built for small and medium-sized businesses, and its leadership team believed that was still the market around which it should design its infrastructure strategy. That market was mostly concerned with cost and less concerned with features and availability, and SoftLayer designed its cloud services accordingly.

A SoftLayer data center in Dallas, Texas. Photo: Bloomberg / Contributor

It operated 13 data centers when IBM acquired the company, but those data centers utilized relatively simple designs and were based almost exclusively around off-the-rack servers from Supermicro, according to sources. There's nothing inherently wrong with that approach, but at the time major cloud vendors were, and still are, designing their own servers with strict enterprise-grade performance and reliability criteria.

After a few years, IBM salespeople were eager to sell cloud services alongside a package of IBM's more traditional enterprise software, yet quickly found that SoftLayer didn't offer many of the services that huge corporations needed to embrace the cloud, according to the sources. Its data centers lacked some of the resiliency features that were table stakes at AWS, such as availability zones, and the servers weren't powerful enough to support large application deployments, they said.

And one of the biggest obstacles was SoftLayer's lack of support for virtual private cloud technology, which gives cloud customers additional control over how their applications run on cloud services. AWS introduced such a service in 2009, but IBM Cloud didn't get what one source called a "true" virtual private cloud service until 2019.

Some of these problems were understood at the time of the SoftLayer acquisition, and IBM tech executives thought they could fix them in short order, according to sources. But IBM's culture during those years proved too much of a roadblock.

If there's one common thread through the experiences of multiple current and former IBM employees, including those who didn't work for the cloud division, it's the power that current customers had over everything IBM did.

Over and over again during the last decade, IBM engineers were asked to build special one-off projects for key clients at the expense of their road maps for building the types of cross-customer cloud services offered by the major clouds. Top executives at some of the largest companies in the country — the biggest banks, airlines and insurance companies — knew they could call IBM management and get what they wanted because the company was so eager to retain their business, the sources said.

This practice, which delayed work on key infrastructure services for months or even years, was still happening inside IBM as recently as last year, according to one source.

"To the extent IBM is a public cloud provider, they do so as it adds to their broader orientation as a hybrid cloud platform provider," said Melanie Posey, an analyst with S&P Global Market Intelligence. "And the stuff that's on IBM's hybrid cloud platform includes IBM's public cloud, which some of their traditional long-standing IT enterprise customers prefer, like, 'let's keep it all in the family.'"

Build it once, build it twice

Just a few years after acquiring SoftLayer, IBM's top executives knew their cloud strategy as designed was not going to work. Convinced they needed fresh eyes, they hired several executives from Verizon's cloud services business — which it would later acquire — to rebuild IBM Cloud.

John Considine became general manager of IBM Cloud Infrastructure in November 2016 and was given the leeway to install a brand-new cloud infrastructure architecture to replace SoftLayer's approach. He began work on a project internally code-named "Genesis," an ambitious attempt to build an enterprise-grade cloud system from scratch.

Before too long, however, IBM began to realize that Genesis was unlikely to scale well enough to be a competitive threat to AWS or Microsoft. A decision to use Intel's Red Rock Canyon networking chip proved particularly troublesome, according to sources, as it caused IBM to rank very poorly on a key (if not exactly workaday) test used by Gartner to rate cloud vendors: launching 1,000 virtual machines at the same time.

And at first, Genesis still lacked support for the key virtual private cloud technology that both engineers and salespeople had identified as important to most prospective cloud buyers.

This caused a split inside IBM Cloud: A group headed by the former Verizon executives continued to work on the Genesis project, while another group, persuaded by a team from IBM Research that concluded Genesis would never work, began designing a separate infrastructure architecture called GC that would achieve the scaling goals and include the virtual private cloud technology using the original SoftLayer infrastructure design.

Genesis would never ship. It was scrapped in 2017, and that team began work on its own new architecture project, internally called NG, that ran in parallel to the GC effort.

For almost two years, two teams inside IBM Cloud worked on two completely different cloud infrastructure designs, which led to turf fights, resource constraints and internal confusion over the direction of the division. The cancellation of Genesis forced IBM to write off nearly $250 million in Dell servers (a bitter irony, in that IBM had sold off its own x86 server business shortly after acquiring SoftLayer) that had been purchased for that project, according to one source.

And the two architectures — which IBM had intended to be compatible but, due to subtle design differences, were not — became generally available within four months of each other in 2019. IBM continued to maintain two different cloud architectures until earlier this year, according to one source, when the GC effort was scrapped.

Presented with a detailed account of what this story would contain, IBM declined to dispute any of the facts, and sent over the following statement:

We spent more than two years evolving IBM's cloud to be the industry's most secure, enterprise-grade cloud built on a foundation of open source software – offering our clients choice, instead of locking them in. We have integrated key capabilities from across the IBM portfolio – from Software and AI to System Z and Power to our Services offerings. And we continue to invest in our global cloud footprint, making IBM Cloud the right choice for clients in highly regulated industries such as Financial Services, Government, and Telco – where it's essential to balance modernization with data privacy and compliance requirements.

Too late to the game

But by the time IBM finally shipped not one, but two different next-generation cloud infrastructure designs with support for virtual private cloud technology in 2019, it was too late.

Right around the time the parallel development efforts kicked off, many of IBM's longtime clients in heavily regulated industries like banking had begun to understand how they could operate safely on cloud services, and were looking for options. Buying enterprise technology is a lot like hiring a contractor for a home-improvement job: The only sensible thing to do is get a few bids.

Most major companies considering cloud services in 2017 (and today) would get a bid from AWS, given its leadership position in the market and track record of stability. In most cases, however, they would only get two additional bids: Microsoft and either IBM or Google Cloud.

And it was at this point that IBM Cloud staff began to realize that they had lost the opportunity to win that business, according to sources. When big businesses make an IT decision they're deciding which technology they are going to use for a significant number of years; it can take up to two years just to move complex operations to cloud services, and AWS started encouraging potential customers to sign multiyear contracts in exchange for pricing discounts around this time.

"I think the realization [for IBM] was, do we really want to do this?" Posey said. "Does it really make a whole lot of sense for us to build up all of this infrastructure to be sort of a general-purpose cloud or is there a better way to go?"

IBM Cloud simply wasn't competitive. Genesis was an attempt to move beyond SoftLayer's reputation as a hosting provider for small businesses. But it didn't work, and cost the company years before it rolled out a feature-competitive cloud service with the coveted — but by then table stakes — VPC technology.

Thanks to years of delays and mismanagement, IBM will never be a major public cloud player. Image: Christopher T. Fong / Protocol

There is some small hope for IBM more broadly to cling to in this story, though. One of Krishna's first acts when he took over IBM Cloud in January 2019, before he became CEO, was to end the double-track infrastructure design strategy and get the team to focus on a singular approach going forward, sources said. That gave employees familiar with the overall saga and his leadership confidence that he might yet be able to turn the company around.

But, thanks to years of delays and mismanagement, IBM will never be a major cloud player. It's not entirely clear how committed the company is to its public cloud service, which still has thousands of customers. In the past year it has suffered several major outages that have gone virtually unnoticed by the broader internet community, which is using services built on other clouds.

Sources were evenly divided on the long-term prospects of the group, although a steady decline in IBM's capital expenditures this year does not bode well for a capital-intensive business like cloud computing.

And while it has become clear even to AWS that hybrid and multicloud strategies will be popular for the foreseeable future, which does bode well for Red Hat's software business, cloud computing is growing at around 35% a year and generating enormous profits for its top two contenders.

IBM had everything in place to become a major cloud provider. But technology shifts like cloud computing don't come along every decade, and while IBM has survived every shift in technology since the 1930s, its inability to capitalize on that historic shift was a huge strategic oversight — and one that has left its status as an American technology icon hanging in the balance.

Correction: An earlier version of this story contained a misspelled version of Ginni Rometty's name. This story was updated on Sept. 30, 2021.
