Braintrust
Protocol's experts on the biggest questions in tech.

What operational challenges should companies anticipate when deciding to use multiple cloud vendors?

Security vulnerabilities, data portability and specialized talent retention are among the challenges highlighted by members of Protocol's Braintrust.


Abby Kearns

CTO at Puppet

Today, executives look to the cloud to help them accelerate innovation. However, as more workloads move to public clouds, and even more stay on prem, executives also need to manage the risks associated with a growing application portfolio and an increasingly complex infrastructure.

No one wants to be that company that made the news for another massive data breach that was due to an easy-to-fix vulnerability, and no executive wants to spend millions of dollars because they weren't in compliance. As you move to multicloud, you have a larger and more complex infrastructure surface, which means an increased risk of security vulnerabilities and more complex issues with compliance.

As the number of applications scales out, this is not a problem you can solve simply by adding more people. At the scale most enterprises run, it is impossible to manage these risks effectively without automation. I can't tell you how many times I hear about organizations managing their security and compliance mandates via everyone's favorite database: Excel.

In short, automation is a requirement for large-scale multicloud deployments. I continue to see executives push their teams to use automation to identify and address risks and drift against a predefined set of baselines. When you can drive this level of automation, teams can spend more time on development and innovation initiatives.
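
To make the baseline-and-drift idea concrete, here is a minimal sketch in Python. The baseline keys, resource names and the fetch_live_config stub are hypothetical placeholders; a real pipeline would pull live state from each cloud's API or a configuration-management tool such as Puppet.

```python
# A minimal, hypothetical drift check. BASELINE, the resource names and
# fetch_live_config() are placeholders; real state would come from each
# cloud provider's API or a configuration-management tool.

# Desired state: one baseline shared by every environment, on any cloud.
BASELINE = {
    "ssh_open_to_world": False,
    "encryption_at_rest": True,
    "logging_enabled": True,
}

def fetch_live_config(resource_id: str) -> dict:
    """Placeholder: a real version would query AWS, Azure or GCP here."""
    return {
        "ssh_open_to_world": True,  # drifted from baseline
        "encryption_at_rest": True,
        "logging_enabled": True,
    }

def find_drift(resource_id: str) -> dict:
    """Return every setting that differs from the baseline."""
    live = fetch_live_config(resource_id)
    return {
        key: {"expected": want, "actual": live.get(key)}
        for key, want in BASELINE.items()
        if live.get(key) != want
    }

for resource in ["vm-web-01", "vm-db-01"]:  # hypothetical inventory
    drift = find_drift(resource)
    print(f"{resource}: {'DRIFT ' + str(drift) if drift else 'compliant'}")
```

Run against a real inventory, a report like this is what lets teams remediate drift continuously instead of auditing by spreadsheet.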

At the end of the day, it's not the implementation of a multicloud environment that is hard (though it is certainly not simple). It's the Day 2 efforts: the work that happens after an application is deployed. Day 2 includes, among many other things, operational resilience, security and compliance, and consistency across teams.

Steven Long

Regional CTO at AppDynamics, part of Cisco

Containerization and microservices enabled rapid cloud migration for many companies in the first months of the pandemic, accelerating digital transformation timelines from years to condensed months or weeks. While many companies successfully migrated applications and data to the cloud, many are now experiencing the complexities of multicloud deployments. Lack of data portability, incompatible APIs or limited visibility can all affect IT managers' ability to effectively control costs and efficiencies.

By implementing cloud automation and orchestration tools that take advantage of artificial intelligence capabilities, IT teams can regain control of their multicloud ecosystems. Cloud automation handles tasks such as provisioning servers, backing up data or managing underutilized processes and resources. Cloud orchestration can then take low-level tasks across multiple platforms and streamline them into more-complex processes, saving the team from manually intervening in routine tasks. Once the development, security and operations teams have full visibility and can identify any gaps in the infrastructure, cloud orchestration can help ensure that cost overruns and underutilization of services no longer affect the bottom line.
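
As a rough illustration of those automation tasks, the sketch below uses AWS's boto3 SDK to back up tagged volumes and flag underutilized instances. The backup tag, the seven-day window and the 5% CPU threshold are illustrative assumptions, and in a multicloud setup an orchestration layer would run equivalent tasks on each platform.

```python
"""Sketch of two automation tasks: backing up data and flagging
underutilized resources. Assumes AWS with the boto3 SDK and configured
credentials; the backup tag, 7-day window and 5% CPU threshold are
illustrative choices, not prescriptions."""
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def snapshot_tagged_volumes(tag_key="backup", tag_value="daily"):
    """Back up every EBS volume carrying the (hypothetical) backup tag."""
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}]
    )["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(VolumeId=vol["VolumeId"],
                            Description="automated daily backup")

def find_idle_instances(cpu_threshold=5.0):
    """Flag running instances whose daily average CPU stayed under the
    threshold all week: candidates for rightsizing or shutdown."""
    idle, now = [], datetime.now(timezone.utc)
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            points = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId",
                             "Value": inst["InstanceId"]}],
                StartTime=now - timedelta(days=7),
                EndTime=now,
                Period=86400,  # one datapoint per day
                Statistics=["Average"],
            )["Datapoints"]
            if points and max(p["Average"] for p in points) < cpu_threshold:
                idle.append(inst["InstanceId"])
    return idle
```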

Purnima Padmanabhan

General Manager, Cloud Automation at VMware

An effective cloud strategy comes not just from understanding the characteristics of applications and matching them to the characteristics of different clouds, but also having the ability to deploy, manage and secure those applications across clouds. The intrinsic value of multicloud is flexibility and choice. But you risk agility if you fail to address the biggest operational challenge for managing a multicloud environment: silos.

Building management capabilities in silos for each cloud causes duplication of effort, increases complexity, jeopardizes service quality and security, and risks poor performance and agility. A single, simple change to applications or infrastructure has far-reaching implications for organizations juggling multiple operations models for multiple clouds.

There are real benefits to achieving operational consistency where it matters across multiple clouds.

The answer isn't a mythical "single pane of glass" for all services management. This approach didn't work in the data center when technologies changed every couple of years. It certainly won't work across multiple clouds where services change regularly.

The goal is to apply operational consistency across the areas of visibility, operations, automation, security and governance. Achieving consistent operations in these areas delivers efficiency, simplifies management, and mitigates risks around security, agility and complexity.
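
One way to picture that consistency is a single governance policy evaluated through thin per-cloud adapters, as in the Python sketch below. It assumes the boto3 and azure-mgmt-resource SDKs with credentials already configured; the required-tags rule is a made-up example, not a description of any VMware product.

```python
"""Illustrative sketch: one governance rule (required tags) checked the
same way on two clouds through thin adapters. Assumes the boto3 and
azure-mgmt-resource SDKs with credentials configured; the required-tags
policy is a made-up example."""
import boto3
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAGS = {"owner", "cost-center"}  # one policy for every cloud

def aws_resources():
    """Yield (resource_id, tag_keys) pairs from AWS's tagging API."""
    client = boto3.client("resourcegroupstaggingapi")
    for page in client.get_paginator("get_resources").paginate():
        for res in page["ResourceTagMappingList"]:
            yield res["ResourceARN"], {t["Key"] for t in res.get("Tags", [])}

def azure_resources(subscription_id):
    """Yield (resource_id, tag_keys) pairs from an Azure subscription."""
    client = ResourceManagementClient(DefaultAzureCredential(),
                                      subscription_id)
    for res in client.resources.list():
        yield res.id, set((res.tags or {}).keys())

def untagged(resources):
    """The policy itself is evaluated identically, whatever the cloud."""
    return [(rid, REQUIRED_TAGS - tags)
            for rid, tags in resources
            if not REQUIRED_TAGS <= tags]

# Usage: untagged(aws_resources()) and untagged(azure_resources(sub_id))
# produce the same report shape, so governance stays consistent without
# a "single pane of glass" trying to wrap every cloud-specific service.
```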


Wendy M. Pfeiffer

CIO at Nutanix

Companies face multiple operational challenges when dealing with multiple cloud vendors, each affecting the organization's ability to efficiently consume capacity from multiple public clouds:

  • Application mobility and shared standards: Because each public cloud vendor uses different underlying technology and operating standards, porting applications to other public clouds is not simple. Application mobility requires significant work via containerization/microservices technologies such as Docker or Kubernetes or significant abstraction related to the application's use of data, compute, storage, network and location services. Many application vendors will rely on companies to create significant bridge code or operational processes to enable their applications to run anywhere.
  • Manageability and automation: Operating services in public clouds requires that both application and infrastructure-layer services be software-defined, meaning operators must write new code to spin up a server, configure a network or offer geolocated performance enhancements (see the sketch after this list). Writing and deploying performant and secure automation code for these key services can be resource-intensive in the early going. In IT, we like to write code once and reuse it many times; however, the fundamental differences in infrastructure between public cloud vendors make it impossible to repurpose code across environments.
  • Skills and integration: As with unique hardware environments, each public cloud vendor's distinctive technology has driven the need to employ environment-specific experts with skills that don't often translate across vendor environments. Companies often underestimate the degree of specialization — and the size of the talent budget — required to operate within multiple clouds. The real-world analogy: Being fluent in Spanish doesn't mean I'll be fluent in French. Both spring from the same linguistic base, but each has diverged in usage, pronunciation and practice, and specialized training is required to use both.
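
To illustrate the provisioning point, here is a hedged sketch of the same "spin up a server" task written twice, once against AWS with boto3 and once against GCP with google-api-python-client. The image IDs, project, zone and machine types are placeholders; the point is how little the two paths share.

```python
"""Sketch: the same "spin up a server" task against two clouds.
Assumes boto3 and google-api-python-client with credentials configured;
image IDs, project/zone names and machine types are placeholders."""
import boto3
from googleapiclient import discovery

def launch_aws_server(image_id="ami-0123456789abcdef0"):
    """AWS path: one SDK call, AWS-specific units (AMIs, instance types)."""
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(ImageId=image_id, InstanceType="t3.micro",
                             MinCount=1, MaxCount=1)
    return resp["Instances"][0]["InstanceId"]

def launch_gcp_server(project="my-project", zone="us-central1-a"):
    """GCP path: a REST body with GCP-specific units (machine types, images)."""
    compute = discovery.build("compute", "v1")
    body = {
        "name": "demo-vm",
        "machineType": f"zones/{zone}/machineTypes/e2-micro",
        "disks": [{
            "boot": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }
    op = compute.instances().insert(project=project, zone=zone,
                                    body=body).execute()
    return op["name"]

# Neither function's calls, parameters or units transfer to the other
# cloud, which is why multicloud teams end up writing and maintaining a
# provider-neutral wrapper layer (or bridge code) on top of both.
```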

Jonathan Heiliger

General Partner at Vertex Ventures U.S.

The unwitting wedding. With last generation's vendors, such as Cisco and IBM, it was obvious when you were taking advantage of proprietary features. In the cloud era, mapping the line between truly open (portable) and proprietary is much more challenging. For example, AWS and GCP offer different compute and storage units, making direct comparisons nearly impossible.

A level deeper is how the services themselves function, e.g., the interfaces for Google Dataflow are proprietary and can subtly lock you in, with your data tucked behind cryptic export controls. Another example is how server instances themselves function differently between clouds; each takes advantage of different hardware platforms, which are configured differently, resulting in subtly different performance characteristics.
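
A toy Python sketch of why those unit comparisons mislead: normalizing price per vCPU and per GiB makes two offerings look interchangeable even when the underlying hardware is not. All figures below are invented placeholders, not real price-list data.

```python
"""Sketch: naively normalizing two clouds' compute units. Every number
is a made-up placeholder; real prices and specs vary by region,
commitment and hardware generation, which is the point about why direct
comparisons are nearly impossible."""

# Hypothetical on-demand offerings: (vCPUs, RAM in GiB, USD per hour).
OFFERINGS = {
    "aws:m5.large":      {"vcpus": 2, "ram_gib": 8.0, "usd_hr": 0.096},
    "gcp:n2-standard-2": {"vcpus": 2, "ram_gib": 8.0, "usd_hr": 0.097},
}

for name, o in OFFERINGS.items():
    per_vcpu = o["usd_hr"] / o["vcpus"]
    per_gib = o["usd_hr"] / o["ram_gib"]
    print(f"{name}: ${per_vcpu:.4f}/vCPU-hr, ${per_gib:.4f}/GiB-hr")

# Even with the arithmetic done, the comparison stays apples to oranges:
# a GCP vCPU and an AWS vCPU run on different hardware, schedulers and
# network fabrics, so equal "units" do not mean equal performance.
```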

David Linthicum

Chief Cloud Strategy Officer and Managing Director at Deloitte Consulting LLP

There are a few things going wrong now that need some attention, including:

  • Operational tools, such as management and monitoring tools, are now so numerous that they themselves add operational complexity, which ends in human error causing operational issues.
  • The number of resources needed to drive CloudOps is underestimated. As we move to largely heterogeneous multicloud deployments, the number of resources under management has tripled in the last four years, while ops staffing has remained the same.
  • Security is often an afterthought, and risks and vulnerabilities are beginning to emerge, given that there are often no common security solutions across clouds and that the native security services of each public cloud typically won't scale across a multicloud estate.

See who's who in Protocol's Braintrust (updated Oct. 28, 2020).

Questions, comments or suggestions? Email braintrust@protocol.com
