What operational challenges should companies anticipate when deciding to use multiple cloud vendors?

Security vulnerabilities, data portability and specialized talent retention are among the challenges highlighted by members of Protocol's Braintrust.
CTO at Puppet
Today, executives look to the cloud to help them accelerate innovation. However, as more workloads move to public clouds, and even more stay on prem, executives also need to manage the risks associated with a growing application portfolio and an increasingly complex infrastructure.
No one wants to be that company that made the news for another massive data breach that was due to an easy-to-fix vulnerability, and no executive wants to spend millions of dollars because they weren't in compliance. As you move to multicloud, you have a larger and more complex infrastructure surface, which means an increased risk of security vulnerabilities and more complex issues with compliance.
As the number of applications scales, this is not a problem you can solve just by adding more people to the mix. At the scale most enterprises run, it is impossible to manage these risks effectively without automation. I can't tell you how many times I hear about organizations managing their security and compliance mandates in everyone's favorite database: Excel.
In short, automation is a requirement for large-scale multicloud deployments. I continue to see executives push their teams to use automation to identify and address risks and drift against a predefined set of baselines. When you can drive this level of automation, teams can spend more time on development and innovation initiatives.
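To make that concrete, here is a minimal sketch of what drift detection against a predefined baseline can look like. Everything in it is hypothetical and hard-coded for illustration — the baseline keys, values and node inventory are made up, and this is not Puppet's API, just the general pattern of comparing observed state to a declared baseline.

```python
# Hypothetical sketch of baseline drift detection: compare a declared
# baseline against the configuration observed on each node and report
# any deviation. Keys, values and nodes are illustrative placeholders.

BASELINE = {
    "ssh_password_auth": "disabled",
    "tls_min_version": "1.2",
    "audit_logging": "enabled",
}

def detect_drift(node_name: str, observed: dict) -> list[str]:
    """Return human-readable drift findings for one node."""
    findings = []
    for key, expected in BASELINE.items():
        actual = observed.get(key, "<missing>")
        if actual != expected:
            findings.append(
                f"{node_name}: {key} is {actual!r}, expected {expected!r}"
            )
    return findings

if __name__ == "__main__":
    # In practice the observed state would come from an agent or an
    # inventory service; it is hard-coded here to keep the sketch
    # self-contained.
    fleet = {
        "web-01": {
            "ssh_password_auth": "disabled",
            "tls_min_version": "1.2",
            "audit_logging": "enabled",
        },
        "web-02": {"ssh_password_auth": "enabled", "tls_min_version": "1.0"},
    }
    for node, state in fleet.items():
        for finding in detect_drift(node, state):
            print(finding)
```

At multicloud scale the same loop runs against thousands of nodes across providers, which is exactly why a spreadsheet can't keep up.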
At the end of the day, it's not the implementation of a multicloud environment that is hard (though it is certainly not simple); it's the Day 2 effort: the work that happens after an application is deployed. Day 2 includes, among many other things, operational resilience, security and compliance, and consistency across teams.
Regional CTO at AppDynamics, part of Cisco
Containerization and microservices enabled rapid cloud migration for many companies in the first months of the pandemic, compressing digital transformation timelines from years to months or even weeks. While many companies successfully migrated applications and data to the cloud, many are now experiencing the complexities of multicloud deployments. Lack of data portability, incompatible APIs and limited visibility can all affect IT managers' ability to control costs and operate efficiently.
By implementing cloud automation and orchestration tools that take advantage of artificial intelligence capabilities, IT teams can regain control of their multicloud ecosystems. Cloud automation handles tasks such as provisioning servers, backing up data or managing underutilized processes and resources. Cloud orchestration can then take low-level tasks across multiple platforms and combine them into more complex, end-to-end processes, saving the team from manually intervening in routine tasks. Once the development, security and operations teams have full visibility and can identify any gaps in the infrastructure, cloud orchestration can help ensure that cost overruns and underutilization of services no longer affect the bottom line.
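As one concrete example of such an automation task, here is a hedged sketch that flags underutilized EC2 instances using boto3 and CloudWatch. It assumes AWS credentials are already configured in the environment; the 10% CPU threshold and seven-day lookback are illustrative choices, not recommendations, and a real tool would do the equivalent on each cloud in the portfolio.

```python
# Sketch: flag running EC2 instances with low average CPU utilization.
# Assumes AWS credentials are configured for boto3; threshold and
# lookback window are illustrative.
from datetime import datetime, timedelta, timezone

import boto3

CPU_THRESHOLD = 10.0          # percent; tune per workload
LOOKBACK = timedelta(days=7)  # observation window

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - LOOKBACK,
            EndTime=now,
            Period=3600,  # one datapoint per hour
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points:
            avg = sum(p["Average"] for p in points) / len(points)
            if avg < CPU_THRESHOLD:
                # An orchestration layer might tag, resize or stop these.
                print(f"{instance_id}: avg CPU {avg:.1f}% over 7 days")
```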
General Manager, Cloud Automation at VMware
An effective cloud strategy comes not just from understanding the characteristics of applications and matching them to the characteristics of different clouds, but also from having the ability to deploy, manage and secure those applications across clouds. The intrinsic value of multicloud is flexibility and choice. But you sacrifice agility if you fail to address the biggest operational challenge of managing a multicloud environment: silos.
Building management capabilities in silos for each cloud causes duplication of effort, increases complexity, jeopardizes service quality and security, and risks poor performance and agility. A single, simple change to applications or infrastructure has far-reaching implications for organizations juggling multiple operations models for multiple clouds.
The benefits come from achieving operational consistency where it matters across multiple clouds.
The answer isn't a mythical "single pane of glass" for all services management. This approach didn't work in the data center when technologies changed every couple of years. It certainly won't work across multiple clouds where services change regularly.
The goal is to apply operational consistency across the areas of visibility, operations, automation, security and governance. Achieving consistent operations in these areas delivers efficiency, simplifies management, and mitigates risks around security, agility and complexity.
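As a small illustration of what consistent governance can mean in practice, the sketch below applies one tagging policy to resources from any cloud once they are normalized into a common record — one policy, many providers, instead of a silo per cloud. The record format and the required tags are hypothetical, not any vendor's schema.

```python
# Sketch: a single governance check applied uniformly across clouds.
# The CloudResource record and REQUIRED_TAGS policy are hypothetical.
from dataclasses import dataclass, field

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # illustrative

@dataclass
class CloudResource:
    provider: str       # e.g. "aws", "azure", "gcp"
    resource_id: str
    tags: dict = field(default_factory=dict)

def check_tag_policy(resources: list[CloudResource]) -> list[str]:
    """Apply the same tagging policy regardless of provider."""
    violations = []
    for r in resources:
        missing = REQUIRED_TAGS - r.tags.keys()
        if missing:
            violations.append(
                f"{r.provider}:{r.resource_id} missing {sorted(missing)}"
            )
    return violations

if __name__ == "__main__":
    inventory = [
        CloudResource("aws", "i-0abc123",
                      {"owner": "data-eng", "cost-center": "42",
                       "environment": "prod"}),
        CloudResource("gcp", "vm-frontend-1", {"owner": "web"}),
    ]
    for violation in check_tag_policy(inventory):
        print(violation)
```

The point isn't the tags themselves; it's that the policy lives in one place and the per-cloud differences are confined to the normalization step.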
CIO at Nutanix
Companies face multiple operational challenges when dealing with multiple cloud vendors, each affecting the organization's ability to efficiently consume capacity from multiple public clouds.
General Partner at Vertex Ventures U.S.
The unwitting wedding. With last generation's vendors, such as Cisco and IBM, it was obvious when you were taking advantage of proprietary features. In the cloud era, drawing the line between truly open (portable) and proprietary is much more challenging. For example, AWS and GCP offer different compute and storage units, making direct comparisons nearly impossible.
A level deeper is how the services themselves function. For example, the interfaces for Google Dataflow are proprietary and can subtly lock you in, with your data tucked behind cryptic export controls. Another example is how server instances behave differently between clouds: each takes advantage of different hardware platforms, configured in different ways, resulting in subtly different performance characteristics.
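To illustrate why the units don't line up, the sketch below normalizes instance offerings into price per vCPU-hour and per GiB-hour. The instance families are real, but the hourly prices are placeholders rather than current list prices, and even normalized numbers hide the deeper differences the paragraph above describes.

```python
# Sketch: normalize instance offerings for a rough cross-cloud comparison.
# Instance families are real; the hourly prices below are PLACEHOLDERS,
# not current list prices. Note memory units also differ in practice
# (GiB vs. GB), which this sketch glosses over.

OFFERINGS = [
    # (provider, instance type, vCPUs, memory in GiB, $/hour placeholder)
    ("aws", "m5.large", 2, 8.0, 0.096),
    ("gcp", "n2-standard-2", 2, 8.0, 0.097),
]

for provider, name, vcpus, mem_gib, price in OFFERINGS:
    print(
        f"{provider}/{name}: "
        f"${price / vcpus:.4f} per vCPU-hour, "
        f"${price / mem_gib:.4f} per GiB-hour"
    )

# Even with near-identical headline specs, discount models, burst
# behavior and underlying hardware make the "same" unit perform and
# cost differently on each cloud.
```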
Chief Cloud Strategy Officer and Managing Director at Deloitte Consulting LLP
There are a few things going wrong right now that need some attention.