Multicloud strategies, application performance metrics and microservices are three areas Protocol Braintrust members say still have room to grow.
COO at VMware
Today, cloud technology means companies can adapt at speed and scale, accelerate innovation, drive business agility, streamline operations and reduce costs. But customers are only in the early innings of driving their hybrid cloud computing journeys and balancing the best aspects of private and public clouds. Over the course of the next decade, there will be a shift of more workloads to the public cloud, but there will also be a shift of workloads to the edge. This perfect balance of private cloud, public cloud and the edge is the multicloud journey customers, large and small, will seek to achieve.
As companies increasingly look to accelerate their journeys to a multicloud world, we believe there will also be a significant increase in modernization of their applications. Many of these apps were built decades ago for client-server technologies, but the advent of cloud and container technologies enables a significantly faster application development cycle that is only beginning to take off. This has significant upside.
The transformation of the employee experience to a work-from-anywhere economy means digital technologies need to be cloud-optimized for everything from unified collaboration (video, audio, messaging) to employee onboarding to cybersecurity. Cloud computing will lead to new approaches in digital workspace and security that were never possible before.
The most important aspect of all this change will be on people and processes. Leaders who embrace these changes will be the pioneers of the future, and the laggards will get left behind. Given its myriad benefits, we expect to see an acceleration of cloud activities and adoption to come. We are only at the early stages of a massive digital transformation revolution.
Wendy M. Pfeiffer
CIO at Nutanix
Given the remote nature of resources in cloud computing environments, software developers must write code to remotely configure, secure and scale that infrastructure. And, although software developers are brilliant at writing code, they are less brilliant at building performant, secure and managed infrastructure. This has led to significant cybersecurity, financial and operational challenges associated with running applications in the public cloud, even as those applications have benefited from the scale-out technology on which public clouds are based.
Enter IT: an organization known for its efficiency and effectiveness at running infrastructure operations, but not known for its ability to write effective software code. With the advent of today's low-code and no-code tools, IT now has the ability to compose infrastructure as code.
When IT creates this software-defined infrastructure, it is purpose-built to make efficient use of cloud compute, storage and network resources, while cost-effectively scaling as performance needs require. This next wave of IT-led cloud computing, fueled by the availability of hybrid cloud operating systems and low-code tools, has the potential to transform cloud computing into one of several viable options in a company's resource consumption portfolio.
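The core pattern behind this kind of software-defined infrastructure is declarative: IT states the desired shape of the environment once, and a reconciliation step computes whatever actions are needed to converge the actual deployment onto it. The sketch below is a minimal, self-contained illustration of that idea in Python; all names (the `DESIRED` spec, `reconcile`, the service labels) are invented for illustration and do not correspond to any specific tool.

```python
# Illustrative sketch of declarative infrastructure as code: a desired
# state is declared once, and reconcile() computes the minimal set of
# actions needed to make the actual deployment match it.

DESIRED = {
    "web": {"replicas": 3, "cpu_limit": "500m"},
    "cache": {"replicas": 1, "cpu_limit": "250m"},
}

def reconcile(desired: dict, actual: dict) -> list[str]:
    """Return the actions needed to converge actual state onto desired."""
    actions = []
    for name, spec in desired.items():
        current = actual.get(name, {"replicas": 0})
        delta = spec["replicas"] - current["replicas"]
        if delta > 0:
            actions.append(f"scale-up {name} +{delta}")
        elif delta < 0:
            actions.append(f"scale-down {name} {delta}")
    for name in actual:
        if name not in desired:
            actions.append(f"teardown {name}")
    return actions

# One service under-provisioned, one no longer wanted:
actions = reconcile(DESIRED, {"web": {"replicas": 1}, "old-batch": {"replicas": 2}})
print(actions)  # ['scale-up web +2', 'scale-up cache +1', 'teardown old-batch']
```

Because the spec is data rather than imperative scripts, the same declaration can be applied repeatedly and safely, which is what lets scaling track performance needs rather than manual intervention.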
Chief Cloud Strategy Officer at Deloitte Consulting LLP
The challenge emerging is really about dealing with the operational complexity around leveraging multicloud deployments. Most Global 2000 enterprises can't shut down existing legacy systems, and these systems must work and play well with the new cloud-based systems. Thus, the operational complexity becomes overwhelming.
The cloud computing technology that remains underdeveloped, as well as underappreciated, is technology that can manage and operate these complex cloud and noncloud deployments using automation layers, some of which leverage AI/ML and advanced analytics. The idea is that these tools place an abstraction layer between the underlying complexity and the CloudOps and IT Ops teams charged with dealing with it during operations.
They do this by automating operational processes, freeing human beings from constantly watching thousands of systems and data points, each with its own interfaces and APIs. Operations teams hit a tipping point where the number of systems under management, and the differing purposes and interfaces of those systems, become so overwhelming that mistakes are made, causing outages and data breaches.
Emerging tool categories that can provide this automation are cloud management platforms, cloud services brokers and AIOps tools, to name just three. These tools hold the most promise for not only providing automation, but also learning from the massive amounts of data that are coming from the cloud and traditional systems under management. AIOps tools have the potential of spotting issues before they cause major outages, and even autocorrecting those issues when found.
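The detection side of what AIOps tools promise can be reduced to a simple statistical kernel: learn a baseline from recent telemetry and flag readings that deviate sharply from it. The sketch below uses a trailing-window z-score on latency samples; real AIOps platforms use far richer models, and the window, threshold and data here are purely illustrative.

```python
# Minimal anomaly-detection sketch: flag metric readings that deviate
# more than `threshold` standard deviations from a trailing baseline.
from statistics import mean, stdev

def detect_anomalies(series: list[float], window: int = 5,
                     threshold: float = 3.0) -> list[int]:
    """Return indices whose value is anomalous vs. the preceding window."""
    flagged = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

latency_ms = [101, 99, 102, 100, 98, 103, 101, 240, 100, 102]
print(detect_anomalies(latency_ms))  # [7] -- the 240ms spike is flagged
```

Catching the spike at index 7 while the surrounding jitter passes silently is exactly the "spot issues before they cause major outages" behavior described above, scaled down to a dozen lines.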
Principal Technical Evangelist at F5 Office of the CTO
Cloud computing has produced an operating model that enables consistent, repeatable and scalable processes. Those processes deliver business value to organizations by enabling the rapid deployment and easy operation of applications. What cloud hasn't yet developed is the ability to easily — or rapidly — troubleshoot problems with those applications.
One of the top challenges cited by consumers of cloud computing is a lack of visibility, i.e., access to telemetry (data) about application performance and security. The irony is that the network, infrastructure, platforms and services that constitute the cloud can produce this data in vast quantities.
Cloud, however, does not provide an easy means to access such data in a way that makes it helpful to quickly pinpoint problems and, more importantly, resolve them. The ability to analyze data from every layer of the application stack — including that of cloud infrastructure and services — is important to the security and performance of applications, and, by extension, the businesses that deploy them in the cloud. Visibility remains one of the most underdeveloped — and desirable — capabilities in the cloud.
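The gap described above is less about collecting telemetry than about stitching it together across layers. One common technique is to join events from every layer on a shared request identifier, so a single slow request can be traced end to end. The sketch below illustrates that join in Python; every event, layer name and timing is invented for illustration.

```python
# Hedged sketch: correlate telemetry from app, platform and infrastructure
# layers on a shared request ID, then find the slowest stage per request.
from collections import defaultdict

events = [
    {"layer": "app",      "request_id": "r-42", "stage": "handler", "ms": 12},
    {"layer": "infra",    "request_id": "r-42", "stage": "disk-io", "ms": 480},
    {"layer": "platform", "request_id": "r-42", "stage": "queue",   "ms": 30},
    {"layer": "app",      "request_id": "r-43", "stage": "handler", "ms": 9},
]

def slowest_stage_per_request(events: list[dict]) -> dict[str, str]:
    """Group events by request ID and name the stage with the worst latency."""
    by_request = defaultdict(list)
    for e in events:
        by_request[e["request_id"]].append(e)
    return {
        rid: max(evts, key=lambda e: e["ms"])["stage"]
        for rid, evts in by_request.items()
    }

print(slowest_stage_per_request(events))
# {'r-42': 'disk-io', 'r-43': 'handler'}
```

Without the shared `request_id` tying the layers together, the 480ms of disk I/O behind request r-42 would be invisible to the application team, which is precisely the visibility problem described above.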
General Partner at Vertex Ventures US
Cloud computing as a whole still has an underappreciated upside. Looking at it in parts, as we all have for so long, limits its value. The more information, people, management, etc., that we move to the cloud, the more opportunity is unlocked.
This is just another way to say that the more data that exists in the cloud, the more insights we can gather from that data, which will (and does) result in better, faster decision-making. Any company enabling this collective intelligence is best positioned to become the next enterprise giant.
VP, Products and BD at Kontain; Former CEO at RackWare
I believe truly going cloud native with the use of microservices is still quite underdeveloped and also underappreciated by a majority of larger enterprises. There has been a lot of hype around this topic over the past several years, which has led to the increased use of containers. However, the majority of enterprises have simply packaged their existing (monolithic) applications in containers, and this won't deliver the benefits and promise of containers. In reality, the benefits of using microservices come from rearchitecting applications or moving to modern application architectures.
To do this, each application must separate its stateful components from its stateless ones, and the stateless portion should be broken into single-function pieces: small, mobile and ephemeral. Orchestration can then be brought in to manage the software, schedule resources and build in redundancy.
Microservices can then talk to each other through a distributed network, via standard API/interfaces. This will lead to breaking the interdependencies in software code and software teams. Short-term benefits will be accelerating the time to market of applications and new features, and long-term benefits will be a shift in the culture of the organization for future development and growth.
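The stateful/stateless split described above can be shown in a few lines: the service function itself holds no state, so any replica can serve any request and instances can be created or killed freely, while state lives behind a narrow store interface. The sketch below is illustrative only; the in-memory `CartStore` stands in for an external database, and none of the names correspond to a specific framework.

```python
# Minimal sketch of the stateful/stateless split. The handler is a pure
# read-transform-write function; all state lives behind the store.

class CartStore:
    """Stateful layer: in production this would be an external database."""
    def __init__(self) -> None:
        self._carts: dict[str, list[str]] = {}

    def load(self, user_id: str) -> list[str]:
        return list(self._carts.get(user_id, []))

    def save(self, user_id: str, items: list[str]) -> None:
        self._carts[user_id] = items

def add_to_cart(store: CartStore, user_id: str, item: str) -> list[str]:
    """Stateless, single-function service: safe to replicate and restart."""
    items = store.load(user_id)
    items.append(item)
    store.save(user_id, items)
    return items

store = CartStore()
add_to_cart(store, "u1", "book")
print(add_to_cart(store, "u1", "pen"))  # ['book', 'pen']
```

Because `add_to_cart` keeps nothing between calls, an orchestrator can run three copies or thirty, and a crashed instance loses nothing, which is what makes the pieces "small, mobile and ephemeral" in practice.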
Organizations will be able to react faster and more successfully to the changing economy and thrive in what we are calling the new normal work environment. There is, of course, an initial investment, but that can quickly pay off with greater agility and flexibility, resulting in top-line growth. Greater resiliency will be built into applications, infrastructure and the organization, leading to bottom-line savings.
Managing Partner at Menlo Ventures
The shift to the cloud is not only inexorable, it's accelerating. Cost and even the COVID pandemic are factors, but creativity is a bigger one.
As with most things in life, interesting things happen when you take away a constraint. Developers and operations people faced the constraint of getting access to everything from a physical server to a database to an app or web server. Now, those constraints are gone, as important technology building blocks like open-source (Kafka, Kubernetes), APIs (Stripe, Twilio, et al.), frameworks like Jamstack, and DevOps tools like Harness have exploded.
The amount of technology it takes to build applications of any kind is stunning, and while a lot of the conversation about the cloud is about cost, efficiency and ease of use, the real win is creativity. How do you allow more developers to create faster and avoid spending time on functions that are table stakes versus their unique value-add? This is the big win. The API economy and open-source have saved so much time, and a globally connected group of developers helping each other is a force that will only grow in the coming years.
Regional CTO at AppDynamics, part of Cisco
Real-time analysis and insights can not only save resources and prevent downtime for companies, but they can also keep the IT and development, security and operations teams more informed. As the global COVID-19 pandemic continues, IT organizations are under incredible new pressures: in a recent AppDynamics survey, 81% of technologists said COVID-19 has created the greatest technology pressure their organization has ever experienced. Leaders need to trust their IT and development, security and operations teams to identify the needs, find the solutions and adapt them to what works best.
For example, the use of containers and microservices for most common cloud computing tasks has been growing, and those tools are even more important now. Cloud computing has created a massive ecosystem of interconnected applications and services that we're only beginning to tackle. IT budgets and staff are currently strained, so implementing the right tools for orchestration and visibility across this "new normal" network infrastructure can take previously underappreciated microservices and containers through a successful digital transformation and streamline work for IT teams.
Kevin McAllister ( @k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.