Geographic expansion, the discovery of data silos, and vendor evaluation can all play a part in the decision, members of the Braintrust say.
Good afternoon! In today's Braintrust, we asked the experts to take us inside their processes for evaluating their tech stacks and the signals they look for to know that new vendors or new tech might be necessary. Questions or comments? Send us a note at email@example.com
CIO at Oracle
There are a number of catalysts that can become the tipping point for updating your tech stack.
When entering a new geographic market or introducing a new product, you need to align with local security and data residency policies. To stay compliant with the data and security regulations of the new market, you may need to update your technology stack.
Another major driver could be related to bottlenecks and constraints in your legacy environment that impact performance, scalability and agility. Modifying your architecture to a more scalable and performant platform allows you to handle increasing volumes of data and users while taking advantage of the latest product features.
Legacy environments can be constrained by siloed designs or a lack of flexibility. Ensuring that your application architecture adheres to open standards eases migration from on-premises to the cloud and allows adoption of services across multiple vendors.
Another critical tipping point is when you can't create a unified view for your customers and employees across your unstructured, semi-structured and structured data. An upgrade can provide the opportunity to set standards and APIs that let you analyze all your data types.
The more outdated your current technology is, and the more fraught with tech debt, the harder it is to manage. It's like buying an older home with outdated electrical: You can't put in new appliances until you deal with the wiring. These jobs are challenging, but maintaining a solid tech foundation is key to the success of any future projects.
CIO at Equinix
When should you consider updating parts of your technology stack? Continually. The longer you delay, the more complicated it gets.
Digital transformation is not a singular event. IT leaders must institutionalize a regular audit process to evaluate efficacy and security gaps, feeding into a holistic upgrade roadmap. It is not just about getting every solution right, but about choosing the right mix of solutions that operate in architectural harmony.
Scale your tech as the business grows: The technology stack can be a powerful business strategy enabler for companies to gain a competitive advantage. My philosophy is that a company should refresh its business process, operating model and technology stack every time it doubles its revenue.
Anchor your technology around customer and employee experiences: With growing digital maturity, employee and customer expectations evolve at an unprecedented pace. At its core, IT is all about human enablement and people consume technology differently now. This effectively means that the nature of digital infrastructure is changing — it's more software and security-oriented, distributed, on-demand and ecosystem-driven than ever before.
Keep your eyes on data insights: Data silos slow down digital leverage. Rearchitect your infrastructure to deliver more predictive, prescriptive and retrospective insights. An underlying integrated, data-oriented approach that supports future AI and ML use cases is a critical decision factor for stack upgrades.
In all this, security is the underlying tipping point to address with urgency. Also, stay abreast of industry innovations and trends to adopt new solutions that solve timely business problems.
Chief Product Officer at Hitachi Vantara
There are many factors that come into play when examining your infrastructure stack to determine when and how to modernize. There is no magic bullet or defined tipping point for a complete overhaul, and the knowledge needed to keep a tech stack operational is advancing faster than many internal teams' ability to keep up. Still, a modern tech stack is crucial for digital transformation, and there are several considerations when exploring your options.
One trend we are seeing is that customers are evaluating architectures when the complexity of their systems begins to interfere with their workflow. This applies to a variety of areas throughout the organization and can manifest in the inability to find, access and analyze your data quickly, or when the data you’re collecting is going unused altogether — something we call dark data. Applying DataOps methodologies throughout the business can have a profound impact on digital transformation and that all starts with the underlying data environment and connective data fabric.
Application infrastructure modernization is certainly a forcing function as well. As applications become increasingly distributed and performance, reliability and efficiency become paramount, cloud-native infrastructure will need to integrate seamlessly with traditional, on-premises assets.
Lastly, organizations are considering new models that offer significantly elevated value propositions as cloud-like operations and capacity management proliferate throughout the stack. This new as-a-service operating model will have a profound impact on an organization's ability to toggle between capex and opex models for improved agility and operations.
CIO at Freshworks
Most modern software companies like mine are stacked with SaaS-based applications — a world that’s always changing. That’s why constant evaluation of your tech stack is key to ensuring that you have the right tools to empower your people and enable your business process.
My advice is to update early and often. You might be tempted to go for a two- to three-year contract to "save" on costs, but the truth is you won't. A shorter contract obligates you to evaluate your tech stack more frequently. It also holds the vendor accountable to earn your business year after year, and helps guard against vendor complacency.
Things to consider when deciding upon contract renewal:
- Has your business process changed dramatically? While the tool itself might be great, it may no longer enable your business process.
- Has the vendor stopped innovating, making the tool outdated or complicated for users?
- Has your business outgrown the tool? If your company went from 100 employees to 1,000, chances are it has!
SVP & Chief Product Officer at Splunk
There are three drivers that push Splunk customers to reevaluate portions of their tech stack: scale, security and cost. Typically, a service outage caused by the inflexible scaling of a monolithic client-server architecture triggers a top-down cloud transformation initiative to move to a microservices model built on cloud-native technology and processes. This is a sweet spot for Observability Cloud at Splunk. From a security perspective, application modernization is a big driver: Outdated code tends to accumulate vulnerabilities, especially if it is built on open source packages without credible commercial support and maintenance. Lastly, we find that customers locked into perpetual license contracts with traditional software vendors move to a hybrid cloud pay-as-you-go subscription model aligned to utility and usage patterns, enabling them to lower costs and phase workloads to the cloud at their own pace.
SVP & CIO at Principal Financial Group
Principal has 49 million customers globally, and helping them feel more financially secure is our company’s purpose. So, when we identify new ways to deliver on that purpose, we look to make changes that create more personalized, cohesive and connected experiences for our customers.
Our technology strategy starts with our business strategy, and our customer is at the center of both. When we evaluate new technology or any potential updates to current technology, the first and most important question we ask is: “Does this enable the outcomes our business is driving toward?” Technology at Principal is critical to providing market-specific innovation, enabling growth and delivering features that our customers appreciate. Our cloud-first strategy provides speed, scale and security across our diverse businesses.
An additional key consideration as we evaluate the evolution of our tech stack is how we move to a simpler, rationalized application footprint. You can't trick your customers into a good experience: If your technology is not simple, they see through the façade to the complexity behind the scenes.
It all goes back to the customer — what serves them best serves the business best — so we should always be evaluating through that lens.
Senior Vice President of Infrastructure at Target
Technology has never been more critical to business success. Target has invested heavily in our technology strategy, and we accelerated our journey to a multi-hybrid cloud infrastructure in support of the company’s stores-as-hubs model announced in 2017. To make the model work, Target redesigned operations around an omnichannel store experience at scale.
We embraced code ownership as the strongest way to compete. Consequently, our engineers modernized our application stack on an event-driven, microservices architecture rather than lift and shift legacy apps to the cloud. We modularized and rewrote applications, leveraged open source and even built our own homegrown cluster management platform — Target Application Platform — designed to operate workloads and manage clusters across the hybrid cloud, making it easier for app developers to configure a pipeline and deploy a binary to Target’s stores, data centers and to the public cloud.
This purpose-built tech stack has been an instrumental part of Target's outsized growth in recent years and has helped enable and scale new omnichannel capabilities in service to our guests, whether a contactless in-store experience or same-day services like Drive Up and Order Pickup.
Vice President of Cloud Experience at Linode
A lot of companies are at that tipping point, partly because of the rapid response to the pandemic and partly because of the overall pace of the shift to the cloud. So while there are a number of considerations up the stack, some of the most important signals come from the infrastructure layer.
The cloud has become so integral to everything we do that even basic requirements we take for granted, like reliability, resiliency and security, have become more critical than ever. If there is any single point of failure in your stack, it's likely a good time to make an upgrade or a shift. It can be anything from relying on proprietary tools that are not easily replaceable for critical parts of your operation, to being unable to manage a spike in web traffic or defend against a DDoS attack. Even at the personnel level, being unable to maintain your infrastructure if your best engineer leaves the company is a problem that needs to be addressed. Either way, if a single point in your infrastructure can take your service down, it likely will.
Companies are also starting to re-evaluate how they’ve allocated spending on their stack, especially their cloud infrastructure. They spent quickly out of necessity. Now, they are starting to reassess how and where they spent and using that to find the best fit for their business.
If we’ve learned anything over the past two years, it’s that change happens, often faster than we expect, and you need to be agile. Too often tech upgrades are executed reactively, when something slows down, becomes inefficient or breaks altogether. Infrastructure investments should be made proactively, solving challenges before they become problems. At the highest level, your livelihood depends on your technology working for the customers that rely on it. That’s when a combination of high reliability, scalability and redundancy matters most.
SEVP & CIO at Global Payments
Global Payments has always invested heavily in technology. Since 2020, Global Payments has entered into unique partnerships with Google Cloud and AWS to deliver scaled payments innovation to millions of businesses around the world. These updates to our tech stack occurred because we wanted to lead the move to the cloud ahead of client demands and ahead of other players in our space. For example, TSYS, our issuing business, serves many highly regulated financial institutions that need an experienced partner that can orchestrate a move to the cloud while minimizing disruption.
You could say the tipping point that caused us to enter the partnerships with Google and AWS occurred when we realized that we are uniquely positioned to help introduce clients — across all our businesses — to leading-edge cloud technologies while continuing to help them operate their businesses seamlessly. We also knew that failure to act could limit our ability to meet future business needs and customer expectations.
In addition to serving existing clients better, these technologies have also provided us with an even more compelling value proposition for a host of other businesses, from startups to fintechs, neobanks, retailers and more.
Kevin McAllister (@k__mcallister) is a Research Editor at Protocol, leading the development of Braintrust. Prior to joining the team, he was a rankings data reporter at The Wall Street Journal, where he oversaw structured data projects for the Journal's strategy team.