
Fault-finding, not firefighting: Why observability is the new monitoring

Understanding software performance is an extremely important — and complex — undertaking for the modern enterprise. Simply watching the meter no longer works.

There's a lot to keep track of in modern software.

Image: Alexander Sinn/Kwamina2

No two unhappy complex systems are alike: Each is unhappy in its own way. A growing line of business in software development, observability seeks to understand how and why modern software applications and teams become unhappy in order to set them on a path toward happiness, uptime and profit.

An evolution of monitoring software — which became popular during the rise of Web 2.0 applications and spawned companies such as Splunk, Datadog, New Relic and SolarWinds — observability takes the idea of simply watching IT systems a step further. While it's helpful to have dashboards that let administrators determine the health and performance of their applications at a glance, observability advocates believe what modern businesses really need are tools that help them understand the root cause of software issues.

"To help me build better software, you can't just do everything reactively anymore," Bill Staples, president of New Relic, told Protocol. "If you work reactively in today's cloud environment, you're firefighting constantly."

The idea is that it's better for software developers to understand exactly which part of their code is causing a problem, and why, than to rely on alerts that flag problems but require painstaking analysis to identify the cause. The time saved matters: Forward-thinking software organizations move quickly these days, preferring to deploy small changes to their code frequently rather than shipping big changes at a slower pace.

"If it takes you two months to ship your code, you're probably not high enough to ride this ride," said Charity Majors, co-founder and CTO of Honeycomb, one of several startups pushing the boundaries of this emerging field.

But it's not just coding: Observability tools can also help companies understand how their people are performing, and how the structure of their organization might be causing more problems than it is solving.

"The tech metrics don't mean anything without understanding the pressures that people are under when they are building the systems," said Nora Jones, co-founder and CEO of startup Jeli.

Watching, waiting, commiserating

Administrators have been monitoring the performance of computers since the first was plugged into a wall. But the modern concept of application performance management started to come together alongside the wave of enterprise software innovation that came out of the Great Recession.

As new SaaS tools became some of the most important operational tools inside businesses, performance — always important — took on new meaning. Customers increasingly had expectations for how software delivered over the web should perform, and because many of these application vendors built their services atop metered cloud computing platforms from AWS and others, they had to be keenly aware of how many computing resources they were consuming on the back end.

The growing consensus around the value of frequent deployment meant that software developers needed tools to quickly measure the impact of those changes so they could pull back a change that introduced a new problem, said John Allspaw, co-founder of Adaptive Capacity Labs, who played key roles at Yahoo's Flickr and later Etsy during the period in which monitoring became table stakes.

"There was a period of time where some companies totally got [the idea of frequent deployment], and can't imagine working a different way and other companies can't even imagine why you would even try to deploy more than once a day," he said. That latter group gets the benefits of continuous deployment now, he said, which has lifted the fortunes of companies like JFrog, CircleCI and CloudBees, which have all built businesses around making software pipelines more efficient..

But early monitoring tools used to study software once it was deployed were passive, and didn't provide a complete picture of how an application was performing.

"Maybe 10 years ago, the way things would work is developers would write the application and then hand it off to an IT pro, who would probably deploy it onto a server in a data center," Staples said. "Meanwhile, they hope that the system keeps up and running and the IT pro will tell them if something breaks — otherwise, they just go on to the next feature."

A decade later, that approach will not fly. Companies no longer separate software development from operations, a shift known as DevOps that calls for closer cooperation between the teams and forces developers to be more aware of the impact of their changes.

One of the big risks of making changes to a monolithic application was the chance of causing a difficult-to-detect problem in a completely unrelated part of the app. Microservices changed that, allowing developers to break their applications down into lots of smaller pieces that can be operated and tweaked separately. At the same time, cloud adopters started deploying their applications in containers, which let them run across a wide range of servers.

Something more sophisticated was required to understand how all of that was working.

Disturbance in the system

"Observability is all about looking at [the application]; shipping the code and looking through instrumentation to know if it is doing what I expected it to do," Majors said.

Honeycomb's approach was modeled after the notion of control theory in mechanical engineering, she said. Its tools give operations engineers a way to build instrumentation into their code to flag problems as they happen, allowing them to discover exactly where something has gone wrong rather than noticing poor end-user performance and digging through the code to find the problem.
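
In practice, that instrumentation usually means attaching structured context to each unit of work so the resulting telemetry can answer questions later. The sketch below is only an illustration, not Honeycomb's product or API: it uses the open-source OpenTelemetry library for Python, and the span name, attributes and checkout function are hypothetical.

```python
# Hypothetical example: wrap a unit of work in a span and attach attributes,
# so an observability backend can slice failures and latency by those fields.
# (An SDK and exporter would be configured elsewhere; the API alone is a no-op.)
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def checkout(cart):
    with tracer.start_as_current_span("checkout") as span:
        # Attributes let engineers later ask "which cart sizes are failing?"
        span.set_attribute("cart.size", len(cart))
        try:
            total = sum(item["price"] for item in cart)
            span.set_attribute("cart.total", total)
            return total
        except Exception as exc:
            # Record the failure on the same span that carries the timing data.
            span.record_exception(exc)
            span.set_status(trace.Status(trace.StatusCode.ERROR))
            raise
```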

Usually software organizations aren't dealing with issues like the massive outage that took down Slack last week: Most software incidents are minor, Staples said.

"When things fail, they don't fail completely. You see drop-off rates, you see errors fail for 10 percent of users," he said. "What you do as an engineer is you're constantly using those signals: to know where to go invest more, whether it's improving the feature to get the customer through a trouble spot that's slowing them down and keeping them on, whether it's scaling a part of the system to increase the performance of that component or other things."

Fixing those problems quickly, rather than spending time debugging a poorly performing application, can give software teams more time to focus on improving their products.

"Code is like food; it rots," Majors said. Preventing that rot from dragging down an entire system as quickly as possible can prevent bigger outages down the road that cost companies money.

But it's not just the code that needs observing; it's the people.

Jones is a veteran of high-performing software teams at Slack, Netflix and Jet.com. Yet even within companies at the forefront of software development practices, organizational structures can make as much of an impact on healthy applications as coding practices, she said.

At one of those companies (she declined to share which), a disproportionately large number of performance issues happened within a short period of time each year, and the company was having a hard time figuring out what was causing the problem. Turns out, those problematic periods came just after its annual promotions cycle, during which engineers had scrambled to ship as much code as they could in a short period of time to hit their goals for the year.

"It wasn't their fault. It was the system that was created at the company," Jones said. "Understanding that these promotion cycles were being correlated to an increase in incidents, because people were trying to get things done really quickly, actually incentivized the company to completely restructure how they did their promotion cycles, which led to this kind of stuff not happening as much."

Insights like that led Jones to found Jeli, which allows companies to evaluate and monitor how their organizational structures map against their coding practices. The company just raised a $4 million seed round to build out tools for that type of customer.

Infinite runway

The promise of observability tools is preventive maintenance: Not only will you be able to see and react to problems faster than current monitoring tools allow, but you'll also be able to glean insights from that data in a way that helps protect against future problems yet to rear their heads.

The surge of interest in this space from upstarts and traditional monitoring companies has lots of ideas flying fast and furious, but it will take some time before that promise will be met, according to Allspaw.

"We have enough problems with the known unknowns," he said. "The runway to make progress on that is as close to infinite as we can get."

Still, 25 years into the internet revolution, we've come to expect certain levels of performance and reliability from our web and mobile applications. Big organizations like AWS, Google, Netflix and others are well down the observability road inside their own companies. And now the tools and companies that will bring those insights to the rest of us are starting to get traction.
