A major change is coming to Kubernetes, the open-source project at the heart of many modern cloud deployments: a breakup, of sorts, with its legacy Docker container runtime.
Built-in dockershim support for the Docker Engine runtime will be removed in the upcoming release of Kubernetes, version 1.24, which was supposed to arrive this week but now has May 3 as a scheduled release date. The change requires users who want to run the latest version of the container-orchestration system to shift to another runtime that’s compatible with Kubernetes’ Container Runtime Interface (CRI), if they haven’t already, or to use dockershim’s external replacement, known as cri-dockerd.
Developers and administrators who fail to make necessary changes could risk breaking their clusters and corresponding apps. But for most users, dockershim’s removal should be relatively straightforward to handle, according to James Laverack, who’s leading the release team for Kubernetes 1.24.
“The major change will be that the Kubernetes nodes themselves — this is a running Kubernetes cluster — will no longer, by default, be able to use Docker as a container runtime,” said Laverack, a senior solutions engineer for Jetstack. “People have done this change a lot before. When we first introduced alternative container runtimes, many operators and users changed to using those runtimes instead of Docker for a variety of reasons, which is why we introduced the change at all.”
Developers use containers to speed up software development because they isolate everything needed to build and deploy applications without the overhead of a full operating system. Early versions of Kubernetes worked only with Docker Engine as a container runtime, the software that executes the containers making up a Kubernetes pod.
The Cloud Native Computing Foundation (CNCF)-hosted Kubernetes project introduced CRI in 2016 as a plug-in interface that enables interoperability between Kubernetes and a variety of container runtimes. Docker Engine itself isn’t CRI-compatible; it is dockershim, a container runtime interface shim, that allows developers to use Docker Engine as if it were compatible.
“It's a great time to move on,” said Mrunal Patel, senior principal software engineer for Red Hat OpenShift, a hybrid-cloud, Kubernetes application platform. “These alternative runtimes have been proven in production already, so users shouldn't be afraid of this change. We should usher in this new era of CRI-based runtime that will help us move faster in adopting newer features.”
Red Hat has been using CRI-O in production for nearly three years. The first and subsequent versions of OpenShift shipped with CRI-O, and thousands of customers have been using it in production, Patel said.
Pulling away from the dock
The Kubernetes project deprecated dockershim in December 2020 with Kubernetes 1.20, giving notice that it subsequently would be removed from Kubernetes and lead time to make needed adjustments to avoid breaking clusters. Docker-produced images are Open Container Initiative (OCI)-compliant and will continue to work in clusters with all CRI-compliant runtimes.
Dockershim, which is built into Kubernetes’ kubelet code base, had always been viewed as a temporary solution, and maintaining it has been cited as a burden. A kubelet, which is an agent that runs on each node in a cluster, ensures that containers are running in a pod. The CRI standard allows container runtimes to be decoupled from the kubelet code base for simplified maintenance.
“[Docker] has features for building containers as well as running containers,” Patel said. “When you talk about running containers in production, you don't necessarily need the same privileges as when you are a developer developing an application on your laptop. You need them more locked down. You need a more minimal runtime, which is more suitable for doing just exactly what Kubernetes needs and nothing more.”
The removal of dockershim requires developers and cluster administrators to go through an “inconvenient, but necessary” migration, as described by Víctor Jiménez Cerrada, a content manager engineer at container security software vendor Sysdig.
“The community has been very conscious to provide lots of instruction and lots of information and context around this change over the past couple of years,” Laverack said. “[CRI is] an open standard, and there are a number of [runtime] implementations provided by companies and by the wider community as well. Any of those will work and will be supported by Kubernetes in current versions as well as future versions.”
After determining whether a Kubernetes cluster has been using Docker Engine, the mechanics involve changing kubelet configurations to point to the socket of another runtime, such as containerd or CRI-O; the kubelets then begin talking to that runtime to manage one’s containers, Patel said.
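On a kubeadm-provisioned node, that change can be as small as editing the kubelet’s flags file. The snippet below is a sketch, not a definitive procedure: the file path is kubeadm’s convention, the flag names are those the kubelet used in the 1.23/1.24 era, and the socket path is containerd’s default. (Running `kubectl get nodes -o wide` first will show each node’s current runtime in its CONTAINER-RUNTIME column.)

```shell
# /var/lib/kubelet/kubeadm-flags.env (kubeadm-managed nodes; path may vary)
# Point the kubelet at containerd's CRI socket instead of dockershim.
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
# For CRI-O, the default socket is unix:///var/run/crio/crio.sock instead.
# Restart the kubelet afterward, e.g.: systemctl restart kubelet
```

On clusters built with other tooling, the same endpoint setting lives wherever that tooling keeps kubelet arguments.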
“That's the simple part of it,” he said. “And the good thing is Kubernetes upstream already is running end-to-end tests with these runtimes. Whenever new code is added to Kubernetes now, all the tests are done against these runtimes.”
Cluster operators should also determine if they have existing code that’s talking directly to Docker, “behind the back of Kubernetes,” Patel said.
“Kubernetes itself needs to talk to a runtime, and we have CRI as this interface that it's supposed to use, but what if you have some workloads that are directly talking to the Docker socket to say ‘perform builds’ or so on?” he said. “Those are the things that users should be auditing and checking.”
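One concrete sign of such a workload is a pod that mounts the Docker socket from the host. The sketch below shows one hypothetical way to audit for that: it scans pod specs in the shape returned by `kubectl get pods -A -o json` (here, a made-up sample) for hostPath volumes pointing at the Docker socket. The sample data and function name are illustrative, not part of any Kubernetes tooling.

```python
import json

# Hypothetical, trimmed-down sample of `kubectl get pods -A -o json` output,
# keeping only the fields this audit needs.
pods = {
    "items": [
        {
            "metadata": {"namespace": "ci", "name": "builder"},
            "spec": {
                "volumes": [
                    {"name": "dock",
                     "hostPath": {"path": "/var/run/docker.sock"}}
                ]
            },
        },
        {
            "metadata": {"namespace": "web", "name": "frontend"},
            "spec": {"volumes": [{"name": "tmp", "emptyDir": {}}]},
        },
    ]
}

def docker_socket_mounts(pod_list):
    """Return (namespace, name) of pods that mount the Docker socket."""
    hits = []
    for pod in pod_list["items"]:
        for vol in pod["spec"].get("volumes", []):
            path = vol.get("hostPath", {}).get("path", "")
            if path.endswith("docker.sock"):
                hits.append((pod["metadata"]["namespace"],
                             pod["metadata"]["name"]))
    return hits

print(docker_socket_mounts(pods))  # → [('ci', 'builder')]
```

A pod flagged this way is bypassing the CRI entirely and would break once Docker Engine is no longer running on the node, regardless of which runtime the kubelet uses.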
Developers can still use Docker locally to develop or test their containers, no matter which container runtime they use for Kubernetes clusters.
“You can push them to any OCI-compliant registry, and Kubernetes will be able to pull them and run those applications,” Patel said. “That's not going away. With the OCI standardization, all these container runtimes and how those images are stored and distributed … has been standardized.”
Those using a managed Kubernetes service from a cloud provider likely can just sit back if they haven’t explicitly changed their container runtime, according to Kat Cosgrove, developer advocate for cloud engineering company Pulumi. Amazon Elastic Kubernetes Service, Microsoft’s Azure Kubernetes Service and Google Kubernetes Engine all now default to containerd, “though you should make sure they do not need updating if you have any node customizations,” Cosgrove noted in a recent Kubernetes blog post.
Cluster operators who want to upgrade to Kubernetes 1.24 but maintain compatibility with Docker as a runtime have an option that isn’t as risky as running an old version of Kubernetes, according to Cosgrove.
“Mirantis and Docker have jointly released, and are maintaining, a replacement for dockershim,” she wrote in the blog. “That replacement is called cri-dockerd. If you do need to maintain compatibility with Docker as a runtime, install cri-dockerd following the instructions in the project’s documentation.”
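Mechanically, cri-dockerd slots in like any other CRI runtime: once it is installed per the project’s documentation, the kubelet is simply pointed at its socket. A sketch, assuming cri-dockerd’s default socket path and kubeadm’s flags-file convention:

```shell
# After installing cri-dockerd, point the kubelet at its CRI socket so
# Docker Engine keeps working as the runtime on Kubernetes 1.24+.
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///var/run/cri-dockerd.sock"
# Then restart the kubelet, e.g.: systemctl restart kubelet
```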
Those who stick with the last version of Kubernetes that includes dockershim eventually risk operating without security fixes while also missing out on new features, according to Patel.
Under current Kubernetes project policy, support is provided for the most recent three releases. Kubernetes 1.23, the last that supports dockershim, will receive patch support until the release of Kubernetes 1.26, which is currently expected in December.
“When you're running Kubernetes, one thing that should be at the top of your mind is security,” Patel said. “You're living dangerously if you're not moving to one of the recommended CRI runtimes.”