The history of modern containers is long and complicated, going back to the days of the mainframe and then through technologies like Solaris Zones to Linux’s adoption of cgroups as a foundation of its OS-level virtualization features. Those Linux Containers (LXC) were a key part of the early Docker platform, providing an isolated userspace to host and run Docker containers.

As containers continued to evolve, Docker developed its own runtime environment, which was adopted by many open source platforms, including Kubernetes. That has led to Docker being the most common way to build, package, and deploy containers. However, it also led early versions of Kubernetes to support multiple container runtimes, allowing you to deploy containers using different runtimes in the same cluster.

Kubernetes’ move to using OCI and Dockershim

Over time both Docker and Kubernetes have evolved. Docker’s container image format was adopted as the basis for the Open Container Initiative’s (OCI) image specification, while its runc runtime was donated to the OCI as the reference implementation of the OCI runtime specification. Kubernetes, meanwhile, defined a standard CRI (container runtime interface) that any compliant runtime can implement. Together, these open container specifications provide tools to manage the complete life cycle of a container in much the same way as Docker but with deep integration into the Kubernetes ecosystem.

Kubernetes’ move to managing pod containers through the CRI required implementing a shim that converted CRI calls into Docker calls, putting an extra layer into Kubernetes’ container management that fully CRI-compliant runtimes don’t need. With all of the Kubelet’s container management now going through the CRI, the Kubernetes team decided that this Dockershim would only be a stopgap, giving Kubernetes installations time to migrate to CRI-ready container platforms, especially as there wasn’t yet a CRI-ready container runtime for Windows containers, an essential requirement for Azure.

An additional problem was that the hard-coded Dockershim support was being used by other parts of Kubernetes and by other projects that were built on top of the platform. The result was code that could be fragile and buggy. The Kubernetes team finally deprecated Dockershim, allowing developers time to move off of it before it was removed. The original announcement said it would go sometime after the release of Kubernetes 1.23.

That day is coming very soon. With the April 2022 release of Kubernetes 1.24, Dockershim support will be completely removed. Microsoft supports new Kubernetes releases very close to launch, so it’s time to check if this breaking change will affect your code.

How Azure uses Dockershim today

Currently, Azure Kubernetes Service (AKS) Linux node pools created with Kubernetes 1.19 or later are already running containerd. This means you don’t have to use Dockershim: AKS uses the Kubernetes container runtime interface (CRI) to connect your Kubelets directly to containerd. This removes a set of management steps and interfaces from AKS, so your applications should be more responsive, scaling more quickly and using fewer resources. With Docker support, each Kubelet call first had to pass through Dockershim to the Docker engine, which in turn called Docker’s own underlying containerd implementation.
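You can confirm which runtime your nodes are using with kubectl; this is a quick check, assuming kubectl is already configured against your AKS cluster:

```shell
# Show each node with its container runtime; containerd-backed nodes
# report containerd://<version> in the CONTAINER-RUNTIME column
kubectl get nodes -o wide

# Or query just the node names and runtime versions directly
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```

Any node still reporting a docker:// runtime version is one that will be affected by the Dockershim removal.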

Those performance gains are important, especially if you’re using Kubernetes in conjunction with KEDA (Kubernetes-based Event-Driven Autoscaling) or other event-driven tools. Creating new pods as required will be faster, allowing your app to respond more quickly to increased demand. It could also lead to long-term cost savings by allowing you to scale down to zero in more cases, wherever your application’s tolerance for latency can absorb the time taken to start up a container instance.

Windows-based containers may be more of an issue. Microsoft only started to make a preview of Windows support for containerd available in 2021, and it requires explicit custom headers in your cluster configuration. General availability will come with AKS’s release of Kubernetes 1.23, sometime in February 2022.

It’s important to understand that removing Dockershim from Kubernetes doesn’t stop Docker images from running in your AKS environment. However, those containers won’t run on Docker, as Docker doesn’t support the Kubernetes CRI. In practice they’ll run on other OCI-compliant runtimes, as Docker implements the OCI container image specification.

Updating AKS node pools to use containerd

Although some older Kubernetes instances will continue to run, they won’t be supported. As Microsoft updates Azure’s Kubernetes tools it will eventually remove support for older versions, so you will need to update Docker-based clusters where necessary. Kubernetes’ own support life cycle gives each minor version as much as 12 months of support (an increase from the original nine months). With a new minor release coming roughly every three to four months, Microsoft is committed to supporting the last three minor versions of Kubernetes. That gives you about a year to upgrade your AKS applications; Kubernetes 1.22 will roll out of support with the general release of Kubernetes 1.25, likely in January or February of 2023.

Luckily the upgrade process for Kubernetes applications running on AKS is relatively simple. If you’re using Linux, then you’re already using a containerd-based environment. If you’re still on an older, unsupported version, upgrading your cluster will automatically move you to containerd. There’s no change needed to your registries or to your containers, and you can carry on using Docker to build and test on your own systems. There shouldn’t be any issues, but it’s a good idea to set up a test system using the latest AKS Kubernetes version to ensure that your application works in the latest environment.
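The upgrade itself can be driven from the Azure CLI. A minimal sketch, where my-rg and my-aks are placeholder names for your resource group and cluster, and the version number is only an example:

```shell
# List the Kubernetes versions your cluster can be upgraded to
az aks get-upgrades --resource-group my-rg --name my-aks --output table

# Upgrade the cluster; on Linux node pools this also moves any
# remaining Docker-based nodes to containerd
az aks upgrade --resource-group my-rg --name my-aks --kubernetes-version 1.23.3
```

Run the upgrade against a test cluster first so you can verify your workloads behave the same under the new runtime.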

Things are a bit more complex if you’re using Windows containers. The easiest option is to first add a containerd node pool to your existing AKS cluster. You need to explicitly add a custom header to the node pool definition that sets the value of WindowsContainerRuntime to containerd. You can then experiment with moving containers or adding new containers to the new node pool. It’s also possible to upgrade a single node pool or an entire cluster to containerd, using the Azure CLI. This gets your code running on containerd, but unless you explicitly create new node pools with containerd, they’ll still be based on Docker.
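That custom header is passed on the command line when you create or upgrade a Windows node pool. A sketch of both options, using my-rg, my-aks, npwcd, and npwin as placeholder names and an example version number:

```shell
# Add a new Windows node pool that runs containerd by setting the
# WindowsContainerRuntime custom header described above
az aks nodepool add \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name npwcd \
    --os-type Windows \
    --kubernetes-version 1.23.3 \
    --aks-custom-headers WindowsContainerRuntime=containerd

# Or upgrade an existing Windows node pool in place to containerd
az aks nodepool upgrade \
    --resource-group my-rg \
    --cluster-name my-aks \
    --name npwin \
    --kubernetes-version 1.23.3 \
    --aks-custom-headers WindowsContainerRuntime=containerd
```

Adding a separate pool first lets you test workloads on containerd side by side with your existing Docker-based nodes before committing to an in-place upgrade.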

With the general availability release of Kubernetes 1.23 on AKS, containerd will be the default for new Windows containers as well as for Linux. This will make it easier to complete your migration before Kubernetes 1.24 rolls out later in 2022.

There are some additional recommendations. As the Docker engine is no longer present on your nodes, the Docker CLI can’t be used to troubleshoot running pods, and you’ll need a different tool. Microsoft recommends using crictl, which has a Kubernetes-centric way of working. This does have a bit of a learning curve, but it’s not too onerous. There are changes to how containerd logs are written, and you may need to change your logging platform to one that supports the Kubernetes CRI log formats. Azure’s own monitoring tools already support this format. They’re recommended as a replacement for working with the Docker engine, which is no longer accessible.
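For common troubleshooting tasks, crictl has rough equivalents of the docker commands you may be used to. A few examples, run on the node itself, where <container-id> stands in for a real container ID:

```shell
crictl ps                           # list running containers (like docker ps)
crictl pods                         # list pods, a concept docker doesn't have
crictl logs <container-id>          # fetch a container's logs (like docker logs)
crictl exec -it <container-id> sh   # open a shell in a container (like docker exec)
crictl inspect <container-id>       # low-level container details (like docker inspect)
```

Note the pod-level view: because crictl speaks the CRI, it can show you pods as well as individual containers, which maps more naturally onto how Kubernetes itself sees your workloads.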

Both the developers of Kubernetes and Microsoft’s Azure team have gone a long way to remove risk from the Dockershim transition. If you’re using Dockershim in AKS, it’s now time to move to containerd. There shouldn’t be any issues beyond switching to a new log format and learning how to use new troubleshooting tools. Although that does require some changes to how you might have been working with AKS, they’re relatively minor. The result is a good example of how development teams like Kubernetes and platforms like Azure can manage fundamental technology transitions, keeping your applications running with minimal work on your part.