Containers vs. Docker vs. Kubernetes vs. containerd vs. runC… Part 2: Psyduck is confused.


This is part two of a two-part article. Click here to read part one!

Now that we know what containers really are, and how to create our own, we can appreciate Docker more. Docker makes it easier to run and manage containers, whether through the CLI or a GUI. But we still don’t know what exactly Docker is, how it compares to Kubernetes, or even to containerd and runC.

So, let’s get to it.

Container Runtime

We don’t want to run a bunch of different CLI commands to run a container. Or even worse, make a bunch of system API calls. That’s exactly why Docker was once so popular. But, as you’d assume, that popularity made other players try to enter the same market, such as rkt. This sudden container boom started to fragment the market quickly. That became even more apparent when Kubernetes (more on it later) started to be more broadly adopted and developers had to write pretty much the same code over and over to support these different container “vendors”.

OCI, the Open Container Initiative, was then created to try to fix this fragmented market by providing community-built and community-supported standards that “containers” should follow. Docker (the company) backed this move heavily by donating parts of Docker’s (the software) open-source code to the foundation. The first piece was runc.

But what is a container runtime? According to OCI’s glossary, it reads the configuration files from a bundle, uses that information to create a container, launches a process inside the container, and performs other lifecycle actions. It’s the abstraction layer closest to the kernel. runc is OCI’s reference implementation and is a dependency of many popular projects nowadays, including the Docker CLI itself. Other popular container runtimes are crun, a C implementation quite similar to runc (which is written in Go), and Kata Containers, a hypervisor-based implementation combining two other once-popular runtimes, Clear Containers and runV.

A bundle is made up of a JSON file containing the metadata necessary to implement standard operations against the container (the process to run, environment variables to inject, sandboxing features to use, etc.) plus the container’s root filesystem. But how would one go about building either of them? Simple: use a container engine.
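To make that a bit more concrete, here is a minimal sketch (in Go, using the OCI runtime-spec types) of generating a bundle’s config.json. The hostname, rootfs path, and process below are made-up placeholders, and a real bundle would also need the root filesystem unpacked next to the file.

```go
// A minimal sketch of a bundle's config.json, generated with the OCI
// runtime-spec Go types. The hostname, rootfs path, and process are
// illustrative placeholders.
package main

import (
	"encoding/json"
	"log"
	"os"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	spec := specs.Spec{
		Version:  specs.Version, // the spec version this bundle conforms to
		Hostname: "sandbox",
		Root: &specs.Root{
			Path:     "rootfs", // directory holding the container's root filesystem
			Readonly: true,
		},
		Process: &specs.Process{
			Args: []string{"/bin/sh"}, // the process to launch inside the container
			Env:  []string{"PATH=/usr/bin:/bin"},
			Cwd:  "/",
		},
	}

	f, err := os.Create("config.json")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// config.json plus the rootfs/ directory is what an OCI runtime like runc
	// consumes as a bundle.
	if err := json.NewEncoder(f).Encode(&spec); err != nil {
		log.Fatal(err)
	}
}
```

In practice you rarely write this file by hand; that is exactly the job the next layer takes over.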

Container Engine

This piece sits “on top of” the runtime. It takes human-friendly inputs (CLI commands, API calls, etc.) and leverages the container runtime of choice to actually run the container and manage its lifecycle. It usually implements features such as pulling images from a registry, expanding/decompressing images on disk, preparing a container mount point, and building the JSON file that the container runtime requires in order to run the container.

Yes, you guessed right: Docker is a container engine. But there’s more to it. Remember that Docker donated parts of its code to the OCI? It also agreed to, and later delivered, a different part of its code to the Cloud Native Computing Foundation (CNCF). That part is the core piece of its container engine and is now known as containerd. Think of containerd as pretty much everything I mentioned above minus a CLI; the CLI remains part of Docker’s own, yet still open-source, code.
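To give a feel for what “engine minus a CLI” means, here is a rough sketch of driving containerd programmatically through its Go client, loosely following the pattern in containerd’s getting-started docs. The socket path, namespace, image, and container ID are assumptions for illustration.

```go
// A rough sketch of what a container engine does under the hood: pull an
// image, create a container from it, and start a task, via containerd's
// Go client. Socket path, namespace, image, and IDs are assumptions.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the containerd daemon over its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "example")

	// Pull the image and unpack it onto disk.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container: a new snapshot for its filesystem plus an OCI
	// spec derived from the image config (the config.json from earlier).
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Hand the container to the runtime (runc by default) as a task and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started with PID", task.Pid())
}
```

Everything the Docker CLI normally hides from you, an engine like containerd exposes through an API along these lines.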

There are other popular container engines out there, such as LXD (an evolution of LXC), CRI-O (a really big name nowadays), and each cloud provider’s specific engine, like AWS Fargate. Many of these, such as containerd and CRI-O, are compatible with the Kubernetes Container Runtime Interface (CRI), meaning they all expose the same set of APIs. This solves the problem discussed earlier, where Kubernetes had to write code for each runtime. Now the Kubernetes community can focus solely on expanding and improving Kubernetes itself.
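As a sketch of what “exposing the same set of APIs” looks like in practice, the snippet below asks any CRI-compatible engine for its runtime name and version over the CRI gRPC interface, which is the same call the kubelet makes regardless of which engine sits on the other end. The socket path is an assumption and differs per engine.

```go
// A minimal sketch of talking to a CRI-compatible engine over gRPC.
// The kubelet speaks this same interface whether containerd or CRI-O
// is on the other end. The socket path below is an assumption.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI plugin listens on its main socket by default;
	// CRI-O typically uses /var/run/crio/crio.sock instead.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Version is part of the CRI RuntimeService; every compatible engine answers it.
	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("engine: %s %s (CRI %s)", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```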

CRI-O was first created by Red Hat as a Kubernetes-centric engine, with no focus on tooling for end-user interaction. Today it is the default engine in several Kubernetes distributions, is widely supported by big players such as Intel and SUSE, and is built on top of runc.

Note: This is a fairly contentious topic. Many consider all of the above to be container runtimes as well, reserving the “container engine” name for solutions like Docker and Podman, which offer tooling for the container developer, such as a friendly CLI and build capabilities.

Kubernetes

Everything we’ve discussed up to now has something in common: it is tooling/interfaces for running a container on a single computer. How do we scale that to multiple servers? We have finally arrived at Kubernetes town.

According to its own documentation, Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It can help organizations automate application rollouts and rollbacks, orchestrate storage, provide service discovery and load balancing, manage secrets and configuration, add self-healing capabilities, and much more. All of that while communicating with the container runtimes via the container engines’ CRI-compatible APIs. If you come from the VMware world, think of Kubernetes as your vSphere.
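For a small taste of that automation layer, here is a sketch using the official Go client (client-go) to ask a cluster what is running. The kubeconfig location and the namespace are assumptions; the point is that you talk to the Kubernetes API server, and it is the one dealing with nodes, engines, and runtimes on your behalf.

```go
// A small sketch of talking to a Kubernetes cluster with client-go:
// load a kubeconfig, build a clientset, and list pods in a namespace.
// The kubeconfig location and namespace are assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use the same kubeconfig kubectl uses (~/.kube/config by default).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List pods in the "default" namespace. The API server schedules these
	// onto nodes, where the kubelet runs them via a CRI-compatible engine.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, string(p.Status.Phase))
	}
}
```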

Kubernetes on its own is a huge topic, so we are not going to dive into its details. It has, however, surprisingly good documentation for an open-source project; I highly recommend it as a resource.

Conclusion

The Container world is an increasingly confusing one. Honestly, I probably made mistakes over the last few paragraphs. Do you agree or disagree with what was said? Was this information helpful? Comment below and let me know!

Also, is anybody here going to KubeCon North America 2021 in Los Angeles? If so, let’s continue the conversation in person, I’ll be around 🙂