12 Sep Debunking Red Hat’s Assertions in ‘Containers Debunked’

I recently came across an article titled ‘Containers Debunked’ at CBR Online. In the article, Lars Herrmann, GM of Integrated Solutions at my former employer Red Hat (which had acquired my startup Qumranet, Inc.), wrote about DevOps and containers. But while Lars attempted to explain the relationship between DevOps and containers, he actually created a bigger mess and left the reader only more confused.

As a founder of four virtualization companies, including XenSource (Xen) and Qumranet (the creators of KVM), as well as an investor in several others, I read the article in question with great interest. I very much appreciate Lars’ well-intentioned attempt at conceptualizing the state of the art in virtualization. Indeed, the explosive growth of Docker and other similar technologies is changing the technology landscape faster than most of us can keep up with. In response to his article, I believe there is a need to clarify which technology makes the most sense for which use case. Containers have been around for decades, and virtualization has been around since at least the early 1960s. Linux has had LXC containers for almost ten years now. In the early 2000s, Xen revolutionized the use of Linux on servers; then in 2006 KVM brought Linux virtualization to the next level. At the beginning of this decade, DevOps conquered the world of agile development within just a few short years.

Nowadays, we are looking at Xen, KVM, VMware’s ESXi and a few other, more esoteric hypervisors on the virtualization side. Containers come with names such as LXC, OpenVZ, Virtuozzo, or Docker. Luckily, most of the above are free (as in free beer or free speech) software, so we can try them all out, which people often do. That, however, raises my original question: when should we use which technology, and for what purpose?

Hypervisors such as KVM and Xen recreate a full machine abstraction and are ideal for running applications where you want full control of the operating system kernel. While VMs deliver on the promise of increasing server utilization, the fact is that with hypervisors, each VM requires its own full OS, TCP/IP, and file system stacks, which consumes significant processing power and memory on the host machine. Advances in virtual memory architecture allow for some sharing of resources among similar VMs, but the overhead of a complete machine abstraction is substantial.

Containers instead virtualize namespaces for things like process IDs, file system mount points, users and more. Containers take performance and resource utilization a step further by sharing the host machine’s OS kernel, TCP and file system stacks, and other system resources (such as the disk cache), so each workload runs with less memory and CPU overhead. Containers were initially used for workload management purposes, and as such were operations tools (as opposed to modern DevOps tools); I call those “infrastructure containers”, and they tend to be persistent and long-running.
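On Linux, these namespaces are visible directly in procfs; as a quick (Linux-only) sketch, you can list which namespaces the current process belongs to:

```shell
# List the kernel namespaces the current process belongs to (Linux-only).
# Each entry (pid, mnt, net, uts, ipc, user, ...) is a handle to one
# namespace type; a container gets fresh copies of these rather than a
# whole virtual machine.
ls /proc/self/ns

# With root privileges, a new PID namespace can be entered directly; the
# shell inside sees itself as PID 1, just as a container would:
#   sudo unshare --pid --fork --mount-proc sh -c 'echo "my PID: $$"'
```

The entire isolation mechanism lives inside one shared kernel, which is exactly why containers avoid the per-VM stack duplication described above.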

Over time, infrastructure containers such as LXC, OpenVZ and Virtuozzo have proven themselves to be powerful, cost-effective tools for running production-ready apps. By sharing one OS on the same hardware, you only need to apply security patches once, and all containers benefit from those OS security enhancements.

On the other hand, DevOps containers such as Docker have emerged as the most useful for developers who want to create, package and test applications in a highly portable and agile way. DevOps containers package up only the resources needed to run an application, leaving out everything else, which makes them easy to deploy and move around and thus accelerates application development.
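As a minimal sketch of that packaging model (the file names and base image here are hypothetical, not from the article), a Dockerfile bundles only an application and its direct dependencies:

```dockerfile
# Hypothetical example: package a small Python web app and nothing else.
# Start from a minimal base image rather than a full OS install.
FROM python:3-slim
WORKDIR /app
# Install only the application's declared dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# One process, started when the container runs.
CMD ["python", "app.py"]
```

Because the image carries only these layers, it can be built once (`docker build -t myapp .`) and then run unchanged on a laptop, a CI runner, or a production host.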

I would like to point out that there is a place for each one of these technologies. Having said that, in the private enterprise data center, or even in the home data center, people probably still use VMs as their ‘go-to’ virtualization solution too often. In many cases, an infrastructure container technology such as LXC, OpenVZ or Virtuozzo would actually bring better performance, higher hardware utilization, and improved manageability compared to full-stack hypervisors such as ESXi, Xen or KVM. In fact, I recently ran a benchmark in my own test data center on a bunch of HP DL580 servers, and I achieved a 3.1x higher workload density when using OpenVZ compared to KVM. That is a very significant margin. With commercial infrastructure container offerings such as Virtuozzo you can also get more secure and more easily manageable environments.

So, why do people use VMs so often when infrastructure containers can do the job better? I believe the answer lies in the management layer. VMware vSphere and Red Hat RHEV today still have the widest and deepest range of management, storage and orchestration software to manage workloads. Once you are dealing with more than a few servers, management is the determining factor in a successful deployment.

Moreover, to counter one of the main points of the article: when used for DevOps, hypervisors do not provide any advantage or solution above and beyond infrastructure containers. Conversely, Docker does not provide an infrastructure workload management solution in the sense that ESXi, KVM or Virtuozzo do.

In conclusion, there is no one-size-fits-all technology out there for virtualization today, nor do I believe there should be. Use the best technology and product for what your use case requires.

My friend Ruslan Synytsky, CEO of Jelastic, has written a superb article with great graphs that explains this complex topic much better than I could. Take a gander at it here.

-Moshe Bar, General Partner