Why government can’t – and shouldn’t – contain containers
Much has been written about the “need for speed” in government IT circles. Today, agencies are under pressure to develop applications faster and with more flexibility than ever before. Both of these attributes are hallmarks of a Linux container.
However, while this technology has grown in popularity, it is still misunderstood in many government circles. Properly deployed, Linux containers can be valuable in helping agencies achieve the speed and efficiency that they desire.
Containers bring all of the benefits of virtualization, but with better use of hardware, an already broad ecosystem of applications and a clean separation of responsibility between contractors and agencies, which eases procurement pain.
With basic Linux container technology, agencies can run installed applications on a host system in isolation. The building block is the application container image: a single compressed archive file that packages an application, along with its libraries, configuration files and executables, so it can run on multiple systems. These images can be handed to colleagues or posted publicly on a container image registry, all without the need for traditional virtualization.
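To make that concrete, here is a minimal sketch of building and exporting an application container image. It uses the Docker SDK for Python purely as an illustration; the article doesn't prescribe a particular engine or tool, and the directory, tag and file names below are hypothetical.

```python
import docker  # Docker SDK for Python (pip install docker); assumes a local container engine

client = docker.from_env()

# Build an application container image from a directory holding the app, its
# libraries and configuration files, plus a Dockerfile describing the layers.
# The path and tag are placeholders; the second return value is the build log.
image, _ = client.images.build(path="./agency-app", tag="agency-app:1.0")

# Export the finished image as a single archive file that colleagues can load
# on their own systems, or push it to a container image registry instead.
with open("agency-app.tar", "wb") as archive:
    for chunk in image.save(named=True):
        archive.write(chunk)
```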
But containers are far more than just “sexy wrappers” on your favorite applications. Linux containers bring a host of benefits to the agency IT world beyond general isolation and standardization, including:
Speed: As opposed to a traditional virtual machine (VM), which can take minutes to days to provision, a Linux container can be created in seconds, and the bulk of that time is spent simply starting the desired application (see the timing sketch after this list). Runtime performance, meanwhile, is close to bare metal.
Density: Application container images only need to store the application and its runtime dependencies. A VM disk image, by contrast, has to include a full operating system plus enough disk space for that operating system to run and store files. As a result, application container images are measured in megabytes, while VM disk files are measured in gigabytes.
It’s important to note that Linux containers aren’t running complete operating systems. They all share the same kernel but are securely separated from one another. If you want to run 10 web servers on a hypervisor using traditional virtualization, you need 10 VMs, 10 sets of operating system and software licenses, 10 disk files and so on.

With Linux containers, you simply run the 10 web server applications as isolated processes on one host, sharing the operating system kernel. If you are seeing 10-to-1 density when you use virtualization, you can expect to achieve 100-to-1 densities with Linux containers.
Flexibility: By decoupling the application from the underlying host operating system, containers open up enormously flexible deployment options.
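For the speed claim above, a rough sketch like the following (again using the Docker SDK for Python as an assumed client, with a small public image such as alpine) times how long it takes to create, run and clean up a throwaway container:

```python
import time

import docker  # Docker SDK for Python; any container engine client would do

client = docker.from_env()
client.images.pull("alpine", tag="latest")  # pull ahead of time so the timing measures start, not download

start = time.perf_counter()
client.containers.run("alpine", "true", remove=True)  # create, run a trivial command, clean up
elapsed = time.perf_counter() - start

print(f"Container created, ran and was removed in {elapsed:.2f} seconds")
```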
Also, you probably noticed more than a few comparisons to traditional virtualization. Don’t let this lead you to believe you must choose between traditional virtualization and containers. In fact, containers run easily on any combination of physical, virtual and cloud systems. Not only does this let you move applications to the cloud more easily, it insulates you from vendor lock-in because your images aren’t tightly coupled to one vendor’s cloud or virtualized infrastructure.
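Continuing the earlier packaging sketch, the same hypothetical archive could be loaded and run unchanged on a physical server, a VM or a cloud instance that has a container engine, for example:

```python
import docker  # Docker SDK for Python; the target host could be physical, virtual or cloud

client = docker.from_env()

# Load the archive produced earlier; nothing in it is tied to a particular
# hypervisor or cloud vendor, only to a Linux kernel and a container engine.
with open("agency-app.tar", "rb") as archive:
    images = client.images.load(archive.read())

# Run the application in the background, publishing a hypothetical port 8080.
container = client.containers.run(images[0].id, detach=True, ports={"8080/tcp": 8080})
print(container.short_id, "is running")
```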
With all of these benefits, agency IT shops are more than likely champing at the bit to get their hands dirty in the container world. Like any new technology, however, it is best to start from the ground floor, so consider the following a checklist for getting started.
First, many Linux operating systems now include the ability to run application container images in a Linux container. Check with your vendors for getting-started guides based on their best practices, and reach out to them if you get stuck.
Next, focus on select workloads. Large, legacy, monolithic applications may not be the best place to start, especially if they already work well in your infrastructure. New projects, particularly web applications, should definitely be on your list, since there is nothing to migrate.
Linux container and packaging technologies are available now, but many of the tools for orchestration, scheduling and lifecycle management are still in their infancy. Open source projects like Kubernetes, Cockpit, and Apache Mesos show a great deal of promise, but agencies should collaborate with industry partners to ensure requirements like security and disconnected operation are built in from the beginning instead of grafted on as an afterthought.
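As one hedged illustration of where that orchestration tooling is heading, the sketch below uses the Kubernetes Python client to declare a small replicated deployment of a container image. The names, image and replica count are placeholders, and agency requirements such as security controls and disconnected operation would still have to be addressed on top of a sketch like this.

```python
from kubernetes import client, config  # Kubernetes Python client (pip install kubernetes)

config.load_kube_config()  # reads the local kubeconfig; in-cluster config is also possible
apps = client.AppsV1Api()

# Declare a deployment that keeps three replicas of a containerized web app
# running; the scheduler decides which hosts actually run them.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="agency-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "agency-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "agency-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="agency-app:1.0")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```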
Following these steps can help government IT managers take better advantage of containers and turn them into the ideal solution for the hyper-paced, efficiency-starved IT environment.