Unlocking the mysteries of science with Linux containers
Containers give agencies the ability to maximize their HPC workloads while making the process more efficient, repeatable and secure.
From COVID-19 to national security challenges and space exploration, high-performance computing (HPC) is being leveraged to solve some of our nation’s most pressing issues. In combination with modern application development, HPC is advancing the science and understanding needed to address these challenges.
In a traditional HPC environment, computational scientists set up clusters and run workloads. Quite often, these workloads must be rebuilt for each machine, which makes sharing and reusing applications difficult. Running cloud-style supporting applications on the HPC cluster itself is one option, but it makes little sense to expend precious HPC compute cycles on jobs that aren’t computationally intensive.
Time spent on computer science could be better spent on scientific research. This is where Linux containers come in. Containers give agencies the ability to get the most out of their HPC workloads while making the process more efficient, repeatable and secure. They’re ideal for running smaller yet valuable applications -- web apps that users log into, or the modest databases holding data critical to the research being compiled -- in short, applications that can run on their own without waiting in line for supercomputing resources. Containers can also be used for HPC itself, as with the Singularity open source project.
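To make that last point concrete, here is a minimal sketch of how a containerized workload might be launched from inside an HPC batch job using Singularity’s command-line interface. The image name, bind path and script are hypothetical placeholders, not a prescribed workflow.

```python
import subprocess

# Hypothetical example: run an analysis script packaged in a prebuilt
# Singularity image from within an HPC batch job. The image path, bind
# mount and script name below are placeholders.
result = subprocess.run(
    [
        "singularity", "exec",
        "--bind", "/scratch/project:/data",  # expose shared storage inside the container
        "analysis.sif",                      # prebuilt, portable container image
        "python", "/opt/app/run_model.py",   # workload packaged inside the image
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

Because the image bundles the application and its libraries, the same command produces the same environment on any cluster node that has Singularity installed.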
Let’s take a look at why agencies should leverage containers today to further accelerate their HPC workloads.
Containers and clusters, side by side
Running containerized web applications outside of an HPC cluster is a great way for agencies to get started while enjoying a return on their HPC hardware investment. Containers can augment the power of HPC clusters, allowing applications and their corresponding data to be used as input to larger jobs running in the HPC cluster.
A good example is the containerization of the open source Apache Kafka event streaming platform. Event streaming can be unpredictable and bursty. Containerizing Kafka so it can scale out to meet demand surges not only helps ensure messages are not dropped, but also keeps the HPC cluster from stalling on ingest-side bottlenecks.
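As an illustration of the ingest side of that pattern, the sketch below uses the third-party kafka-python client to stream bursty readings into a containerized Kafka broker sitting alongside the cluster. The broker address, topic name and payloads are assumptions made for the example, not part of any particular deployment.

```python
import json
from kafka import KafkaProducer  # third-party kafka-python client

# Hypothetical example: push bursty event data into a containerized Kafka
# broker running next to the HPC cluster. The broker address and topic
# name are placeholders.
producer = KafkaProducer(
    bootstrap_servers="kafka.hpc-support.example:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for reading in ({"sensor": "s1", "value": 0.42}, {"sensor": "s2", "value": 1.7}):
    producer.send("ingest.sensor-readings", reading)

producer.flush()  # block until buffered messages are delivered
```

Scaling out then simply means running more containerized broker or consumer instances as the event rate climbs, while the HPC cluster pulls staged data at its own pace.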
Containers allow developers to package applications with all of the application files and shared library dependencies they need to run. Those containers can then be moved easily from one team’s container hosts to another, or shared globally via a container registry, all of which increases reuse and accelerates science. This can significantly reduce the time and effort required to set up HPC workloads, and it makes those workloads easier to share, since every dependency travels inside the highly portable container image.
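A rough sketch of that build-and-share loop, using the Docker SDK for Python, might look like the following; the registry URL and image tag are hypothetical, and the same idea applies to Podman, Buildah or other OCI tooling.

```python
import docker  # Docker SDK for Python (docker-py)

# Hypothetical example: package an application and its dependencies as a
# container image, then push it to a shared registry so another team can
# pull an identical environment. The registry and tag are placeholders.
client = docker.from_env()

image, build_logs = client.images.build(
    path=".",  # directory containing the Dockerfile and application files
    tag="registry.example.gov/research/genomics-portal:1.0",
)

for line in client.images.push(
    "registry.example.gov/research/genomics-portal",
    tag="1.0",
    stream=True,
    decode=True,
):
    print(line)
```

Once pushed, any collaborator with access to the registry can pull the exact same image and reproduce the environment without rebuilding it by hand.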
For example, researchers can run very fine-grained data models on the main HPC cluster while using containers for smaller, coarser-grained models and for the supporting pieces -- databases, orchestrators, user portal applications and other public-facing web apps. This is a more cost-effective way to do the research, since it would be far too expensive to run everything in the HPC cluster.
Choosing the right components
While using containers side by side with an HPC cluster is possible (and, indeed, is already in practice at many labs), choosing the right combination of supported components is critical not only for performance but for repeatable science. This is true at the bare-metal level on the HPC cluster, and it is equally true of the container platform that augments that cluster. HPC requires speed, but science is all about repeatability.
Containers can provide speed while supporting repeatability, so long as agencies have the right connective tissue in place. Not every container base image is supported or tested on every container host. As such, untested combinations of container base images and hosts have the potential to yield different results.
That’s why it’s so important for agencies to ensure that their chosen combinations of container images and container hosts are vendor-supported configurations. In addition to providing repeatability, this helps drive maximum performance from connected GPUs and other hardware accelerators, addressing that need for speed. Professional testing by hardware and software manufacturers further helps avoid performance regressions.
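One lightweight way to validate such a combination before committing cluster time is a quick smoke test that runs a GPU query inside the supported image. The sketch below shells out to Docker with an example NVIDIA CUDA base image; it assumes the host has NVIDIA drivers and the NVIDIA container toolkit installed, and the image tag is only illustrative.

```python
import subprocess

# Hypothetical smoke test: confirm the container image/host/GPU combination
# works before scheduling real workloads. Assumes NVIDIA drivers and the
# NVIDIA container toolkit are present on the host.
check = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.2.0-base-ubuntu22.04", "nvidia-smi"],
    capture_output=True,
    text=True,
)

if check.returncode == 0:
    print("GPU visible inside container:\n" + check.stdout)
else:
    print("GPU check failed:\n" + check.stderr)
```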
Benefits of containers in HPC research and future applications
Containers aren’t just appropriate for heavy-duty scientific research, however. Federal, state, and local agencies could use containerized web apps for a variety of smaller-scale projects -- assessment of COVID-19 hotspots or the creation of vaccination schedules, for example. There are any number of government-related use cases where containerization can help expedite scientific methods. Indeed, with the right supporting host, the possibilities for crunching data are nearly endless.
Today, containers play a critical role in medical research and other use cases. Tomorrow, we’ll undoubtedly see them used alongside HPC and machine learning to run statistical simulations that can positively impact everything from smart city planning to warfighter preparedness.
This work will require a much larger, more integrated software ecosystem, one that should be containerized to ensure its many HPC applications work consistently. Agencies can prepare for this future now by starting to use containers to expedite and empower current HPC projects.