Zombies, orphans and other perils of virtualization
Virtualization offers agency IT shops many benefits, but it’s also rife with pitfalls that can make it a network killer.
“Let’s virtualize it. That will fix everything.”
That refrain can be both the solution to all that ails agency IT and the catalyst for new federal IT nightmares. Virtualization has been treated as a silver bullet for a host of data center problems, from overloaded servers and poor performance to skyrocketing cooling, power and licensing costs.
But virtualization has its own set of pitfalls that, if ignored, can open a Pandora’s box of problems that some agencies are simply not equipped to handle.
Compounding these problems is the fact that many of the pitfalls start out as benefits – especially ease of use and simplicity of deployment. But problems arise when agencies adopt virtualization on a larger scale, turning perceived benefits into IT migraines.
Ease of use? Or easy to abuse?
Virtualization turns one of IT’s biggest chores, provisioning and deploying servers, into a process on par with copy-paste. Where agency IT departments once had to find and specify an approved vendor, provision cooling and power resources and then install the box, virtual machines require only a few mouse clicks. However, this simplicity easily backfires.
At a data-center level, virtual machine proliferation can quickly spiral out of control as machines are commissioned for highly specific or one-off uses. This “throwaway” mentality swiftly leads to virtualization sprawl, birthing zombie VMs (virtual machines that no longer serve a specific role but still drain server resources) and orphaned virtual disks (disk files whose parent VM is no longer in service but which still consume storage).
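One quick way to spot likely orphans is to compare the disks that registered VMs actually reference against the disk files sitting on shared storage. The following is a minimal Python sketch of that comparison; the input file names are hypothetical stand-ins for exports from whatever inventory and storage tools an agency runs:

    # orphan_scan.py - a minimal sketch of orphaned-disk detection.
    # Assumes two plain-text exports (hypothetical names): one listing every
    # virtual disk referenced by a registered VM, one listing every disk
    # file actually found on shared storage.

    def load_paths(filename):
        # Return the set of non-empty, stripped lines in the file.
        with open(filename) as f:
            return {line.strip() for line in f if line.strip()}

    registered = load_paths("registered_vm_disks.txt")   # disks VMs still reference
    on_storage = load_paths("datastore_disk_files.txt")  # disk files on the SAN

    # Anything on storage that no registered VM points to is an orphan candidate.
    for path in sorted(on_storage - registered):
        print("possible orphan:", path)

Anything the script flags should be verified by hand before deletion; the point is to generate a review list, not to automate cleanup.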
Decommissioning VMs is also tricky – as deployments grow, taking a VM down without fully understanding its purpose could lead to service outages or poor performance, something no agency IT administrator wants to be responsible for.
Deploying virtualization on a large scale — such as the Office of Management and Budget’s Data Center Consolidation Initiative — is also far different from testing in a controlled environment. Agency networks are heterogeneous – many federal IT environments will employ different virtualization solutions at different levels of maturity, along with multiple SANs and dozens of legacy systems, leading to “hyperdense” computing environments.
Suddenly, what seemed so simple at the lab level is now reminiscent of an M.C. Escher drawing.
Data centers: Too skinny for virtualization
Another virtualization problem for agencies is bandwidth – specifically, network traffic inside the data center. Typically, federal IT teams are more concerned with outbound traffic, from the primary location to a redundant facility or to end users. Virtualization, however, turns these concerns inside out, quite literally.
Thanks to virtualization’s hyperdense and dynamic nature, intra-data-center traffic becomes a serious problem. Virtualized and consolidated technologies, from virtual machines and infrastructure to SANs and backup systems, can easily clog internal network links.
Most federal data centers have never had to worry about internal network congestion, and many deploy virtualization on network gear that was never designed to handle it. As a result, when consolidation ramps up and VM counts grow by 300 percent over a three- to five-year period, IT is completely unprepared for the resulting bandwidth problems.
Consider the cloud
When it comes to virtualization, there's no easy alternative – either an agency consolidates its IT operations or it does not. This does not mean, however, that consolidation must take place in-house. Enter: the cloud.
Although the notion of federal cloud computing could easily fill an entire book (and likely has, many times over), the cloud is a relatively straightforward answer for agencies that face skyrocketing data center costs but are not yet ready for in-house virtualization. While based on virtualization (as are all as-a-service solutions), the cloud’s technology is transparent to the agency end user, giving federal IT teams relief from infrastructure management while retaining virtualization’s benefits.
What to do?
As agency virtualization deployments pick up speed, it’s vital that IT personnel not blind themselves by looking only at the technology’s positives. Instead, IT teams must take a critical look at their existing network, and follow a simple checklist to avoid falling into virtualization’s pitfalls.
Environmental inventory. The importance of knowing exactly what an existing agency network entails, from physical equipment and existing virtual machines to SANs and network capacity, cannot be stressed enough. This leads to the second item on the virtualization checklist…
Set a baseline. Federal IT administrators need to know how their networks are functioning prior to virtualization, so that they can easily gauge the implementation’s success. This also helps admins respond to the eventual user complaint of “It was better before we virtualized.”
Skills inventory. Team leaders need to ensure that the personnel working under them actually understand virtualization. Are they certified in the chosen technology, or are training programs on the horizon? Can they effectively respond to the new needs of a virtualized, hyperdense, dynamic environment?
Capacity optimization. Capacity planning and modeling must begin prior to the actual deployment and continue on a weekly, if not daily, basis. This helps detect overused or underused servers, as well as where bandwidth bottlenecks and application problems are occurring (a simple sketch of this kind of analysis appears below).
The right tools. Finally, every agency taking on virtualization needs to have the right tools, beyond those supplied by virtualization vendors. The ability to measure and optimize network performance before, during and after the virtualization deployment is vital, and a challenge that vendor tools struggle to meet.
The last point is particularly important. With the right management tools in place, federal IT teams can bypass many of the pitfalls inherent in virtual infrastructure and storage. By finding management solutions that support heterogeneous computing environments and are “vendor agnostic,” much of the complexity associated with virtualization deployments disappears or, at the least, is greatly reduced.
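To make the baseline and capacity items concrete, here is a minimal sketch of the kind of analysis involved, assuming per-server CPU utilization samples have been exported to a CSV file. The file name, column names and thresholds are hypothetical; in practice these numbers would come from the agency’s own monitoring tools:

    import csv
    from statistics import mean

    OVERUSED, UNDERUSED = 0.80, 0.20  # hypothetical utilization thresholds

    # Hypothetical input: one row per sample, with columns "server" and
    # "cpu_util" (a fraction between 0.0 and 1.0).
    samples = {}
    with open("cpu_samples.csv") as f:
        for row in csv.DictReader(f):
            samples.setdefault(row["server"], []).append(float(row["cpu_util"]))

    for server, utils in sorted(samples.items()):
        avg = mean(utils)
        if avg >= OVERUSED:
            print(f"{server}: avg {avg:.0%} - overused, rebalancing candidate")
        elif avg <= UNDERUSED:
            print(f"{server}: avg {avg:.0%} - underused, consolidation candidate")

Run before deployment, the same report doubles as the baseline; run weekly afterward, it shows how far the virtualized environment has drifted from that baseline.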
Not a silver bullet, but…
Although it's not a silver bullet, virtualization remains a key technology in the federal IT world. Without it, the notion of a federal cloud falls flat and data center consolidation is nearly impossible to achieve.
The technology can help agencies dramatically cut costs and improve efficiency, but its complexities must be carefully managed if federal agencies want to realize virtualization’s benefits. To do so, federal IT teams need to do more than select the right virtualization vendor – they need to carefully plan every aspect of the deployment, from skills inventories to management tools.
With the right planning, these pitfalls are easily avoided, helping agencies realize virtualization’s potential as a platform for innovation that can lead to the cloud and beyond.