Comatose servers draining data center efficiency
While most of the challenges involve management practices and information flows, technical solutions can help.
With government IT departments facing tight budgets, it makes sense to look for ways to wring every bit of performance from the computers in data centers. Unfortunately, not every data center machine is running at peak efficiency. In fact, some are hardly running at all.
In one data sampling, 30 percent of physical servers had not delivered information or computing services in six months or more, according to a report released by IT consulting firm Anthesis Group in conjunction with Stanford University and TSO Logic, a provider of data center analytics tools. Those idle data center servers represent a $30 billion worldwide loss, Data Center Knowledge reported.
The findings support previous research by the Uptime Institute, which also found that 30 percent of servers worldwide are unused. With 10 million “comatose” servers worldwide, that translates into $30 billion in data center capital idling, assuming an average server cost of $3,000, not including infrastructure and operating costs.
Co-author Jonathan Koomey, research fellow at Stanford’s Steyer-Taylor Center for Energy Policy and Finance, said the cause of comatose servers is a lack of communication between IT and infrastructure teams. “The needed changes are not primarily technical, but revolve instead around management practices, information flows, and incentives,” he wrote.
“There really needs to be one boss, one team and one budget,” Koomey said. “We need to change the way the data center is managed.”
One way government IT departments are addressing the issue of idle computers is distributed computing: linking machines so they work as if they were a single system. Scaling up the number of computers working together increases processing power, reducing or eliminating the need to dedicate separate servers to different tasks.
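In practice, the pattern looks roughly like the sketch below, which uses Dask’s distributed scheduler as one example framework for fanning a workload out across pooled machines. The scheduler address and the workload are placeholders, not details from any system named in the report.

```python
# A minimal sketch of pooling machines into one logical computer,
# using Dask's distributed scheduler. The address and task are
# illustrative; Client() with no argument starts a local test cluster.
from dask.distributed import Client

def crunch(record):
    """Stand-in for any CPU-bound job a dedicated server might run."""
    return sum(ord(c) for c in record)

if __name__ == "__main__":
    # Connect to a scheduler fronting many worker machines
    # (hypothetical address).
    client = Client("tcp://scheduler.example.gov:8786")

    records = [f"job-{i}" for i in range(10_000)]
    futures = client.map(crunch, records)   # fan work out across workers
    results = client.gather(futures)        # collect results when done
    print(len(results))
```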
A second idea is an on/off switch. Algorithms exist to turn servers on and off based on capacity needs in order to cut energy consumption, but powering a server down may not be the best answer, according to the MERCi-MIsS report, which noted that “the turning off/on of servers consumes a certain amount of energy and also induces the wear and tear of disks.”
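A back-of-the-envelope way to see the trade-off the report describes: powering down only pays off when the energy saved over the expected idle window exceeds the cost of the off/on cycle itself. The sketch below illustrates that break-even logic with made-up figures; it is not the MERCi-MIsS algorithm.

```python
# Toy break-even check for power cycling a server. All figures are
# placeholders, not measurements from the report.
IDLE_WATTS = 150          # draw of an idle server
CYCLE_COST_WH = 40        # energy spent shutting down and rebooting
WEAR_PENALTY_WH = 20      # rough proxy for disk wear per power cycle

def worth_powering_off(expected_idle_hours: float) -> bool:
    saved_wh = IDLE_WATTS * expected_idle_hours
    return saved_wh > CYCLE_COST_WH + WEAR_PENALTY_WH

for hours in (0.1, 0.5, 2.0):
    print(f"idle {hours}h -> power off: {worth_powering_off(hours)}")
```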
The software-defined data center – in which networking, storage, compute and security are virtualized and delivered as a service – can also help reduce the number of comatose servers and increase efficiency.
However, even virtualization may not fully address the problem, at least not at first. More than half of virtual server implementations experience “virtual sprawl” in the first year, which can erode the gains initially achieved with virtualization, according to a blog post by Transworld Data president Mary Shacklett. “First, the ease of resource provisioning in the virtual environment makes it almost a mindless exercise for IT. Second, most sites are focused on time to market with their virtual system deployments,” she wrote. “No one is watching the backend of this process.”
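One way to watch that back end, in keeping with the six-month idleness criterion in the Anthesis study, is to periodically flag machines whose utilization has stayed near zero over a long window. The sketch below shows the idea; the inventory format, names and thresholds are hypothetical.

```python
# Hedged sketch: flag servers or VMs whose peak CPU has stayed below a
# threshold for six months or more, echoing the study's criterion.
from datetime import timedelta

COMATOSE_WINDOW = timedelta(days=180)   # six months of observation
CPU_THRESHOLD = 0.05                    # 5% peak utilization (assumed)

def flag_comatose(inventory):
    """inventory: iterable of (name, observation_span, peak_cpu) tuples."""
    for name, span, peak_cpu in inventory:
        if span >= COMATOSE_WINDOW and peak_cpu < CPU_THRESHOLD:
            yield name

machines = [
    ("web-01", timedelta(days=200), 0.42),
    ("batch-legacy", timedelta(days=365), 0.01),  # reclaim candidate
]
print(list(flag_comatose(machines)))
```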