Air flow control can yield more efficient data centers
DOE's Lawrence Berkeley National Laboratory (LBNL) is engaged in several projects to demonstrate how cooling technology and controls can more effectively manage air flow in data centers, thus improving energy efficiency.
One project, with Intel, IBM and Hewlett-Packard, funded by the California Energy Commission, will explore the possibility of using temperature sensors that are already inside servers to directly control the computer room air conditioning (CRAC) units that regulate the flow of cool and hot air into and out of the data center.
The idea is to ensure that the right amount of cool air is being delivered to the server inlet. CRAC systems in most data centers typically focus on cooling the entire room, but that can result in uneven and inefficient distribution.
Additionally, LBNL is working on a separate demonstration to show how data center managers can use wireless temperature sensors to directly control computer room air handlers, which push air into ducts, said William Tschudi, project leader of LBNL’s Building Technologies Department.
“The idea is you have a finer mesh of being able to monitor temperature and then control the computer room air handlers to give [the facility] exactly what it wants rather than oversupplying air,” Tschudi said.
“We’re also working with Sun Microsystems on the demonstration of different cooling technologies," he said. "All companies are trying to demonstrate different pieces” to improve energy efficiency. The results of some of these demonstrations will be shared with people attending the Silicon Valley Leadership Group’s conference in the fall, he added.
Demonstrations on air management and cooling techniques are just part of government and industry efforts to advance innovation and spur greater energy efficiency in data centers. The Environmental Protection Agency is leading efforts to establish an Energy Star specification for enterprise servers so IT managers can buy systems that deliver performance but reduce energy consumption. EPA also is working on an Energy Star rating for data centers with the Green Grid, a consortium of industry and government organizations.
But measuring energy efficiency in data centers could be a tougher nut to crack, experts say.
A view of two networks
The LBNL and Intel demonstration is slated to happen by the summer.
“Right now we’re working on how to get the IT network to work with the building control network,” Tschudi said.
Those two networks are separated, but the LBNL/Intel team is developing a management console that will give data and facility managers a view of both networks, said Michael Patterson, senior power and thermal architect at Intel’s Eco Technology Program Office.
The demonstration is being conducted at an Intel data center in California, which has IBM and HP servers and Liebert CRAC units, Patterson said.
The goal is not to develop a product, Patterson added. Because the California Energy Commission is funding the project, the goal is to document the results of the demonstration so data center and facility operators can learn from the team’s efforts.
“They can learn what the challenges are, how we did the interconnection and what some of the tricky bits were so if they want [they can] implement the same control strategy into their data center,” Patterson said. “So they can go into it smart rather than blindly and hoping for success.”
Blowing hot and cold
CRAC systems in most data centers pump pressurized air to keep server inlet temperatures within a recommended range. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends inlet temperatures of 64.4 to 80.6 degrees Fahrenheit and a dew-point (absolute humidity) range of 41.9 to 59 degrees Fahrenheit.
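As a rough illustration, and not something from the LBNL/Intel demonstration itself, a monitoring script could flag inlet readings that drift outside those recommended ranges; the limits below simply restate the ASHRAE figures quoted above.

```python
# Rough sketch: flag server inlet readings that fall outside the ASHRAE-recommended
# envelope cited above (64.4-80.6 F dry bulb, 41.9-59 F dew point).
ASHRAE_INLET_TEMP_F = (64.4, 80.6)   # recommended dry-bulb range
ASHRAE_DEW_POINT_F = (41.9, 59.0)    # recommended dew-point range

def inlet_within_ashrae(temp_f: float, dew_point_f: float) -> bool:
    """Return True if a server inlet reading sits inside both recommended ranges."""
    temp_ok = ASHRAE_INLET_TEMP_F[0] <= temp_f <= ASHRAE_INLET_TEMP_F[1]
    dew_ok = ASHRAE_DEW_POINT_F[0] <= dew_point_f <= ASHRAE_DEW_POINT_F[1]
    return temp_ok and dew_ok

# A 75 F inlet with a 50 F dew point is comfortably inside the envelope.
print(inlet_within_ashrae(75.0, 50.0))  # True
```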
The cooling units are positioned around the perimeter of a typical data center and come in a couple of configurations: CRAC units either receive chilled water from the building's central utility plant, or each unit contains its own localized air-conditioning plant, Patterson said.
Each unit combines a cooling component, which removes heat, with an air flow component. Motorized fans in the unit move the cool air around the room, usually beneath raised floors and up through perforated tiles to servers mounted in racks. The hot air is blown out of the servers, usually into hot aisles, and returned to the cooling unit.
During the mainframe era, there was no need for air flow segregation: a lot of cold air was dumped into the room, the computers released heat back into the room, and the warm air made its way back to the cooling unit.
Ultimately, managers shouldn’t be concerned with the temperature returning to the CRAC unit, Patterson said. What really matters is the inlet temperature to the server.
“To maximize efficiency you want to have just enough air flow and just enough cooling through chilled water or the refrigeration system in the CRAC,” Patterson said. “You can’t get this balance with the temperature sensor in the return air to the CRAC unit.”
However, you can if you tap into the temperature at the inlet of the server. Most server manufacturers put a front panel temperature sensor in their systems that reads the temperature of the air coming into the server, Patterson explained.
“If we can control that temperature and provide the front of the server with enough air flow, then we will have done our job to provide the most efficient cooling possible,” he said.
Essentially, the demonstration is intended to show how data center and facility operators can replace the control functionality of the cooling system with instrumentation that is already in the servers.
“We’re not saying add extra sensors or redesign servers or spend additional money when a new data center is spun out. The beauty of the project is that we are demonstrating the integration of the facility and the computer rather than keeping a wall between them,” Patterson said.
Thermo map
The building control system will be able to communicate with the management server that already monitors systems for hard drive failures or memory upgrades and pull the front panel temperature readings from it. The team is deploying algorithms that let the sensors tell the cooling system whether the air is cold enough and that, in turn, drive the chilled-water pump, Patterson said.
The team also will use sensors to measure the temperature at the bottom and top of the server rack to determine if there is enough air flow. Too little air flow means a large temperature differential between the bottom and top.
“With this thermo map of server inlets, we are going to have the control system be smart to modulate the whole load to significantly reduce the amount of energy we’re going to be using in the data center,” Patterson said.
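The article does not publish the team's algorithms, but a minimal sketch of the general idea might look like the following, with hypothetical inputs and illustrative setpoints: watch the front-panel inlet temperatures and the bottom-to-top rise across each rack, then nudge CRAC fan speed and cooling output up or down.

```python
# Minimal sketch of inlet-driven CRAC control; this is not the project team's
# actual algorithm, and the setpoints below are purely illustrative.
INLET_SETPOINT_F = 77.0   # target server inlet temperature (illustrative)
MAX_RACK_RISE_F = 10.0    # allowable bottom-to-top temperature rise (illustrative)

def adjust_crac(bottom_temps_f, top_temps_f, fan_speed_pct, cooling_pct):
    """Return new (fan speed %, cooling output %) from rack inlet readings.

    bottom_temps_f / top_temps_f: inlet temperatures read at the bottom and top
    of each rack, e.g. pulled from the servers' front panel sensors over the
    management network.
    """
    hottest_inlet = max(bottom_temps_f + top_temps_f)
    worst_rise = max(t - b for b, t in zip(bottom_temps_f, top_temps_f))

    # Too little air flow shows up as a large bottom-to-top temperature rise.
    if worst_rise > MAX_RACK_RISE_F:
        fan_speed_pct = min(100, fan_speed_pct + 5)
    else:
        fan_speed_pct = max(30, fan_speed_pct - 2)  # coast down when air is ample

    # Too warm an inlet means more cooling (chilled water or refrigeration) is needed.
    if hottest_inlet > INLET_SETPOINT_F:
        cooling_pct = min(100, cooling_pct + 5)
    else:
        cooling_pct = max(10, cooling_pct - 2)

    return fan_speed_pct, cooling_pct

# Two racks: one with adequate air flow, one running short of it.
print(adjust_crac([72, 74], [80, 85], fan_speed_pct=60, cooling_pct=50))  # (65, 55)
```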
The project team expects to cut energy use in those particular cooling units by more than 70 percent, he said. Most data centers run the fans in their cooling systems at 100 percent all the time.
“We only need 47 percent of the peak air flow on the average, so we’re going to only use 10 percent of the power compared to if these [cooling system fans] were turned on to run at full speed,” Patterson said.
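That arithmetic follows the fan affinity laws, under which fan power scales roughly with the cube of fan speed and thus of air flow: 0.47 cubed is about 0.10, or roughly 10 percent of full-speed fan power. A quick check:

```python
# Fan affinity law: fan power scales roughly with the cube of air flow (fan speed).
flow_fraction = 0.47                 # 47 percent of peak air flow
power_fraction = flow_fraction ** 3  # ~0.10, about 10 percent of full-speed fan power
print(round(power_fraction, 2))      # 0.1
```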
Benchmarking data centers
There is no silver bullet for improving energy efficiency in data centers, LBNL’s Tschudi said. A lot of areas interact with one another, and improvements can be made in power conversion and distribution, load management, server innovation and cooling equipment, he said.
But coming up with metrics to benchmark those improvements could be difficult, some industry experts say.
“We suspect the federal government is the largest operator of data centers probably in the world,” said Andrew Fanara, the EPA Energy Star product development team lead. As such, the opportunity is there for the federal sector to lead the way in improving data center operations, he said.
However, there has to be a way for data center operators to benchmark performance against the entire facility and measure against themselves over time to improve their efficiency, he added.
EPA has worked with various facility managers to come up with Energy Star ratings for buildings ranging from schools to supermarkets, and it decided to design a benchmark specifically for data centers, whether they stand alone or sit inside another commercial office building. The agency is working with the Green Grid to fine-tune that protocol, which will give data center operators advice on measuring the performance and energy efficiency of IT equipment, Fanara said.
“Unless you have the means to measure your performance, how do you know the investments are taking you in the right direction?” Fanara asked.
At the end of the research and analysis stage, EPA could have an Energy Star benchmark for data centers, though the analysis isn’t finished, he said.
So far, the Green Grid has proposed the Power Usage Effectiveness (PUE) metric and its reciprocal, Data Center Infrastructure Efficiency (DCiE), which compare the total power a data center facility draws with the power consumed by its IT load.
After an initial PUE/DCiE benchmark, data center operators have a baseline efficiency score. They can then set up a repeatable measurement routine for the facility and compare later scores against the baseline to gauge the impact of ongoing energy efficiency efforts.
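In practice, both metrics reduce to a ratio of the same two measurements, total facility power and IT equipment power; the numbers in the sketch below are purely illustrative and not drawn from any data center discussed here.

```python
# PUE = total facility power / IT equipment power; DCiE is its reciprocal.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw: float, it_equipment_kw: float) -> float:
    return it_equipment_kw / total_facility_kw

# Illustrative only: a facility drawing 1,800 kW to support a 1,000 kW IT load.
print(pue(1800, 1000))   # 1.8
print(dcie(1800, 1000))  # ~0.56, i.e. about 56 percent of the power reaches IT gear
```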
DOE has also developed the Data Center Energy Profiler tool, which offers a first step to help companies and government agencies identify potential savings and reduce environmental emissions associated with energy production and use.
DC Pro, an online tool, provides a customized overview of energy purchases, data center energy use, savings potential and a list of actions that can be taken to realize these savings.
Microsoft has developed a suite of sophisticated reporting tools to measure efficiency in its own data centers, said Kim Nelson, executive director of e-government at Microsoft.
The company uses the Green Grid’s PUE but has its own tools — based on business intelligence capabilities — that measure server utilization and CPU and wattage usage per server.
“We measure PUE and carbon emission factors that are generated by where you live geographically," she said. "We’ve been reporting those to EPA.”
Because EPA collects information from different organizations and agencies, Microsoft will need to evaluate whether the information it has given to the agency can be reasonably collected across the board.
IBM officials also have collected information on energy efficiency in the company's data centers and sent it to EPA for consideration and analysis.
“We provided a year’s worth of data on six of our data center buildings to EPA as part of their data collection process for the Energy Star building work for data centers,” said Jay Dietrich, program manager for IBM’s corporate environmental affairs group.
There is no meaningful metric for measuring workload at this point, Dietrich said. Data center operators can be very efficient with facility power and IT power, but if they are not optimizing the amount of work their servers are doing, good scores on those metrics might still not reflect the most efficient way to run a particular application, he said.
For now, EPA is just going to get data on IT power and the power needed to run the facility, he said. But the agency is interested in exploring how to introduce that workload component, as is the Green Grid, Dietrich said.
Many data centers don’t have sufficient instrumentation to gather the information they need to come up with a measure of efficiency, Intel’s Patterson said, adding that the company is working to promote a minimum level of instrumentation.
However, “you can’t wait until you have the right instrumentation suite out there. You may never actually start,” Patterson said.
Data center managers can still go around with a clipboard and write things down, Patterson said. “If you don’t measure, you can’t improve and you don’t know where to focus your effort for improvement,” he added.