Flash! Supercomputing goes solid-state
A testbed computer at Lawrence Livermore National Laboratory is operating with flash-based memory. The capability and the associated Linux technologies will appear in the next generation of U.S. Energy Department supercomputers.
A prototype computer system is demonstrating the use of flash memory in supercomputing. The Hyperion Data Intensive Testbed at Lawrence Livermore National Laboratory uses more than 100 terabytes of flash memory.
Hyperion is designed to support the development of new computing capabilities for the next generation of supercomputers as part of the Energy Department's high-performance computing initiatives. Specifically, it will help test the technologies that will be a part of Lawrence Livermore’s upcoming Sequoia supercomputer.
The Hyperion testbed is a 1,152-node Linux cluster, said Mark Seager, assistant department head for advanced technology at Lawrence Livermore. It was delivered in 2008, but serious operational testing could begin only with the recent addition of solid-state flash input/output memory.
Flash memory is a key component of the Hyperion system, Seager said. It takes the form of 320-gigabyte enterprise multilevel cell (MLC) ioMemory modules and cards developed by Fusion-io.
Supercomputers augment their active memory with data held in long-term storage on disk. Designers typically use dynamic RAM (DRAM) chips as a temporary repository for active data before it is written out to storage. Shortening the transfer time between long-term storage and accessible memory is key to higher supercomputer speeds. Flash memory reduces the need for DRAM in that staging role, shortening the transfer; it also greatly reduces the amount of hardware needed, significantly cutting space and power requirements. And unlike DRAM, flash memory chips retain their data when electrical power is cut off.
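To make that transfer-time gap concrete, here is a minimal sketch, not taken from the Hyperion project, that times a single 4 KB read served from the storage device and then the same read served from memory through the Linux page cache. The file path and block size are illustrative assumptions.

/*
 * Illustrative sketch (not from Hyperion): time one 4 KB read served from
 * the storage device versus the same read served from memory (the page
 * cache), to show the transfer-time gap the memory hierarchy is built around.
 *
 * Build: cc -O2 -o gap gap.c
 * Usage: ./gap <any file of at least a few kilobytes>
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define BLOCK 4096

/* Time a single pread() of BLOCK bytes at offset 0, in microseconds. */
static double timed_read(int fd, char *buf)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (pread(fd, buf, BLOCK, 0) != BLOCK) { perror("pread"); _exit(1); }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[BLOCK];

    /* Drop any cached copy so the first read must come from the device. */
    posix_fadvise(fd, 0, BLOCK, POSIX_FADV_DONTNEED);
    double cold = timed_read(fd, buf);   /* served from storage */
    double warm = timed_read(fd, buf);   /* served from the page cache (DRAM) */

    printf("from device: %.1f us   from memory: %.1f us\n", cold, warm);
    close(fd);
    return 0;
}

On typical hardware the cached read returns in a few microseconds, while the cold read takes roughly tens of microseconds on flash and several milliseconds on a rotating disk; that spread is the gap the Hyperion design attacks.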
Seager said the testbed is a partnership between Lawrence Livermore and 10 participating commercial firms that are testing technologies destined for Sequoia. Red Hat has been testing its Linux kernel on the machine, he noted, and Oracle has been testing and developing its Lustre 1.8 and 2.0 releases on it for six months. Other Linux-based technologies being evaluated include cluster distributions of Linux and the InfiniBand software stack.
Testing for the Hyperion system will include trials of the Lustre object storage code on the array’s devices. Seager said the goal is to see how much faster various processes can be made to operate by using flash memory. He added that Lawrence Livermore researchers also want to use an open source project called FlashDisk, which combines flash memory with rotating media in a transparent, hierarchical storage device behind the Lustre server.
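The hierarchical idea can be pictured with a toy sketch; this is not the FlashDisk code, and every name and size in it is invented for illustration. Reads check a small, fast tier first and fall back to the larger, slower tier, promoting the block on a miss so later reads are served from flash.

/*
 * Toy sketch of a transparent two-tier read path (not the FlashDisk code):
 * a small, fast "flash" tier fronts a larger, slow "disk" tier, and blocks
 * that miss are promoted so repeat reads are served from the fast tier.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK        4096
#define FLASH_SLOTS    64    /* small, fast tier */
#define DISK_BLOCKS  1024    /* large, slow tier */

static char disk[DISK_BLOCKS][BLOCK];   /* stands in for rotating media */
static char flash[FLASH_SLOTS][BLOCK];  /* stands in for the flash tier */
static int  flash_tag[FLASH_SLOTS];     /* which disk block each slot holds */

/* Read one logical block; the caller never sees which tier served it. */
static void tiered_read(int block, char *out)
{
    int slot = block % FLASH_SLOTS;     /* direct-mapped placement */

    if (flash_tag[slot] == block) {
        memcpy(out, flash[slot], BLOCK);   /* hit: served from flash */
        return;
    }

    /* Miss: fetch from the slow tier, then promote the block to flash. */
    memcpy(out, disk[block], BLOCK);
    memcpy(flash[slot], disk[block], BLOCK);
    flash_tag[slot] = block;
}

int main(void)
{
    char buf[BLOCK];

    for (int i = 0; i < FLASH_SLOTS; i++)
        flash_tag[i] = -1;                      /* mark all slots empty */
    snprintf(disk[7], BLOCK, "hello from block 7");

    tiered_read(7, buf);   /* first read misses and promotes the block */
    tiered_read(7, buf);   /* second read is served from the flash tier */
    printf("%s\n", buf);
    return 0;
}

A real tiered device adds writes, an eviction policy and crash consistency; the sketch shows only the read-and-promote path that keeps the tiering transparent to the software above it.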
Seager said the project will also examine methods of using flash memory directly, without a file system. “We think that that will probably give us the best random [input/output operations per second] performance,” he said. Achieving performance in excess of 40 million IOPS is a key goal of the effort.
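To give a rough sense of how such raw-device measurements are made, though this is not the lab’s own benchmark, the sketch below opens a block device with O_DIRECT so no file system sits in the path, issues random 4 KB reads from several threads and reports aggregate IOPS. The device path, thread count and per-thread read count are assumptions.

/*
 * Minimal random-read IOPS sketch (not the lab's benchmark): open a raw
 * device with O_DIRECT, bypassing any file system, and issue aligned 4 KB
 * reads at random offsets from several threads.
 *
 * Build: cc -O2 -pthread -o riops riops.c
 * Usage: sudo ./riops /dev/nvme0n1   (any block device or large file)
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BLOCK   4096
#define THREADS    8
#define READS  20000          /* random reads issued per thread */

static const char *path;
static off_t dev_size;

static void *worker(void *arg)
{
    (void)arg;
    int fd = open(path, O_RDONLY | O_DIRECT);   /* no file system in the path */
    if (fd < 0) { perror("open"); exit(1); }

    void *buf;                                  /* O_DIRECT needs aligned I/O */
    if (posix_memalign(&buf, BLOCK, BLOCK)) exit(1);

    unsigned int seed = (unsigned int)(uintptr_t)&fd;
    for (int i = 0; i < READS; i++) {
        off_t off = ((off_t)rand_r(&seed) % (dev_size / BLOCK)) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) { perror("pread"); exit(1); }
    }
    free(buf);
    close(fd);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <device>\n", argv[0]); return 1; }
    path = argv[1];

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    dev_size = lseek(fd, 0, SEEK_END);
    close(fd);
    if (dev_size < BLOCK) { fprintf(stderr, "target too small\n"); return 1; }

    struct timespec t0, t1;
    pthread_t tid[THREADS];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f random-read IOPS across %d threads\n",
           (double)THREADS * READS / secs, THREADS);
    return 0;
}

A single flash card delivers far fewer operations per second than the project’s target, so reaching tens of millions of IOPS means aggregating many cards and keeping many requests in flight at once, which is why the sketch spreads its reads across threads.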
The Hyperion system uses 80 1U servers that occupy two racks without filling them. A similar system using conventional data storage technology would occupy about 46 racks, Seager said. That translates into power savings an order of magnitude better than current systems, he added.
All of these technologies support Lawrence Livermore’s large high-performance computing efforts. The data-intensive testbed extension of Hyperion was designed to meet the goals of Sequoia, the next-generation advanced strategic computing system being built by IBM and scheduled for delivery in mid-2011.
Sequoia will be a third-generation Blue Gene system with a compute capability of about 20 petaflops and 1.6 petabytes of memory. Another goal is random I/O bandwidth of 1 terabyte per second.
When Hyperion’s technologies are used in Sequoia, the supercomputer will take up relatively little space and save power. Seager noted that IBM’s Blue Gene line is focused on exceptional flops per watt, with high-end performance at low power as a central design goal.
The Lawrence Livermore research is funded by the National Nuclear Security Administration. Lawrence Livermore, Sandia National Laboratories and Los Alamos National Laboratory will use Sequoia to support the Stockpile Stewardship mission of ensuring the security and reliability of the nation’s nuclear stockpile without underground testing.