DOE wish list: Exascale computing at a price it can afford
The Department of Energy has enlisted a seven-lab consortium to develop hardware and software technologies capable of one quintillion calculations per second.
The U.S. government wants to build exascale computers -- supercomputers on steroids -- for a wide range of activities, from improving national security to studying climate change and finding cures for diseases.
But an exascale computer built with today’s technology would draw more power than a sizable city and carry a price tag greater than the GDP of some small countries.
The Department of Energy wants to change all that.
In a joint effort with the National Nuclear Security Administration (NNSA), the Energy Department's Office of Science recently issued several awards through its FastForward program to develop the future hardware and software technologies these machines will need: memory, processors, and storage and input/output (I/O), the communication between a computing system and the outside world.
FastForward is contracted through Lawrence Livermore National Laboratory on behalf of a seven-lab consortium: Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory and Sandia National Laboratories.
Four awards have been announced so far:

- Intel: $19 million for both processor and memory technologies.
- AMD: $12.6 million for processor and memory technologies.
- NVIDIA: $12.4 million for processor technology.
- Whamcloud (along with EMC, Cray and the HDF Group): an undisclosed amount for storage and I/O technologies.

Additionally, according to U.K.-based The Register, IBM also received an award, but the company has not yet released details.
Not all of the subcontracts have been made public so far, either.
The goal of the program is to create a computer capable of performing one quintillion -- a billion billion -- calculations per second: an exaflop, a thousand times the petaflop mark and roughly 60 times faster than today’s speediest supercomputer, LLNL’s Sequoia. Sequoia, which has an operating speed of 16.32 petaflops (a petaflop is a quadrillion floating-point operations per second), won the title of world’s fastest supercomputer on the Top500 list released June 18 at the International Supercomputing Conference (ISC12) in Hamburg, Germany, GCN reported.
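The scale is easier to grasp with a quick back-of-the-envelope calculation (a minimal sketch in Python; the Sequoia figure is taken from the Top500 result cited above):

```python
# Compare an exaflop machine with Sequoia, the June 2012 Top500 leader.
PETAFLOP = 1e15   # floating-point operations per second
EXAFLOP = 1e18

sequoia_flops = 16.32 * PETAFLOP   # Sequoia's measured speed

print(EXAFLOP / PETAFLOP)          # 1000.0 -- an exaflop is a thousand petaflops
print(EXAFLOP / sequoia_flops)     # ~61.3  -- the speedup still needed over Sequoia
```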
The Defense Advanced Research Projects Agency announced a similar program in 2010, the Omnipresent High Performance Computing program.
The problem is that the basic architecture of today’s supercomputers hasn’t really changed since the early 1990s and isn’t all that different from the technology inside desktop computers. The only sizable difference is scale: a supercomputer harnesses thousands, or even hundreds of thousands, of chips working in parallel.
As a result, these computers are costly, energy-guzzling beasts. Even Sequoia, one of the most energy-efficient supercomputers, delivers an energy efficiency of only about 2 gigaflops per watt.
An exascale computer built using today’s technology could have an electric bill of over $500 million a year, said Richard Murphy, a computer architect at Sandia National Laboratories, in Discover magazine last year.
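The arithmetic behind that estimate is straightforward (a rough sketch; the 10-cents-per-kilowatt-hour rate is an assumed figure for illustration, not from the article):

```python
# Estimate the annual power bill for an exaflop machine built at
# Sequoia-class efficiency of about 2 gigaflops per watt.
EXAFLOP = 1e18                  # operations per second
FLOPS_PER_WATT = 2e9            # ~2 gigaflops per watt efficiency

power_watts = EXAFLOP / FLOPS_PER_WATT     # 500,000,000 W = 500 megawatts
hours_per_year = 24 * 365                  # 8,760 hours

energy_kwh = (power_watts / 1000) * hours_per_year   # ~4.38 billion kWh
RATE_PER_KWH = 0.10             # assumed electricity rate, dollars per kWh

print(f"${energy_kwh * RATE_PER_KWH / 1e6:,.0f} million per year")   # ~$438 million
```

At a slightly higher electricity rate, the bill clears $500 million, consistent with Murphy’s figure.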
Of course, once these computers are developed, someone needs to make sense of the information they produce. To address that issue, Argonne opened the Scalable Data Management, Analysis and Visualization Institute to develop ways to let scientists spend less time sifting through data and more time on science, according to Robert Ross, a computer scientist and deputy director at Argonne.
And last month IBM and LLNL announced they had formed a collaboration called Deep Computing Solutions, to be housed within LLNL’s High Performance Computing Innovation Center, to help U.S. industry harness the power of supercomputing to better compete in the global marketplace.