Energy lab's Catalyst could spur next generation of HPC clusters
Livermore, Intel and Cray are deploying the uniquely designed Catalyst to explore new frontiers in HPC simulation and big data innovation.
Lawrence Livermore National Laboratory is collaborating with Intel and Cray on a “one-of-a-kind” high performance computing cluster designed to break new ground in HPC simulation and lead to better big data analysis.
Catalyst, a Cray CS300 cluster supercomputer, has a unique design and is intended to serve as something of a foundation for future approaches to supercomputing, particularly with regard to data-intensive applications.
The 150 teraflop machine will be shared among the three partners with access rights based on level of investment and managed through LLNL's High Performance Computing Innovation Center, LLNL officials said. HPCIC will offer access to Catalyst and the expected big data innovations it will provide as new options for its ongoing collaborations with U.S. companies and research institutions. Delivered to LLNL in late October, Catalyst is expected to be in limited use this month and general use by December.
In addition to LLNL, Intel and Cray, Catalyst will support the HPC requirements of the three weapons laboratories that serve the National Nuclear Security Administration's Advanced Simulation and Computing (ASC) Program at Livermore, Los Alamos and Sandia national labs. Officials said the supercomputer's architecture should provide insights into the technologies ASC could need over the next decade.
"The partnership between Intel, Cray and LLNL allows us to explore different approaches for utilizing large amounts of high performance non-volatile memory in HPC simulation and Big Data analytics,” Matt Leininger, deputy of Advanced Technology Projects for LLNL, said in the release. Non-volatile memory is computer memory that can get back information even when the power is turned off.
The increased storage capacity of the system — in both volatile and non-volatile memory — is a step up from classic simulation-based computing architectures used at Energy Department laboratories, officials said.
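One common way simulations take advantage of a large node-local non-volatile tier is to stage periodic checkpoints there rather than writing them straight to the shared parallel file system, one of the uses cited in the spec list later in this article. The minimal sketch below illustrates the idea; the mount points, file names and pickle format are illustrative assumptions, not details from the lab's announcement.

```python
# Minimal sketch: staging simulation checkpoints on a node-local
# non-volatile tier. The "/l/ssd" and "/p/lustre/checkpoints" paths
# are hypothetical placeholders, not paths from the Catalyst announcement.
import os
import pickle

LOCAL_NVRAM = "/l/ssd"                 # assumed node-local NVRAM mount
SHARED_FS = "/p/lustre/checkpoints"    # assumed shared parallel file system

def checkpoint(step, state, drain=False):
    """Write state to fast local storage; optionally copy ("drain")
    the file to the shared file system for durability across nodes."""
    local_path = os.path.join(LOCAL_NVRAM, f"ckpt_{step:06d}.pkl")
    with open(local_path, "wb") as f:
        pickle.dump(state, f)
    if drain:
        shared_path = os.path.join(SHARED_FS, os.path.basename(local_path))
        with open(local_path, "rb") as src, open(shared_path, "wb") as dst:
            dst.write(src.read())
    return local_path
```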
The storage advances will open new doors for exploring the potential of combining floating-point-focused capability with data analysis in one environment. Floating-point operations are the basic arithmetic steps behind large-scale mathematical calculations, and a machine's floating-point rate is a standard measure of its simulation performance.
Catalyst should extend the range of possibilities for the processing, analysis and management of the larger and more complex data sets that many areas of business and science now confront, officials said.
What’s in it?
Catalyst, a Cray CS300 cluster supercomputer with Intel technology being deployed by Lawrence Livermore National Laboratory, sports advanced features for high-performance computing and big data analytics, including:
- 150 teraflop/s (trillion floating-point operations per second) capacity (a rough arithmetic check appears after this list)
- 324 nodes, 7,776 cores
- Two 12-core Intel Xeon E5-2695 v2 processors per node
- 128 GB of dynamic RAM per node
- 800 GB of non-volatile memory per compute node
- 3.2 TB of non-volatile memory per Lustre router node
- National Nuclear Security Administration-funded Tri-lab Open Source Software for a common user environment across NNSA Tri-lab clusters
- Improved cluster networking
- An expanded node-local non-volatile storage tier for application checkpointing, visualization, out-of-core algorithms and big data analytics.
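For readers curious how the 150 teraflop/s figure follows from the hardware above, a rough back-of-the-envelope check is sketched below. The 2.4 GHz base clock and 8 double-precision operations per core per cycle (AVX) are assumptions about the Xeon E5-2695 v2, not figures quoted in the announcement; the node and core counts come from the spec list.

```python
# Back-of-the-envelope check of Catalyst's quoted ~150 teraflop/s peak,
# using the node and core counts from the spec list above.
nodes = 324
cores_per_node = 7776 // 324        # 24 cores per node (two 12-core CPUs)
clock_hz = 2.4e9                    # assumed E5-2695 v2 base frequency
flops_per_cycle = 8                 # assumed AVX double-precision ops per core per cycle

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Theoretical peak: {peak / 1e12:.0f} teraflop/s")  # ~149 teraflop/s
```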