Cray has announced the launch of two new Cray CS-Storm accelerated cluster supercomputers designed to address artificial intelligence (AI) workloads. The two new systems, the Cray CS-Storm 500GT and the Cray CS-Storm 500NX, will provide customers with accelerator-optimised systems for large-scale machine learning and deep learning applications.

The new Cray CS-Storm systems are designed for organisations looking for the fastest path to new discoveries, a building-block approach to scalability, and the assurance of collaborating with a trusted partner with a long history of designing and deploying tightly integrated, highly scalable systems. Leveraging NVIDIA Tesla GPU accelerators, the new Cray CS-Storm systems expand Cray’s portfolio of integrated systems and will give customers a broader range of accelerated supercomputers for computational and data-intensive applications.

‘Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer,’ said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. ‘The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly-scalable and tuned infrastructure.’

The Cray CS-Storm systems provide up to 187 TOPS (tera operations per second) per node and 2,618 TOPS per rack for machine learning application performance, and up to 658 double-precision TFLOPS per rack for HPC application performance. Delivered as a fully integrated cluster supercomputer, the Cray CS-Storm systems include the Cray Programming Environment, Cray Sonexion® scale-out storage, and full cluster systems management.
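The per-node and per-rack figures above are consistent with each other. A quick sanity check, assuming (my inference, not stated in the article) that the rack-level numbers are simple multiples of the per-node figure:

```python
# Sanity check of the CS-Storm performance figures quoted above.
# Assumption (not stated in the article): rack totals are per-node
# figures multiplied by the node count, implying 14 nodes per rack.
tops_per_node = 187      # machine learning performance, per node
tops_per_rack = 2618     # machine learning performance, per rack
tflops_per_rack = 658    # double-precision HPC performance, per rack

nodes_per_rack = tops_per_rack / tops_per_node
print(nodes_per_rack)                     # 14.0
print(tflops_per_rack / nodes_per_rack)   # 47.0 double-precision TFLOPS per node
```

The implied 47 double-precision TFLOPS per node is plausible for a ten-GPU CS-Storm 500GT node, since each Tesla P100 PCIe card delivers roughly 4.7 TFLOPS of double-precision performance.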

The Cray CS-Storm 500GT includes support for up to ten NVIDIA Tesla P40 or P100 PCIe accelerators, leveraging balanced or single-root configurations for CPU-to-GPU communications. The CS-Storm 500NX includes support for eight Tesla P100 SXM2 accelerators, utilising the NVIDIA NVLink high-speed interconnect for GPU-to-GPU communications.

‘Early adopters of big data analytics and AI have learned a painful lesson as they have struggled to scale their applications, keep pace with data growth and use more sophisticated models,’ said Shahin Khan, founding partner at OrionX Research. ‘You must have the right systems from the beginning to be able to scale, otherwise inefficiencies accumulate and multiply. Expertise in large-scale system design and application optimisation is critical. That’s an area that Cray has led for decades.’

