
Cray Introduces ClusterStor E1000 storage

Cray, a Hewlett Packard Enterprise company, has unveiled its Cray ClusterStor E1000 system, an entirely new parallel storage platform for massively scalable workloads such as AI, analytics, simulation and modelling.

ClusterStor E1000 addresses the explosive growth of data from converged workloads and the need to access that data at unprecedented speed, by offering an optimal balance of storage performance, efficiency and scalability, effectively eliminating job pipeline congestion caused by I/O bottlenecks. 

The next-generation global file storage system has already been selected by the US Department of Energy (DOE) for use at the Argonne Leadership Computing Facility, Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, where the first three US exascale supercomputers will be housed. With the introduction of the ClusterStor E1000 storage system, Cray has completed the re-architecture of its end-to-end infrastructure portfolio, which encompasses Cray Shasta supercomputers, Cray Slingshot interconnect, and the Cray software platform. With Cray’s next-generation end-to-end supercomputing architecture, available for any datacenter environment, customers around the world can unleash the full potential of their data.

‘To handle the massive growth in data that corporations worldwide are dealing with in their digital transformations, a completely new approach to storage is required,’ said Peter Ungaro, president and CEO of Cray. ‘Cray’s new storage platform is a comprehensive rethinking of what high performance storage means for the Exascale Era. The intelligent software and hardware design of ClusterStor E1000 orchestrates data flow with the workflow – that’s something no other solution on the market can do.’

As the external high performance storage system for the first three US exascale systems, Cray ClusterStor E1000 will total more than 1.3 exabytes of storage across all three systems combined. The National Energy Research Scientific Computing Center (NERSC) has also selected ClusterStor E1000, which will be the industry’s first all-NVMe parallel file system at a scale of 30 petabytes of usable capacity.

‘NERSC will deploy the new ClusterStor E1000 on Perlmutter as our fast all-flash storage tier, which will be capable of over four terabytes per second write bandwidth. This architecture will support our diverse workloads and research disciplines,’ said NERSC director Sudip Dosanjh. ‘Because this file system will be the first all-NVMe file system deployed at a scale of 30 petabytes usable capacity, extensive quantitative analysis was undertaken by NERSC to determine the optimal architecture to support the workflows our researchers and scientists use across biology, environment, chemistry, nuclear physics, fusion energy, plasma physics and computing research.’

Recognising the data access challenges presented by the Exascale Era, Cray’s ClusterStor E1000 enables organisations to achieve their research missions and business objectives faster by offering:

  • Unprecedented storage performance: ClusterStor E1000 systems can deliver up to 1.6 terabytes per second and up to 50 million I/O operations per second per rack – more than double that of other parallel storage systems on the market today.

  • Maximum performance efficiency: New purpose-engineered, end-to-end PCIe 4.0 storage controllers deliver the full performance of the underlying storage media to the compute nodes. New intelligent Cray software, ClusterStor Data Services, lets customers align data flow with their specific workflow, placing application data on the right storage media (SSD pool or HDD pool) in the file system at the right time.

  • Massive scalability: An entry-level system starts at 30 gigabytes per second and less than 60 terabytes of usable capacity. Customers can start at the size dictated by their current needs and scale as those needs grow, with maximum architectural headroom for future expansion. The ClusterStor E1000 storage system can connect to any HPC compute system that supports high-speed networks such as 200Gbps Cray Slingshot, InfiniBand EDR/HDR and 100/200Gbps Ethernet.
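The per-rack figures quoted above lend themselves to a quick back-of-the-envelope sizing estimate. The sketch below (a hypothetical helper, not a Cray tool) shows how many racks would be needed, at the quoted peak rate of 1.6 terabytes per second per rack, to reach a given aggregate bandwidth such as the four terabytes per second write target cited for NERSC's Perlmutter flash tier; real rack counts depend on configuration, media mix and network.

```python
import math

# Quoted peak figures per ClusterStor E1000 rack (from the announcement);
# sustained numbers in a real deployment will vary with workload and media.
PER_RACK_TB_S = 1.6   # bandwidth per rack, terabytes per second

def racks_for_bandwidth(target_tb_s: float) -> int:
    """Minimum number of racks needed to reach a target aggregate
    bandwidth, assuming linear scaling at the quoted peak rate."""
    return math.ceil(target_tb_s / PER_RACK_TB_S)

# NERSC's Perlmutter flash tier is quoted at over 4 TB/s of write
# bandwidth, which at peak rates implies at least three racks:
print(racks_for_bandwidth(4.0))  # -> 3
```

This is only an illustration of the scaling claim; a quote against actual requirements would also have to account for capacity, IOPS and redundancy.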

‘Cray’s Shasta architecture substantially expands the company’s addressable market to include HPC simulation, AI, enterprise analytics and cloud computing. ClusterStor E1000 is an integral part of this strategy,’ said Steve Conway, senior vice president of research at Hyperion Research. ‘This new edition of the proven ClusterStor solution is designed to enable leading enterprises to consolidate their AI, HPC and High Performance Data Analysis stacks, efficiently and easily. The fast-growing contingent of enterprises that are adopting HPC now have the cost-effective option to acquire a unified Cray Shasta-Slingshot-ClusterStor infrastructure.’

 
