PRESS RELEASE

Cray announces Shasta

Cray has announced its new supercomputing system, code-named ‘Shasta’. The system will be showcased next month at the 30th anniversary of the SC conference in Dallas, Texas. Shasta is an entirely new design, set to be the technology that underpins the next era of supercomputing, characterised by exascale performance capability, new data-centric workloads, and an explosion of processor architectures. With sweeping hardware and software innovations, Shasta incorporates new Cray system software to enable modularity and extensibility, a next-generation Cray-designed system interconnect, and a software environment that provides for scalability. The National Energy Research Scientific Computing Center (NERSC) also announced today that it has chosen a Cray Shasta supercomputer for its NERSC-9 system, named ‘Perlmutter’, to be delivered in 2020. The contract, valued at $146 million, is one of the largest in Cray’s history and calls for a 32-cabinet Shasta system.

Supercomputing Redesigned

Cray supercomputer systems consistently lead in performance and efficient scaling. Shasta continues this leadership into much larger capability systems, up to exascale and beyond, to answer more questions in all fields in support of extreme-scale science, innovation and discovery. At a time when demand is rising for single systems to handle converged modelling, simulation, AI, and analytics workloads, Shasta’s data-centric design allows it to run diverse workloads and workflows on one system, all at the same time. Shasta’s hardware and software innovations tackle the bottlenecks, manageability problems, and job completion issues that emerge or are magnified as core counts grow, compute node architectures proliferate, and workflows expand to incorporate AI at scale. Shasta eliminates the distinction between clusters and supercomputers with a single new supercomputing system architecture, enabling customers to choose the computational infrastructure that best fits their mission, without trade-offs. With Shasta, customers can mix and match processor architectures in the same system (x86, Arm, GPUs), as well as choose system interconnects from Cray (Slingshot), Intel (Omni-Path) or Mellanox (InfiniBand).

‘Shasta will usher in a new era of supercomputing and represents a true game-changer at a time when artificial intelligence and analytics are being brought to bear on increasingly large and complex problems, including classic HPC modeling and simulation challenges, across an ever-broadening set of companies and industries,’ said Peter Ungaro, president and CEO of Cray. ‘It is also very exciting to announce one of the largest contracts in the history of our company was just signed with NERSC. We are honored to continue our partnership with NERSC and put Shasta to work in support of their broad mission to enable computational and data science at scale.’

‘Our scientists gather massive amounts of data from scientific instruments like telescopes and detectors that our supercomputers analyse every day,’ said Dr Sudip Dosanjh, Director of the NERSC Center at Lawrence Berkeley National Laboratory. ‘The Shasta system’s ease of use and adaptability to modern workflows and applications will allow us to broaden access to supercomputing and enable a whole new pool of users. The ability to bring this data into the supercomputer will allow us to quickly and efficiently scale and reduce overall time to discovery. We value being able to work closely with Cray to provide our feedback on this next generation system which is so critical to extending our Center’s innovation.’

‘Cray is widely seen as one of only a few HPC vendors worldwide that is capable of aggressive technology innovation at the system architecture level,’ said Steve Conway, Hyperion Research senior vice president of research. ‘Cray's Shasta architecture closely matches the wish list that leading HPC users have for the exascale era, but didn't expect to be available this soon.  This is truly a breakthrough achievement.’

Cray Slingshot Network – Designed for Data-Centric Computing at Extreme Scale

With Shasta, Cray is also announcing Slingshot, a new high-speed interconnect purpose-built for supercomputing. Slingshot advances Cray’s industry leadership in scalable network performance and adds capabilities that broaden Cray’s market reach. The Cray-developed Slingshot interconnect will offer up to 5x more bandwidth per node than the current generation and is designed for data-centric computing. Slingshot will feature Ethernet compatibility, advanced adaptive routing, first-of-a-kind congestion control, and sophisticated quality-of-service capabilities. Support for both IP-routed and remote memory operations will broaden the range of applications beyond traditional modelling and simulation. Quality-of-service and novel congestion management features will limit the impact on critical workloads from system services, I/O traffic, and co-tenant workloads, increasing realised performance and limiting performance variation. Reducing the network diameter from five hops (in the current Cray XC generation) to three will lower latency and power consumption, while improving sustained bandwidth and reliability.

‘We listened closely to our customers and dug into the future needs of AI and HPC applications as we designed Shasta,’ said Steve Scott, senior vice president and CTO of Cray. ‘Customers wanted leading-edge, scalable performance, but with lots of flexibility and easy upgradeability over time.  I’m happy to say we’ve nailed this with Shasta. The Shasta infrastructure accommodates a wide variety of processor and network options, allowing customers to run diverse workloads on a single system. And it’s got the headroom to accommodate increasingly power-hungry processors and accelerators coming in future years.  The Slingshot network tightly binds the compute and storage resources in the system, with ground-breaking congestion control to isolate applications from other network traffic, and Ethernet compatibility for datacentre and storage integration. We’re immensely excited to bring this new network to market to help accelerate our customers’ discoveries.’

Flexibility

Shasta lets customers fully realise Cray’s longtime vision of adapting supercomputing systems to workloads using optimised processing and networking. This is increasingly valuable as workloads evolve rapidly and customers grow concerned about choosing the optimal architecture. With Shasta, Cray can incorporate any processor choice — or a heterogeneous mix — with a single management and application development infrastructure. Customers can flex from single- to multi-socket processor nodes, GPUs, FPGAs and other processing options as they emerge, such as specialised AI accelerators. Customers can make late-binding decisions on compute technology without sacrificing capability, because Shasta’s design allows tailoring of system density and injection bandwidth to optimise price and performance.

Supercharged TCO

Designed from the ground up to support a decade or more of advancements in computational processing, Shasta eliminates the need for frequent, expensive upgrades and will drive exceptionally low total cost of ownership over the lifetime of the system. Shasta packaging comes in two options: a 19” air- or liquid-cooled, standard data centre rack, and a high-density, liquid-cooled rack designed to hold 64 compute blades with multiple processors per blade. Both options can scale to well over 100 cabinets. As processor wattage increases over time to boost computational performance, Shasta’s flexible cooling design eliminates the need for forklift upgrades of system infrastructure to accommodate higher-power processors. Cray designed Shasta to support processors exceeding 500 watts with highly efficient cooling, resulting in less waste and lower costs while meeting critical processor density requirements. Shasta systems are also designed to meet warm water-cooling data centre standards, such as the W3 and W4 requirements, throughout the world.
