NEWS

OSC installs new Pitzer Cluster

Ohio Supercomputer Center engineers and Dell EMC specialists are testing and preparing to deploy the centre's newest supercomputer system, the Pitzer Cluster.

‘The Pitzer Cluster follows the long-running HPC trend of higher performance in a smaller footprint, offering clients nearly as much performance as the centre’s most powerful cluster, but in less than half the space and with less power,’ said David Hudak, executive director of OSC. ‘This valuable new addition to our data centre allows OSC to continue addressing the growing computational, storage and analysis needs of our client communities in academia, science and industry.’

The liquid-cooled, Dell EMC-built Pitzer Cluster is named after Russell M Pitzer, a co-founder of the centre and emeritus professor of chemistry at The Ohio State University. The cluster is expected to reach full production status and become available to clients in November. The new system will power a wide range of research, from furthering understanding of the human genome to mapping the global spread of viruses.

‘Ohio continues to make significant investments in the Ohio Supercomputer Center to benefit higher education institutions and industry throughout the state by making additional high performance computing (HPC) services available,’ said John Carey, chancellor of the Ohio Department of Higher Education. ‘This newest supercomputer system gives researchers yet another powerful tool to accelerate innovation.’

The theoretical peak performance of the new Dell EMC-built cluster is about 1.3 petaflops, meaning it is capable of performing 1.3 quadrillion calculations per second. In other words, to match the potential of what the Pitzer Cluster could do in just one second, a single person would have to perform one calculation every second for 41,195,394.5 years. The cluster also can achieve seven petaflops of theoretical peak performance for mixed-precision artificial intelligence workloads.
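The 'one calculation per second' comparison can be sanity-checked in a few lines of Python. The sketch below assumes an average Gregorian year of 365.2425 days; the article does not state which year length was used for the quoted figure:

```python
# Back-of-envelope check of the "one calculation per second" comparison.
# Assumption: an average Gregorian year of 365.2425 days.
PEAK_FLOPS = 1.3e15  # 1.3 petaflops = 1.3 quadrillion calculations per second
SECONDS_PER_YEAR = 365.2425 * 24 * 60 * 60

years = PEAK_FLOPS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")  # roughly 41.2 million years
```

The result lands within a few dozen years of the article's 41,195,394.5, the small gap presumably reflecting a slightly different year-length convention.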

The system will feature 260 nodes, including Dell EMC PowerEdge C6420 servers with CoolIT Systems' Direct Contact Liquid Cooling (DCLC) coupled with PowerEdge R740 servers. In total, the cluster will include 528 Intel Xeon Gold 6148 processors and 64 NVIDIA Tesla V100 Tensor Core GPUs, all connected by an EDR InfiniBand network.
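These component counts line up with the quoted peak figures. The sketch below is an assumption-laden estimate, not OSC's published accounting: it assumes the Xeon Gold 6148 at its 2.4 GHz base clock sustaining 32 double-precision FLOPs per cycle per core with AVX-512, and the V100 at 7.8 TFLOPS FP64 with roughly 112 TFLOPS of mixed-precision Tensor Core throughput:

```python
# Rough sanity check of the ~1.3 PF double-precision and ~7 PF
# mixed-precision peak figures from the announcement.
# All hardware rates below are assumptions, not stated in the article.
CPUS, CORES_PER_CPU = 528, 20          # Xeon Gold 6148: 20 cores each
CPU_GHZ, DP_FLOPS_PER_CYCLE = 2.4, 32  # base clock, AVX-512 FMA throughput
GPUS = 64                              # Tesla V100 count
GPU_FP64_TFLOPS, GPU_TENSOR_TFLOPS = 7.8, 112.0

cpu_peak = CPUS * CORES_PER_CPU * CPU_GHZ * 1e9 * DP_FLOPS_PER_CYCLE  # ~0.81 PF
gpu_peak = GPUS * GPU_FP64_TFLOPS * 1e12                              # ~0.50 PF
tensor_peak = GPUS * GPU_TENSOR_TFLOPS * 1e12                         # ~7.2 PF

print(f"FP64 peak:   {(cpu_peak + gpu_peak) / 1e15:.2f} PF")
print(f"Tensor peak: {tensor_peak / 1e15:.2f} PF")
```

Under these assumptions the CPU and GPU contributions sum to about 1.31 petaflops, consistent with the announced figure.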

‘We worked with Dell EMC to create a highly efficient, dense and flexible petaflop-class system,’ said Douglas Johnson, chief systems architect at OSC. ‘We have designed the Pitzer Cluster with some unique components to complement our existing systems and boost our total centre performance to more than 2.8 petaflops.’

‘Dell EMC is thrilled to continue our great collaboration with OSC with this new dense, efficient and liquid cooled system,’ said Thierry Pellegrino, vice president, Dell EMC High Performance Computing. ‘The Pitzer Cluster brings to bear a multitude of new technologies to help OSC and its researchers more quickly and efficiently tackle immense challenges, using artificial intelligence and deep learning to ultimately drive human progress.’

To speed up data flow within the new cluster, Dell EMC used components that improve memory bandwidth on each CPU node and increase network capacity between them. The Intel processors feature six-channel integrated memory controllers, improving bandwidth by 50 per cent compared to the processors in the Owens Cluster. Mellanox 100-gigabit-per-second EDR InfiniBand provides high data throughput, low latency and a high message rate of 200 million messages per second. Additionally, Mellanox's smart In-Network Computing acceleration engine provides higher application performance and improved overall efficiency.
