NCSA to add 62 teraflops of HPC power

Installation has begun on a computational resource at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign.

The HPC facility, known as Lincoln, will deliver peak performance of 62.3 teraflops and is designed to push the envelope in the use of heterogeneous processors for scientific computing. The system is expected to be online in October, bringing NCSA's total computational resources to nearly 170 teraflops.

‘Achieving performance at the petascale, the exascale, and beyond may well depend on a heterogeneous mix of processors,’ said NCSA director Thom Dunning. ‘The use of novel architectures for scientific computing is part of ongoing work at NCSA.’

Lincoln will consist of 192 compute nodes (Dell PowerEdge 1950 III dual-socket nodes with quad-core Intel Harpertown 2.33GHz processors and 16GB of memory) and 96 Nvidia Tesla S1070 accelerator units. Each Tesla unit provides 500 gigaflops of double-precision performance and 16GB of memory.

Lincoln's InfiniBand interconnect fabric will be linked to the interconnect fabric of Abe, the 89-teraflop cluster that is currently NCSA's largest resource. This will enable certain applications to run across the entire complex, providing a peak 'Abe Lincoln' performance of 152 teraflops.

NCSA's Innovative Systems Laboratory has worked with researchers in many disciplines, from weather modelling to biomolecular simulation, to explore the use of many-core processors, field-programmable gate arrays (FPGAs), and other novel architectures as accelerators for scientific computing. The centre maintains a 16-node research cluster, called QP, which includes hardware donated by Nvidia. NCSA and its collaborators have seen significant speed-ups on a number of applications, including a chemistry direct SCF code, the NAMD molecular dynamics code, and the WRF weather forecasting and research code.

‘The NCSA GPU cluster, one of the largest of its kind, is an invaluable resource as we seek to solve new classes of weather and climate problems at petascale,’ said John Michalakes, a scientist at the National Center for Atmospheric Research and the University of Colorado. ‘Beyond scalable inter-node parallelism, we must have much faster nodes themselves, and applications able to exploit these architectures. QP and its successor Lincoln provide both, giving us a springboard to solving this next tier of earth science problems.’

‘We anticipate that even more applications will be able to take advantage of Lincoln, given the diverse characteristics of our early-adopter applications,’ said John Towns, leader of NCSA's Persistent Infrastructure Directorate.

Other University of Illinois efforts also drive heterogeneous computing. Wen-mei Hwu, the Sanders-AMD endowed chair in Electrical and Computer Engineering, leads a project to develop application algorithms, programming tools, and software for accelerators at Illinois' Institute for Advanced Computing Applications and Technologies. Hwu also leads the Nvidia Cuda Center of Excellence at Illinois; schools receiving this designation integrate the Cuda software environment into their curricula. Cuda is Nvidia's parallel programming environment, which allows programmers to run scientific codes such as WRF and NAMD on many-core GPU processors.
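To give a flavour of the programming model, the sketch below shows the kind of data-parallel kernel that Cuda codes offload to the GPU. It is a generic illustration, not taken from WRF or NAMD: the `saxpy` operation (y = a·x + y), the array size, and the 256-thread block size are all illustrative choices.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread handles one array element. This is the basic Cuda
// pattern: an inner loop from a scientific code becomes a kernel that
// thousands of lightweight threads execute in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                       // guard against threads past the end
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;           // one million elements (illustrative)
    const size_t bytes = n * sizeof(float);

    // Host-side data.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy inputs to GPU memory.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    // Copy the result back and spot-check one value: 2*1 + 2 = 4.
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    delete[] hx;
    delete[] hy;
    return 0;
}
```

The host code stages data to the device, launches the kernel across a grid of thread blocks, and copies the result back; the kernel itself contains no explicit loop over elements, as the thread grid supplies the parallel iteration.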

‘There is a whole new constellation of parallel processing architectures now entering the mainstream,’ said Hwu. ‘It is crucial that we begin making use of them to drive scientific discovery and that we prepare the next generation of researchers to harness them.’
