
Purdue launches new research supercomputing cluster

A new research supercomputing cluster at Purdue, named Hansen, will be used to help advance the work of campus research groups and international researchers.

The cluster features 108 Dell compute nodes, each with four 12-core AMD Opteron 6176 processors for 48 cores per node. Nodes have 96 or 192 gigabytes of memory, or 512 gigabytes in a large-memory option. The cluster also uses 10-gigabit Ethernet interconnects and a high-performance Lustre scratch storage system. Hansen nodes run Red Hat Enterprise Linux 5.5 and use Portable Batch System Professional 11.1.0 for resource and job management.
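
For readers unfamiliar with PBS Professional, work on a cluster like this is usually described in a short batch script handed to the scheduler with qsub. The sketch below is only an illustration of how a researcher might request one full 48-core Hansen node; the job name, queue, memory figure, walltime and solver program are placeholders, not details of Purdue's actual configuration.

    #!/bin/bash
    #PBS -N cabin_airflow                 # job name (placeholder)
    #PBS -l select=1:ncpus=48:mem=90gb    # request one whole 48-core node
    #PBS -l walltime=04:00:00             # requested run time (placeholder)
    #PBS -q workq                         # queue name is a placeholder
    cd "$PBS_O_WORKDIR"                   # start in the directory the job was submitted from
    mpirun -np 48 ./solver input.dat      # launch a hypothetical 48-way parallel solver

Once submitted, PBS queues the job and starts it on a free node when one becomes available.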

Among the users will be mechanical engineering professor Qingyan Chen and his students, who are looking at aircraft environmental control systems with an eye to making them less apt to circulate airborne contaminants, as well as making the systems less of a drain on fuel consumption.

Chen, principal investigator for the Air Transportation Center of Excellence for Airline Cabin Environment at Purdue, combines physical experiments and intensive computer modeling in his lab's research. His models consider not only the three-dimensional nature of air circulation in spaces like airline cabins but often a fourth dimension as well: changes over time.

'Our work involves the solution of very complex equations and then we do iterations, which is why it takes so much computing power,' Chen says.
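
As an illustration of the kind of equations involved (a generic computational fluid dynamics example, not a description of Chen's specific model), the spread of an airborne contaminant with concentration C carried by a cabin airflow field u is often written as an unsteady advection-diffusion equation:

    \frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C = D\,\nabla^{2} C + S

where D is a diffusion coefficient and S a source term. The time derivative is the "fourth dimension" mentioned above, and the airflow field u itself comes from iteratively solving the Navier-Stokes equations, which is where much of the computing time goes.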

Besides Chen, researchers in fields including earth and atmospheric sciences, chemistry, physics, computer science, aeronautics and astronautics, electrical and computer engineering, materials engineering, and more are using the new cluster.

This is the fourth research cluster Purdue has built in as many years for campus researchers, as well as for use on DiaGrid, a Purdue-based multi-campus distributed computing system, the National Science Foundation's TeraGrid and XSEDE networks, and the Open Science Grid. The three previous clusters have delivered more than 300 million research computing hours to researchers and their students, and the new cluster should push that total to half a billion hours by year's end. Together the four clusters deliver more than 331 teraflops at peak.
