
NASA scales Pleiades InfiniBand cluster


A scientist at the NASA Advanced Supercomputing (NAS) Division at NASA Ames Research Center in Moffett Field, California, has utilised 25,000 SGI ICE Intel Xeon processor cores on Pleiades to run a space weather simulation.

One particular area of study is magnetic reconnection, a physical process in highly conducting plasmas, such as those in the Earth's magnetosphere, in which the magnetic topology is rearranged and magnetic energy is converted to kinetic or thermal energy. This field of research is critical because such disturbances can disable wide-scale power grids, affect satellite transmissions and disrupt airline communications.

As detailed in the article Cracking the Mysteries of Space Weather by Jarrett Cohen of NASA, Earth is mostly protected from solar flares, coronal mass ejections and other space weather events by the magnetosphere, a magnetic field cocoon that surrounds it. But sometimes Earth's magnetosphere ‘cracks’ and lets space weather inside, where it can cause damage. Getting space weather details right means capturing everything from the magnetosphere, some 1.28 million kilometres across, down to subatomic-scale electrons. Doing that in one simulation would require supercomputers more than 1,000 times faster than those available today, so the research team breaks the problem into two parts. They start with local simulations that include full electron physics of regions in the magnetosphere where reconnection is known to occur, followed by global simulations.

According to Dr Homa Karimabadi, space physics group leader at the University of California, San Diego, access to up to 25,000 processor cores on Pleiades lets his group run kinetic simulations that treat each electron with its full properties, revealing how electrons allow reconnection to occur. In the local simulations, electrons are treated as individual particles. In the global simulations, electrons are treated as fluids and ions (electrically charged atoms) as particles. With Pleiades, simulations can run for five days straight, enabling many parameter studies. Among recent findings is that magnetic reconnection by itself is quite turbulent, producing vortices in the plasma that create many interacting flux ropes — twisted bundles of magnetic field. As observed by spacecraft, flux ropes can extend to several times the radius of Earth.
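To illustrate what ‘treating each electron as an individual particle’ involves, the following is a minimal sketch (not the team's actual code) of a kinetic particle push: one electron is advanced under the Lorentz force F = q(E + v × B) with a simple leapfrog scheme. The field values, normalised charge-to-mass ratio and step size are illustrative placeholders; production codes use more careful integrators (such as the Boris scheme) and billions of particles.

```python
import numpy as np

def push_particle(x, v, E, B, qm, dt):
    """Advance one particle by one step: half-kick, drift, half-kick (leapfrog).

    x, v : position and velocity (3-vectors)
    E, B : electric and magnetic field at the particle (3-vectors)
    qm   : charge-to-mass ratio q/m (normalised units here)
    """
    v = v + 0.5 * dt * qm * (E + np.cross(v, B))  # half acceleration
    x = x + dt * v                                # drift at updated velocity
    v = v + 0.5 * dt * qm * (E + np.cross(v, B))  # second half acceleration
    return x, v

# One electron gyrating in a uniform magnetic field along z, with E = 0.
# With qm = -1 and unit speed, the particle circles the origin with
# gyroradius 1 (illustrative normalised units, not physical values).
x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
qm = -1.0  # normalised electron charge-to-mass ratio

for _ in range(1000):
    x, v = push_particle(x, v, E, B, qm, dt=0.01)
```

In the global simulations the same loop would apply only to ions, while electrons are replaced by fluid equations on a grid — the hybrid trade-off the article describes.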

Maximising productivity on today’s HPC cluster platforms requires efficient data messaging. On Pleiades, every node has a direct InfiniBand connection to the rest of the network, giving Pleiades the largest InfiniBand network of any HPC system on the current Top500 list. By providing low latency, high bandwidth and a high message rate, such interconnects are replacing proprietary or lower-performance solutions as the high-speed fabric for large-scale simulations such as those conducted by NASA.

The Pleiades supercomputer is currently ranked the 7th most powerful HPC system in the world based upon the Top500 list published in November 2011.