Earthquake simulations on Titan make for safer buildings

When the next big earthquake hits the San Andreas Fault, some Californian buildings will have a better chance of withstanding the shock thanks to calculations conducted on the Titan supercomputer at the US Oak Ridge National Laboratory (ORNL).

Researchers at the Southern California Earthquake Center (SCEC) used ORNL’s Titan machine to simulate a major earthquake at frequencies up to 10 Hz. The calculations will give structural engineers the data required to predict the scale of damage to buildings caused by the next big earthquake to hit the San Andreas Fault.

Yifeng Cui, a computational scientist at the University of California, San Diego, and Kim Olsen, a geophysicist at San Diego State University, were able to perform the simulations at much higher frequencies than was previously possible thanks to the computational power of Titan, a 27-petaflop Cray XK7 machine.

In 2010, the SCEC team used the Oak Ridge Leadership Computing Facility’s 1.75-petaflop Jaguar supercomputer to simulate a magnitude 8 earthquake along the San Andreas Fault. At that time, the simulations peaked at 2 Hz, because doubling the wave frequency would have required a 16-fold increase in computational power.
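The 16-fold figure follows from how wave-propagation codes scale: doubling the frequency halves the shortest wavelength, so the spatial grid must be refined by two in all three dimensions (eight times more grid points) and the time step halved (twice as many steps). A minimal sketch of this arithmetic (the function name is ours, not from the SCEC code):

```python
def cost_multiplier(freq_ratio, spatial_dims=3):
    """Relative compute cost of raising the simulated frequency.

    Refining the grid by freq_ratio in each spatial dimension
    multiplies the point count by freq_ratio**spatial_dims, and the
    shorter time step adds one more factor of freq_ratio.
    """
    return freq_ratio ** (spatial_dims + 1)

print(cost_multiplier(2))  # doubling frequency: 16-fold cost
print(cost_multiplier(5))  # going from 2 Hz to 10 Hz: 625-fold cost
```

The same scaling shows why the jump from 2 Hz to 10 Hz was out of reach without both a larger machine and a substantially faster code.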

But on Titan in 2013, the team was able to run simulations of a magnitude 7.2 earthquake up to their goal of 10 Hz, which can better inform building design. By modifying the code, originally designed for CPUs, to run on Titan’s hybrid CPU/GPU architecture, the team improved simulation speed significantly: the simulations ran 5.2 times faster than they would have on a comparable CPU machine without GPU accelerators.

‘We redesigned the code to exploit high performance and throughput,’ said Yifeng Cui. ‘We made some changes in the communications schema and reduced the communication required between the GPUs and CPUs, and that helped speed up the code.’ The 2010 San Andreas Fault simulations took 24 hours to run on Jaguar, but the higher-frequency, higher-resolution simulations took only five and a half hours on Titan.
