
Fraunhofer Institute installs new cluster


The Fraunhofer Institute for Laser Technology (ILT) has installed a new high-performance computing cluster within its Centre for Nanophotonics. The cluster will allow researchers to simulate laser-based production processes across a wide range of time and length scales, including novel techniques in micro- and nanophotonics.

Variables for industrial processes are difficult to optimise through direct measurements in the micrometre-scale process zones owing to the tiny dimensions and very high temperatures involved. Computer simulations therefore offer an attractive alternative for those looking to optimise performance; simulations are easier to automate and often more cost-effective than experiments, while also allowing fluctuations and measurement uncertainties to be excluded or specifically taken into account. Additionally, simulations of laser-based production processes tend to be multi-scale problems in which a large region of the component has to be resolved at very high resolution. While this would be difficult to account for in an experimental set-up, it is straightforward to model in silico.

The large number of grid points required for these simulations exceeded the capacity of conventional workstations in terms of both processing time and storage space, so the institute required a purpose-built supercomputer for these applications. Funding provided by the state of North Rhine-Westphalia for the new Centre for Nanophotonics in Aachen allowed the Fraunhofer ILT to build a cluster capable of handling these multi-scale tasks. The final stage of the high-performance computing system was installed and started up in November 2010.
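A back-of-envelope calculation shows why grid counts like these overwhelm a single workstation. The figures below are purely illustrative, not the institute's actual simulation parameters:

```python
# Illustrative estimate (hypothetical numbers, not Fraunhofer ILT's
# actual parameters): memory needed to store one scalar field on a
# uniform 3D grid in double precision.

def grid_memory_gib(extent_m, resolution_m, bytes_per_point=8):
    """Memory in GiB for one double-precision field on a cubic grid."""
    points_per_axis = extent_m / resolution_m
    total_points = points_per_axis ** 3
    return total_points * bytes_per_point / 2**30

# Resolving a 1 mm process zone at 1 micrometre spacing gives
# 1,000^3 = 1e9 grid points -- roughly 7.5 GiB for a single field.
print(round(grid_memory_gib(1e-3, 1e-6), 1))  # -> 7.5
```

A realistic multi-physics simulation stores many such fields (temperature, velocity components, pressure, and so on) and needs many time steps, which is why the main-memory and disk capacities quoted below matter.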

The cluster is based on a heterogeneous architecture consisting of both multi-core processors and nodes based on the Nvidia CUDA framework – a system which allows parts of the calculations to be performed on specialised graphics processors (GPUs). This concept is particularly suitable for the massively parallel execution of frequently recurring calculation steps.
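A sketch of what "frequently recurring calculation steps" means in practice, written in plain NumPy rather than CUDA (this is an illustration of the data-parallel pattern, not the institute's code): one explicit diffusion update applies the same stencil independently at every grid point, which is exactly the structure a GPU executes with one thread per point.

```python
# Illustrative only: a data-parallel update step of the kind that maps
# well onto GPUs. On a CUDA device each grid point would be handled by
# its own thread; the NumPy array expression states the same parallelism.
import numpy as np

def diffusion_step(u, alpha=0.1):
    """One explicit 1D diffusion update; every interior point applies
    the identical stencil independently of the others."""
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
    return new

u = np.zeros(11)
u[5] = 1.0          # a single hot spot
u = diffusion_step(u)
```

Because each point's update reads only its neighbours and writes its own value, thousands of such updates can run simultaneously, which is what makes heterogeneous CPU/GPU nodes attractive for this class of simulation.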

The installed cluster system has 376 CPUs and eight graphics-processor systems providing a total of 1,920 GPU cores. The storage capacity amounts to almost 2 Tbytes of main memory and 67 Tbytes of hard disk storage, of which 20 Tbytes are on redundant interconnected drives. Data is exchanged within the cluster over a fast InfiniBand network. The theoretical peak performance approaches 10 Tflops.