
Catalysing, enabling and supporting computational science

Pekka Manninen, director of the Lumi supercomputer in Finland, describes how the facility supports everything from cosmology to biochemistry and the humanities

Please tell us a little about your background and qualifications...

I work as the director of the Lumi supercomputer at CSC, which is the national supercomputing centre in Finland. I wrote my doctoral thesis in theoretical molecular physics, but early in my research career I realised that I was more interested in the HPC systems I was using for my research than in the molecules themselves.

When a suitable vacancy for a specialist in high-performance computing opened at CSC, the move from being a supercomputer user to the other side of the table was quite easy to make. The door to the university world was not fully closed, and I am still a docent (adjunct professor) at the University of Helsinki. That was back in 2007; since then I have worked in various expert and management positions at CSC, with a stint of a few years at a supercomputer vendor before rejoining CSC. My current position started in 2019, when we were selected as the home for the EU's flagship supercomputer, Lumi.

How does your HPC centre/research centre use computing for research?

CSC – IT Center for Science – is a Finnish centre of expertise in information technology, owned by the Finnish state and higher education institutions. We provide ICT expert services for higher education institutions, research institutes, culture, public administration and enterprises. We differ from many supercomputing centres in that we do not carry out a research portfolio of our own, but rather catalyse, enable and support computational science and data-intensive computing in Finnish universities and research institutions. We do participate in research projects driven by our customers, but are positioned as a research infrastructure provider rather than a research centre.

Can you outline the type(s) of projects undertaken at your facility, and how researchers access computing resources?

As the national supercomputing centre, and the only one in Finland, we have a mandate to support all the computing and data management needs of the research carried out in the country. The active projects range from cosmology to biochemistry to the humanities. The largest domains are the life sciences and physical sciences.

You mention that life sciences and physics are the biggest users. Could you tell us a little about different user requirements? 

Obviously, they vary a lot. For instance, cases in the life sciences are often very 'data-intensive': they may employ only a limited number of processor cores but have pronounced demands for I/O capacity and bandwidth. The datasets may also be subject to higher security standards if they contain personal information. On the other hand, we find workloads that simply need a lot of CPU cores or GPUs and have trivial I/O demands. In addition to this somewhat traditional division, a demand has emerged to support streaming data (into or out of the infrastructure), and artificial intelligence use cases require new adaptations of the HPC environment.

What are the computing trends you see happening in research done in your facility?

Throughout the different domains, we see strong trends in the uptake of machine learning for the analysis and reanalysis of experimental and simulated data, and more and more interplay between machine learning and simulations in the same workflow. This convergence of large-scale data analytics, artificial intelligence and high-performance computing is expected to be very fruitful, carrying a lot of potential for scientific discovery and innovation. On the infrastructure side, we need to be in a position to enable this ambition in our platforms and computing environments. At CSC, the Lumi supercomputer was designed from the ground up to support this convergence.
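To make the idea of machine learning and simulation interacting in one workflow a little more concrete, here is a minimal, purely illustrative Python sketch: an expensive "simulation" produces training data, a cheap surrogate model is fitted to it, and the surrogate screens many candidate inputs before the most interesting ones are sent back to the simulation. This is a toy pattern, not CSC's or Lumi's actual workflow, and all names and parameters are hypothetical.

```python
# Illustrative sketch only: a toy simulation-plus-machine-learning loop of the
# kind described in the interview. All functions and numbers are hypothetical.
import numpy as np


def expensive_simulation(x: np.ndarray) -> np.ndarray:
    """Stand-in for an HPC simulation kernel (e.g. a physics solver)."""
    return np.sin(3.0 * x) + 0.1 * x**2


def train_surrogate(x: np.ndarray, y: np.ndarray, degree: int = 6) -> np.poly1d:
    """Fit a cheap surrogate model to simulation outputs (here a polynomial)."""
    coeffs = np.polyfit(x, y, deg=degree)
    return np.poly1d(coeffs)


def main() -> None:
    rng = np.random.default_rng(seed=0)

    # 1) Run the expensive simulation on a modest sample of inputs.
    x_train = rng.uniform(-2.0, 2.0, size=200)
    y_train = expensive_simulation(x_train)

    # 2) Train a surrogate on the simulated data (the "reanalysis" step).
    surrogate = train_surrogate(x_train, y_train)

    # 3) Use the surrogate to screen many more candidates cheaply, then send
    #    only the most promising ones back to the expensive simulation.
    x_candidates = rng.uniform(-2.0, 2.0, size=100_000)
    predictions = surrogate(x_candidates)
    interesting = x_candidates[np.argsort(predictions)[:5]]
    refined = expensive_simulation(interesting)

    print("surrogate-selected inputs:", np.round(interesting, 3))
    print("re-simulated values:      ", np.round(refined, 3))


if __name__ == "__main__":
    main()
```

In a real workflow of this kind, the simulation and the training step would typically run on different node types of the same system, which is one motivation for the mixed CPU/GPU design discussed later in the interview.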

What are the main computing challenges you face?

I don’t think these have changed dramatically over the years. There are always user communities for which no amount of computing power will ever be enough, because they can always get added value by running their workloads at higher accuracy and with ever more complex models. I guess what I'm saying is that the challenges are mostly related to moving data at different scales: to and from the HPC facility, from storage to memory, and from memory to the compute units.

How could you further increase the speed or efficiency of the research at CSC in the future?

With the Lumi supercomputer, we aim to provide next-generation capacities to European researchers via a 'Swiss army knife' approach. This means that we aim to offer value-added features for the wide spectrum of workloads encountered in scientific computing in the 2020s. In our view, accelerators are the key to increasing the speed and efficiency of scientific computing workloads.

Today, and for the foreseeable future, GPUs provide the sweet spot in the trade-off between general applicability and the performance gained from specialisation. We are also going to provide early quantum processing units as part of the Lumi computing environment, although it is unlikely they will be a game-changer for any algorithm or use case during the current generation of HPC systems. But they are definitely interesting, and it is likely that algorithms demonstrating a quantum advantage will emerge within the next couple of years, so offering QPUs, as they are today, in the context of an HPC environment is important.

How is Lumi different from a traditional supercomputer? What changes or compromises have to be made to support these different areas alongside traditional HPC?

To support extreme-scale deep neural network training, the ideal platform contains a lot of GPUs and very fast access to large training datasets. On the other hand, the grand challenges in simulation – in climate modelling, molecular dynamics or plasma physics, for example – need humongous floating-point capability. In addition, we wished to create a platform where different node architectures (multi-core and GPU, that is) could be harnessed simultaneously within the same workflow and accessed seamlessly within the same computing environment.

On the hardware side, this means that Lumi features extreme number-crunching capability based on GPUs, supplemented with multi-core and interactive pre- and post-processing capacity, together with performant storage systems. For example, we wanted to achieve two terabytes per second of I/O bandwidth (from the storage system to compute-node memory), which we considered the minimum necessary for a compute capability of this size. Possibly even more interesting and novel is the software environment. The HPE Cray Shasta software stack in Lumi will allow cloud-like resource provisioning and traditional batch-job-driven HPC on the same platform. In the future, we will keep exploring other 'cloud-native' concepts, such as multi-tenancy, on Lumi. If there were compromises, I am not aware of them. I think we have built the best possible scientific computing infrastructure money can buy.
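As a rough illustration of why an I/O figure like two terabytes per second matters at this scale, the short Python sketch below computes how long it would take to stage datasets of various sizes between storage and node memory. The 2 TB/s bandwidth comes from the interview; the dataset sizes are purely hypothetical assumptions, not Lumi's actual specifications.

```python
# Back-of-envelope sketch: time to move data between storage and node memory
# at a given aggregate bandwidth. Dataset sizes below are hypothetical.

TERABYTE = 1e12  # bytes


def staging_time_seconds(dataset_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to move a dataset at the given aggregate I/O bandwidth."""
    return dataset_bytes / bandwidth_bytes_per_s


if __name__ == "__main__":
    bandwidth = 2 * TERABYTE  # the 2 TB/s aggregate bandwidth cited above
    for dataset_tb in (50, 200, 1000):  # hypothetical training sets or checkpoints
        t = staging_time_seconds(dataset_tb * TERABYTE, bandwidth)
        print(f"{dataset_tb:5d} TB dataset: ~{t / 60:6.1f} minutes at 2 TB/s")
```

The point of the arithmetic is simply that at petabyte-class dataset sizes, anything much below terabytes per second would leave the compute partition idle while data is staged.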

Finally, do you have any fascinating hobbies, facts or pastimes you'd like to admit to?

I don’t know how fascinating these are… but I am a (non-electronic) gaming geek, spending a lot of time and other resources on collecting and playing board games, role-playing games and so on. In addition, I am a gym addict, and geeky about exercise and nutrition science and data. These, combined with being a dad to two pre-teens who both have lots of sporting hobbies, mean that I don’t really have to ponder what to do with my spare time!

Interview by Tim Gillett

 
