
Tachyum announces a partnership with JSC

Tachyum has announced that it has entered into an agreement with Germany’s Jülich Supercomputing Centre (JSC) to collaborate on open supercomputing projects, scientific research and innovations in artificial intelligence (AI).

JSC will test an HPC infrastructure based on the Tachyum platform, with both parties collaborating on projects in fields including biosciences, progressive materials and green energetics through AI, and their transfer into industrial technology.

Professor Thomas Lippert, director of JSC, states: ‘It is one of our guiding principles to collaborate with innovative processor developers worldwide. Prodigy was designed to largely avoid silicon underutilisation, which is what makes the processor so attractive for energy-efficient simulations, data analytics and AI applications.’

JSC has operated the first German supercomputing centre since 1987 and currently runs one of the most powerful supercomputers in Europe – JUWELS. JSC’s research and development concentrates on mathematical modelling, numerical molecular dynamics and Monte-Carlo simulations (a mathematical technique used to estimate the possible outcomes of an uncertain event). About 200 experts work at JSC, serving as contacts for all aspects of supercomputing and simulation sciences.
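As a brief aside on the Monte-Carlo technique mentioned above, the sketch below (generic Python, not code from JSC) shows the classic textbook example: estimating π by sampling random points in the unit square and counting how many land inside the quarter circle.

import random

def estimate_pi(samples=1_000_000):
    # Draw points uniformly in the unit square and count the
    # fraction that falls inside the quarter circle of radius 1.
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    # The area ratio pi/4 is approximated by inside/samples.
    return 4 * inside / samples

print(estimate_pi())  # approaches 3.14159... as samples grows

The estimate becomes more accurate as the sample count grows, which is what makes such simulations a natural fit for large-scale HPC resources.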

JSC meets the challenges arising from the development of exaflop systems – the next generation of supercomputers. As a member of the German Gauss Centre for Supercomputing, JSC has coordinated the construction of the European research infrastructure PRACE (Partnership for Advanced Computing in Europe) since 2008.

Dr Radoslav Danilak, founder and CEO of Tachyum said: ‘It was my pleasure to have had a discussion with JSC director Dr Lippert regarding one of the most powerful supercomputers in Europe when I visited JSC last November. I believe our collaboration can help put the EU in the lead position on the supercomputer and data centre markets.’

Tachyum’s Prodigy processor can run HPC applications, convolutional AI, explainable AI, general AI, bio AI and spiking neural networks, plus normal data centre workloads, on a single homogeneous processor platform, using existing standard programming models. Without Prodigy, hyperscale data centres must use a combination of disparate CPU, GPU and TPU hardware for these different workloads, creating inefficiency, expense and the complexity of separate supply and maintenance infrastructures. Using specific hardware dedicated to each type of workload (e.g. data centre, AI, HPC) results in underutilisation of hardware resources and more challenging programming, support and maintenance. Prodigy’s ability to switch seamlessly among these various workloads dramatically changes the competitive landscape and the economics of data centres.
