Japan announces AI supercomputer

The Tokyo Institute of Technology has announced plans to build a new supercomputer designed to accelerate artificial intelligence (AI) research.

The new system, known as TSUBAME3.0, is expected to deliver around 12 petaflops of double-precision performance, more than twice that of its predecessor, TSUBAME2.5.

‘Artificial intelligence is rapidly becoming a key application for supercomputing,’ said Ian Buck, vice president and general manager of Accelerated Computing at Nvidia. ‘Nvidia’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy, and transportation.’

The system will use Nvidia's latest Pascal-based Tesla P100 GPUs to reach roughly 12.2 petaflops, a figure that would place it among the world's ten fastest systems according to the latest TOP500 list, released in November 2016.

TSUBAME3.0 is designed with AI computation in mind and is expected to deliver more than 47 petaflops of AI-oriented, reduced-precision performance. When operated alongside TSUBAME2.5, which by implication contributes roughly 17 petaflops, the combined total is expected to approach 64.3 petaflops, making it Japan's highest-performing supercomputing resource for AI applications.

Once up and running this summer, TSUBAME3.0 is expected to be used for education and high-technology research at Tokyo Tech, and to be accessible to outside researchers in the private sector. It will also serve as an information infrastructure centre for leading Japanese universities.

Tokyo Tech’s Satoshi Matsuoka, a professor of computer science who is building the system, said, ‘Nvidia’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems.’

