Game and first set to Japanese cerebellum simulators

Researchers in Japan have used GPUs and the CUDA parallel programming model to create a 100,000-neuron simulation of the human cerebellum, one of the largest simulations of its kind in the world – and have tested their model by using it to teach a robot to hit a ball.

Tadashi Yamazaki at the University of Electro-Communications in Tokyo, and Jun Igarashi at Okinawa Institute of Science and Technology Graduate University in Okinawa, recently published a paper detailing how they used Nvidia GPUs to build a large-scale network model of the human cerebellum. They began this work while at the RIKEN Brain Science Institute near Tokyo, a top international centre for advanced brain research.

The two believe that modelling the cerebellum could help robots move around more easily and learn to respond autonomously to their environments – a problem that has proven daunting for conventional approaches. In turn, they hope to shed more light on how cerebellar motor control works.

According to Igarashi, their work involved modelling realistic neural brain function to enable the robot to interact with its environment.

'Our physical actions change the environment, which changes the sensory input to the brain – our sensation. The brain then processes this changed sensory information and determines what action to take. This is called the "sensorimotor loop",' Igarashi said. 'The brain must continue to choose appropriate actions on the basis of gradually-changing sensory information.'
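The sensorimotor loop Igarashi describes can be summarised in a short sketch. This is purely illustrative: the functions `sense`, `decide`, and `act` are hypothetical stand-ins, not part of the researchers' cerebellar model.

```python
# Minimal sketch of a sensorimotor loop: acting changes the environment,
# which changes the next sensory input, which drives the next action.
# All names here are hypothetical, for illustration only.

def sense(environment):
    # Read the current state of the environment (e.g. ball position).
    return environment["ball_position"]

def decide(observation, target):
    # Choose an action based on the latest sensory input.
    return "swing" if abs(observation - target) < 1.0 else "wait"

def act(action, environment):
    # Acting (or waiting) changes the environment for the next cycle.
    if action == "wait":
        environment["ball_position"] -= 1.0  # the ball keeps approaching
    return environment

def sensorimotor_loop(environment, target=0.0, steps=10):
    actions = []
    for _ in range(steps):
        observation = sense(environment)        # sensation
        action = decide(observation, target)    # decision
        environment = act(action, environment)  # action changes the world
        actions.append(action)
        if action == "swing":
            break
    return actions

# A ball starting 3 units away is tracked until it is close enough to hit.
print(sensorimotor_loop({"ball_position": 3.0}))
# → ['wait', 'wait', 'wait', 'swing']
```

The key point is the closed loop: each action alters the world, so the next decision is made on fresh sensory information rather than a fixed script.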

One of the biggest challenges in modelling neural brain function is simulation speed. Using a CPU alone, it took 98 seconds of compute time to work out how to respond to a stimulus lasting just one second. Using GPUs resulted in a 100-times speedup, giving the GPU-based system the speed needed to handle real-world tasks. To show their system in action, the researchers demonstrated their robotic system learning – in real time – how to hit a small plastic ball, thrown by a toy pitching machine, with a round plastic racket.
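A quick back-of-envelope check shows why that speedup matters, using only the figures quoted above:

```python
# CPU baseline: 98 s of compute per 1 s of simulated stimulus.
cpu_seconds_per_sim_second = 98.0
gpu_speedup = 100.0

gpu_seconds_per_sim_second = cpu_seconds_per_sim_second / gpu_speedup
print(gpu_seconds_per_sim_second)  # → 0.98
```

At roughly 0.98 seconds of compute per simulated second, the GPU version runs just under real time – the threshold that makes a live ball-hitting demonstration possible at all.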

'When the ball speed is changed the robot forgets the learned timing and relearns the new timing, rather than just repeating what it learned before,' said Yamazaki.
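The relearning behaviour Yamazaki describes can be sketched with a generic error-driven update rule. This is an assumption-laden illustration – a simple delta rule, not the researchers' cerebellar learning mechanism – showing how a timing estimate can track a change in ball speed:

```python
# Illustrative error-driven timing adaptation (a generic delta rule,
# NOT the cerebellar model from the paper).

def adapt_timing(estimate, true_timing, learning_rate=0.5, steps=20):
    """Nudge the swing-timing estimate toward the observed timing."""
    for _ in range(steps):
        error = true_timing - estimate   # mismatch between swing and ball
        estimate += learning_rate * error
    return estimate

timing = adapt_timing(0.0, true_timing=1.0)    # learn the timing for a slow ball
timing = adapt_timing(timing, true_timing=0.6) # ball speeds up: relearn, not replay
```

The second call starts from the previously learned estimate and converges on the new timing, mirroring the robot forgetting the old timing and acquiring the new one rather than repeating a stored response.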

Yamazaki believes this work could, within five years, result in robots that rely on a silicon cerebellum to 'think' – that is, to assess their environment and organise movements autonomously. 'GPUs would play an essential role because, in my opinion, GPUs are the supercomputer for the rest of us.'

