Nvidia enhances Stanford brain-power

Nvidia has collaborated with a research team at Stanford University to create the world’s largest artificial neural network built to model how the human brain learns. The network is 6.5 times bigger than the previous record-setting network, developed by Google in 2012.

Computer-based neural networks are capable of 'learning' how to model the behaviour of the brain – including recognising objects, characters, voices and audio in the same way that humans do.

Creating large-scale neural networks is extremely computationally expensive. For example, Google used approximately 1,000 CPU-based servers, or 16,000 CPU cores, to develop its neural network, which taught itself to recognise cats in a series of YouTube videos. The network included 1.7 billion parameters, virtual representations of the connections between neurons.
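To see where parameter counts like these come from: in a fully connected layer, every input neuron is linked to every output neuron, and each link carries one weight. A minimal sketch, using hypothetical layer sizes chosen only for illustration (not the actual architecture of the Google or Stanford networks):

```python
def dense_layer_params(n_in, n_out, bias=True):
    """Count parameters in one fully connected layer.

    Each of the n_in inputs connects to each of the n_out outputs
    (n_in * n_out weights), plus one bias term per output neuron.
    """
    return n_in * n_out + (n_out if bias else 0)

# Hypothetical layer widths, for illustration only.
layers = [10_000, 8_000, 8_000, 1_000]

total = sum(dense_layer_params(a, b) for a, b in zip(layers, layers[1:]))
print(f"{total:,} parameters")  # 152,017,000 parameters
```

Even this toy four-layer network has over 150 million parameters, which shows how quickly wide, densely connected layers push counts into the billions.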

In contrast, the Stanford team, led by Andrew Ng, director of the university’s artificial intelligence laboratory, created an equally large network with only three servers using Nvidia GPUs to accelerate the processing of the big data generated by the network. With 16 Nvidia GPU-accelerated servers, the team then created an 11.2 billion-parameter neural network.
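GPUs suit this workload because neural-network training is dominated by dense matrix arithmetic, which maps well onto their thousands of parallel cores. A minimal sketch of one layer's forward pass as a matrix product, here in CPU-only NumPy for illustration; GPU-accelerated frameworks run the same operation on Nvidia hardware:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the networks in the article have billions of parameters.
x = rng.standard_normal((64, 512))   # a batch of 64 input vectors
W = rng.standard_normal((512, 256))  # weight matrix (the "parameters")
b = np.zeros(256)                    # bias vector

# One dense layer with a ReLU activation: the bulk of the work is
# the 64x512 by 512x256 matrix multiply, which parallelises well.
h = np.maximum(0, x @ W + b)
print(h.shape)  # (64, 256)
```

Training repeats products like this billions of times across the whole dataset, which is why a few GPU-accelerated servers can match a large CPU cluster on this task.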

'Delivering significantly higher levels of computational performance than CPUs, GPU accelerators bring large-scale neural network modelling to the masses,' said Sumit Gupta, general manager of the Tesla accelerated computing business unit at Nvidia. 'Any researcher or company can now use machine learning to solve all kinds of real-life problems with just a few GPU-accelerated servers.'

