Data: centre stage in the data centre
Traditionally, high-performance computing has been about number-crunching, but over the past few years data-centric computing has assumed increasing importance. This growing interest is reflected in the technologies and services on display at SC15, as noted in this third news report from the show, which supplements those already covered in the Show Preview.
Continuum Analytics, developer of Anaconda, the modern open-source analytics platform powered by Python, is announcing the availability of Anaconda on AMD’s Accelerated Processing Units (APUs), giving Anaconda users a simple way to exploit AMD’s latest technology for the most demanding data science applications with the simplicity of Python.
Communication overhead between CPU and GPU has previously made GPU acceleration challenging for data-intensive workloads. Such workloads come up often in analysis for financial trading, in implementing graph analytics algorithms, and in other complex data-processing workflows.
Travis Oliphant, Continuum Analytics CEO and co-founder, said: ‘Up until now, high-performance computing has been largely inaccessible, requiring knowledge of low-level programming languages and workarounds. With this collaboration, Anaconda helps democratise high-performance computing for Big Data by bringing AMD's APU to Python, the fastest growing open data science language.’
Numba, Continuum’s just-in-time (JIT) Python compiler, now supports the AMD APU’s Heterogeneous System Architecture. The APU combines the specialised capabilities of both the central processing unit (CPU) and graphics processing unit (GPU) to deliver increased performance and capabilities, expanding throughput by sharing data in memory and eliminating data movement. Typical applications that benefit from the increased throughput include geospatial analytics and facial recognition, along with business applications that involve machine learning, deep learning and artificial intelligence.
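As an illustrative sketch (not Continuum’s own code), Numba is typically used by decorating an ordinary numeric Python function so it is compiled just-in-time on first call. The example below shows the kind of loop-heavy kernel that benefits; the `@jit` decorator is left in a comment so the snippet also runs where Numba is not installed.

```python
# Hypothetical example of a numeric kernel of the sort Numba accelerates.
# With Numba installed, you would uncomment the import and the decorator:
#
#   from numba import jit
#
#   @jit(nopython=True)   # compile to machine code, bypassing the interpreter
def moving_average(values, window):
    """Simple moving average over a list of floats - a typical analytics kernel."""
    result = []
    for i in range(len(values) - window + 1):
        total = 0.0
        for j in range(window):
            total += values[i + j]
        result.append(total / window)
    return result

print(moving_average([1.0, 2.0, 3.0, 4.0, 5.0], 2))  # -> [1.5, 2.5, 3.5, 4.5]
```

Because Numba compiles the unmodified Python function, the same source can run interpreted on a laptop and JIT-compiled on supported hardware, which is the portability the article describes.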
The application of computer vision technology to analyse and interpret live streamed video as it is broadcast could bring about a convergence between AI and Exascale computing. The system, which makes live online content as searchable and categorised as the static web, will be demonstrated live on Echostreams’ booth.
The technology results from a collaboration between Dextro, which develops technologies for advanced computer vision, machine learning, and data analytics; Orange Silicon Valley, an innovation subsidiary of global telecommunications operator Orange; and Echostreams, a US based white-box OEM/ODM solution provider.
Dextro launched its product, Stream, in April 2015. But the new Exascale system to be demonstrated at SC15 condenses what previously took dozens of servers into a single unit, while making the learning speed of the algorithms more than five times faster.
David Luan, CEO of Dextro, said: ‘Every minute, an incredible amount of video is uploaded for the world to see, but until Stream, there was no easy way to sift through all the information to find personally interesting or relevant content. This requires a ton of processing power, not only for the baseline analysis of high resolution images, but also in training the system to get better at identifying and categorising content in real-time.’ The speed-up provided by the Exascale platform ‘is a revolutionary feat for power-intensive applications like ours,’ he said.
The new system provides extreme computational density, with 20 Nvidia GPUs in a single PCIe root complex, giving a density exceeding 60,000 CUDA cores in a single server, together with innovative thermal engineering that avoids resorting to liquid cooling.
According to Jerome Ladouar, VP infrastructure, technologies and engineering at Orange, the technology could bring about a convergence between AI and Exascale computing. He said: ‘It is now possible to run Deep Learning over massive volumes of video data at high speed and also perform contextual analysis over several hundred streams in real time. With our partners, we have prototyped a supercomputer in a box at the edge of our network.’