NEWS

Google launches GPUs in the cloud

The Google Cloud Platform has received a performance boost as Google launches a public beta allowing users to deploy NVIDIA Tesla K80 GPUs.

GPUs can be particularly useful for highly parallel workloads, and Google is targeting application areas such as machine learning in the hope that more customers will begin using the cloud platform for compute-intensive workloads.

Google is supporting machine learning workloads through popular machine learning and deep learning frameworks such as TensorFlow, Theano, Torch, MXNet and Caffe, as well as NVIDIA's CUDA software for building GPU-accelerated applications.
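As a rough illustration of what that framework support looks like in practice, the hedged sketch below uses the TensorFlow 1.x API of the time to pin a matrix multiplication to the first GPU attached to a cloud instance; the matrix sizes and session settings are illustrative assumptions rather than anything prescribed by Google.

# Hedged sketch (TensorFlow 1.x, as current at the time of the beta):
# pin a large matrix multiplication to the first attached GPU.
import tensorflow as tf

with tf.device('/gpu:0'):                 # request the NVIDIA GPU on the VM
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    product = tf.matmul(a, b)             # executed by CUDA kernels on the GPU
    total = tf.reduce_sum(product)

# log_device_placement prints which device each operation ran on,
# confirming the work landed on the GPU rather than the CPU.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(total))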

The new Google Cloud GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML), which aims to slash the time it takes to train machine learning models at scale using the TensorFlow framework.
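By way of illustration only, the sketch below shows one way a Cloud ML training job requesting GPU capacity could be submitted through the Google API Python client; the project, bucket and trainer package names are placeholder assumptions rather than values taken from Google's documentation.

# Hedged sketch: submit a Cloud ML training job on the BASIC_GPU scale tier
# using the google-api-python-client library. All identifiers below
# (project, bucket, package and module names) are illustrative placeholders.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

job_spec = {
    'jobId': 'gpu_training_demo_001',
    'trainingInput': {
        'scaleTier': 'BASIC_GPU',                        # one worker with a GPU
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-east1',
    },
}

request = ml.projects().jobs().create(
    parent='projects/my-project', body=job_spec)
print(request.execute())                                 # returns the queued job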

Cloud ML is a fully managed service that provides an end-to-end training and prediction workflow, integrating with cloud computing tools such as Google Cloud Dataflow, Google BigQuery, Google Cloud Storage and Google Cloud Datalab.

Google is also offering a Cloud ML Bootcamp to teach new users how to supercharge performance using GPUs in the cloud. More information and documentation are available on the Google Cloud website.

However, it is not just machine learning workflows that can benefit from GPU acceleration. The company also notes that GPUs can accelerate many other workloads, including video and image transcoding, seismic analysis, molecular modelling, genomics, computational finance, simulations, high-performance data analysis, computational chemistry, fluid dynamics, and visualisation.
