OCF and Run:AI partnership aims to improve the management of AI workloads

OCF, a high-performance computing (HPC), storage, cloud and AI integrator, has partnered with Run:AI, a cloud-native compute management platform for accelerating AI.

Using Run:AI, OCF customers will be able to make more efficient and effective use of compute resources for building, training and executing deep learning workloads as well as HPC applications.

Vibin Vijay, AI product specialist at OCF, says: ‘Run:AI provides a smart way of organising your GPUs for machine learning and AI workloads, something that has never been accomplished before. This will enable data science teams to run more experiments on their AI infrastructure and ensure that computing resources are optimally shared throughout an organisation.’

Run:AI’s management tools ensure maximum utilisation of GPU resources. With pooled GPUs, elastic allocation, fractional GPUs and dynamic quotas, data scientists get access to all the computing power that they need, when they need it throughout the AI lifecycle — from building to training to running inference jobs in production. Run:AI is built on Kubernetes, the de facto standard for deep learning development.
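To give a sense of how fractional GPU allocation can look on a Kubernetes cluster, the sketch below submits a training pod that asks a Run:AI-style scheduler for half a GPU via a pod annotation. The annotation name, scheduler name, namespace, container image and script are illustrative assumptions rather than details from this announcement; Run:AI's own documentation defines the exact interface.

    from kubernetes import client, config

    # Load the local kubeconfig (assumes kubectl access to the cluster).
    config.load_kube_config()

    # Define a training pod that asks the scheduler for half a GPU.
    # "gpu-fraction" and "runai-scheduler" are assumed names for illustration.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name="train-job",
            annotations={"gpu-fraction": "0.5"},  # request 50% of one GPU
        ),
        spec=client.V1PodSpec(
            scheduler_name="runai-scheduler",  # hand scheduling to the GPU-aware scheduler
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="tensorflow/tensorflow:latest-gpu",  # illustrative image
                    command=["python", "train.py"],            # hypothetical training script
                )
            ],
        ),
    )

    # Submit the pod to an example namespace representing one team's quota.
    client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)

In this sketch, the namespace stands in for a team-level quota, so several data science teams can share a pooled set of GPUs while the scheduler enforces fair allocation.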

Omri Geller, Run:AI’s co-founder and CEO, comments: ‘As deep learning becomes an integral part of an increasing number of industries, Run:AI’s platform will give OCF’s customers the super-charge they need to build, train and run the AI models of today and tomorrow.’
