
LIVE WEBINAR: GPU-Accelerated Scientific Machine Learning with Large Data Sets

Tuesday 15 June @ 2pm (UK)

GPU-accelerated data science can shorten the time to publication for data-driven research. A large proportion of the tools used for scientific analysis and machine learning (e.g. pandas, scikit-learn, NetworkX, or Spark) have GPU-accelerated variants. We'll discuss how, by changing just a few lines of code, research teams can accelerate their experiments by several orders of magnitude, significantly reducing hardware costs and data processing/training times.
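As a minimal sketch of the kind of change involved (not taken from the webinar; the input file and column names are hypothetical), RAPIDS cuDF mirrors much of the pandas API, so an existing workload can often be moved to the GPU by swapping a single import:

    # Hypothetical example: swapping pandas for cuDF. Only the import
    # changes; the rest of the workload runs unmodified on the GPU.

    # import pandas as pd          # original CPU version
    import cudf as pd              # GPU-accelerated drop-in (RAPIDS cuDF)

    df = pd.read_csv("measurements.csv")              # hypothetical input file
    summary = df.groupby("cell_type")["area"].mean()  # group-by runs on the GPU
    print(summary)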

We will show a demo of how to dramatically accelerate a histopathological image pre-processing pipeline (tiling and thresholding) using RAPIDS. In addition, we will show how to take advantage of GPU-accelerated graph analytics tools to understand the role of certain cell types in the tumor microenvironment.
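For a rough idea of what such a pipeline can look like (a hedged sketch assuming RAPIDS cuCIM and CuPy are installed; the slide file, tile size, and choice of Otsu thresholding are illustrative, not the demo's actual code), cuCIM exposes a scikit-image-like API that runs on the GPU:

    import cupy as cp
    import numpy as np
    from cucim import CuImage
    from cucim.skimage.color import rgb2gray
    from cucim.skimage.filters import threshold_otsu

    slide = CuImage("slide.tif")   # hypothetical whole-slide image
    tile_size = 512

    # Read one tile at full resolution (level 0), then move it to GPU memory.
    tile = slide.read_region(location=(0, 0), size=(tile_size, tile_size), level=0)
    tile_gpu = cp.asarray(np.asarray(tile))

    gray = rgb2gray(tile_gpu)      # grayscale conversion on the GPU
    thresh = threshold_otsu(gray)  # Otsu threshold computed on the GPU
    tissue_mask = gray < thresh    # darker-than-threshold pixels ~ tissue
    print("tissue fraction:", float(tissue_mask.mean()))

For the graph analytics part, cuGraph plays a similar role relative to NetworkX. Another hedged sketch, assuming a pre-computed cell-adjacency edge list in a CSV (both the file and the choice of PageRank as the centrality measure are illustrative):

    import cudf
    import cugraph

    # Hypothetical edge list: one row per pair of neighboring cells.
    edges = cudf.read_csv("cell_graph.csv", names=["src", "dst"], header=None)

    G = cugraph.Graph()
    G.from_cudf_edgelist(edges, source="src", destination="dst")

    # Rank cells by connectivity within the tissue graph, entirely on the GPU.
    pagerank_scores = cugraph.pagerank(G)
    print(pagerank_scores.sort_values("pagerank", ascending=False).head())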
 
We will discuss the common problems that hinder Data-Driven Research.
 
It's Time-Consuming: Building, training, and iterating on models takes substantial time. As scientific data sets continue to grow, CPU computational power becomes a major bottleneck.
 
It's Costly: Large-scale CPU infrastructure is expensive to operate for scientific data workloads, and as datasets grow, adding more CPU infrastructure only drives costs higher.
 
It's Frustrating: Productionizing large-scale data processing operations is arduous. It often involves refactoring and hand-offs between teams, which adds cycle time.
 
Join this session to learn more about the key steps for exploring and utilizing GPU-Accelerated Data Science and Big Data tools.
 
Additional examples and resources from NVIDIA for the scientific community:
 
NASA and NVIDIA Collaborate to Accelerate Scientific Data Science Use Cases
Advancing Medicine and Research
