These new GPUs are AMD’s latest attempt to compete in high-end GPU markets such as gaming and machine learning, and this new generation is designed to address data-intensive workloads. While the GPUs have not been designed solely for machine learning, they include specific upgrades, such as HBM2 memory technology, that are useful in data-intensive applications such as machine learning and AI.
In addition to HBM2 memory, the GPUs will feature a new compute engine built on flexible compute units that can natively process 8-bit, 16-bit, 32-bit or 64-bit operations in each clock cycle. These compute units are optimised to attain significantly higher frequencies than previous generations, and their support for variable datatypes makes the architecture highly versatile across workloads.
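The appeal of processing narrower datatypes in each lane is that two 16-bit values can travel through one 32-bit path at once, doubling throughput for workloads that tolerate lower precision. As a rough software illustration of the idea (a conceptual sketch only, not AMD's actual hardware design; the function names `packed_add16` and `pack16` are our own), a SWAR-style addition shows how two independent 16-bit lanes can be added inside a single 32-bit word without carries leaking between lanes:

```python
# Conceptual sketch of packed 16-bit maths: one 32-bit word holds two
# independent 16-bit values, and a SWAR (SIMD-within-a-register) add
# keeps each lane's carry from crossing into its neighbour.

LANE_MASK = 0x7FFF7FFF  # every bit except the top bit of each 16-bit lane
TOP_BITS = 0x80008000   # the top bit of each 16-bit lane


def pack16(hi: int, lo: int) -> int:
    """Pack two 16-bit values into one 32-bit word."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)


def packed_add16(a: int, b: int) -> int:
    """Add two pairs of 16-bit values, each lane wrapping modulo 2**16.

    The top bit of each lane is masked off before the add (so a carry
    out of the low lane cannot reach the high lane), then restored
    afterwards with an XOR.
    """
    partial = (a & LANE_MASK) + (b & LANE_MASK)
    return (partial ^ ((a ^ b) & TOP_BITS)) & 0xFFFFFFFF


# Both lanes are added in a single pass over the 32-bit word:
result = packed_add16(pack16(5, 7), pack16(3, 9))   # lanes: 5+3, 7+9
assert result == pack16(8, 16)

# A carry out of the low lane wraps within that lane only:
result = packed_add16(pack16(1, 0xFFFF), pack16(0, 1))
assert result == pack16(1, 0)
```

Real hardware does this with dedicated datapaths rather than masking tricks, but the sketch captures why one flexible 32-bit unit can deliver twice the 16-bit (or four times the 8-bit) operation rate per cycle.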
AMD’s initial announcement refers to data-intensive workloads becoming the norm for many applications, increasing the need for powerful accelerator technologies. Machine learning and AI are a prime example, given how rapidly the market has developed in just the last few years. While relatively new, the artificial intelligence market has been predicted to grow to approximately $16 billion by 2022.
This huge potential growth market is very attractive to tech companies that are now scrambling to demonstrate that their latest technologies can compete within the accelerator market for machine learning.
Nvidia, Intel and even IBM/Xilinx are all competing to deliver new products that can get the most performance from machine learning and deep learning applications. Nvidia has released the P100, Intel has its own Xeon Phi-based processors specifically optimised for deep learning, and Xilinx is working on its own solutions while also collaborating with IBM and OpenPOWER. For example, at the end of 2016 the Chinese web services company Baidu announced that it would be adopting Xilinx technology for its own machine learning workloads.
Now AMD is joining the race with its own architecture that has been optimised specifically for data-intensive workloads.
Raja Koduri, senior vice president and chief architect, Radeon Technologies Group, AMD said: ‘It is incredible to see GPUs being used to solve gigabyte-scale data problems in gaming to exabyte-scale data problems in machine intelligence. We designed the Vega architecture to build on this ability, with the flexibility to address the extraordinary breadth of problems GPUs will be solving not only today but also five years from now. Our high-bandwidth cache is a pivotal disruption that has the potential to impact the whole GPU market.’
AMD will initially offer three GPUs based on this new Vega architecture, which are expected early in 2017.