AMD’s Deal With Google Promises To Expand Adoption Of AMD GPUs In The Cloud

AMD has announced that Google Compute Engine and the Google Cloud Machine Learning platform will use AMD’s Radeon GPU (graphics processing unit) technology to accelerate computationally intensive workloads such as “complex medical and financial simulations, seismic and subsurface exploration, machine learning, video rendering and transcoding, and scientific analysis.” The AMD FirePro™ S9300 x2 Server GPU’s ability to handle massively parallel calculations illustrates AMD’s progress in GPU-based hardware. AMD’s deal with Google is particularly significant because, to date, most cloud vendors have used Nvidia GPUs for computationally intensive use cases such as deep learning. As of 2017, Google will use Nvidia’s Tesla K80 and P100 GPUs alongside AMD’s FirePro S9300 x2.

Amazon Web Services, Microsoft Azure and the IBM Cloud, for example, use Nvidia GPUs. Google’s decision to use a combination of Nvidia and AMD GPUs in its data centers allows Google Cloud to avoid vendor lock-in and to gain greater negotiating power in the procurement process. Google is also the second major cloud vendor to adopt AMD’s GPU technology, following Alibaba’s October 2016 decision to use AMD’s Radeon Pro chips in the servers powering its cloud infrastructure. The deal with Google, however, represents a far more significant milestone in AMD’s bid to restore market share lost to Intel and Nvidia by carving out a niche within hyperscale cloud data centers. AMD will hope to capitalize on the partnership by expanding it to the likes of Azure and Amazon Web Services. Meanwhile, AMD plans to release GPUs based on its forthcoming Vega architecture, which improves upon the existing Polaris architecture, complementing its recently announced Zen CPUs.
