Scalable accelerator for machine and deep learning inference applications.
Based on the “Fiji” graphics architecture, built to tackle machine and deep learning applications
64 compute units to accelerate demanding workloads
Up to 8.2 TFLOPS of peak FP32 and FP16 compute performance to speed up compute-intensive machine intelligence workloads
47 GFLOPS per watt of peak FP16/FP32 performance, offering outstanding performance-per-watt for machine intelligence and deep learning inference applications
State-of-the-art memory technology: 4GB of High Bandwidth Memory (HBM)
Passively cooled, 175W TDP board power – designed to fit in most standard server designs
MxGPU for Virtualized Compute Workloads – drive greater utilization and capacity in the data center
The ROCm software platform provides open-source hyperscale and HPC-class solutions
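The quoted peak-compute and efficiency figures can be sanity-checked from the architecture parameters. A minimal sketch, assuming values not stated on this sheet: 64 SIMD lanes per compute unit, 2 FLOPs per cycle via fused multiply-add, and an approximate 1.0 GHz engine clock:

```python
# Sanity check of the quoted peak FP32/FP16 throughput and efficiency.
# Assumed parameters (not from this sheet): 64 SIMD lanes per CU,
# 2 FLOPs per cycle via fused multiply-add, ~1.0 GHz engine clock.
compute_units = 64
lanes_per_cu = 64
flops_per_lane_per_cycle = 2      # one FMA counts as two FLOPs
clock_hz = 1.0e9                  # assumed engine clock

peak_flops = compute_units * lanes_per_cu * flops_per_lane_per_cycle * clock_hz
print(f"peak: {peak_flops / 1e12:.3f} TFLOPS")   # ~8.192, quoted as "up to 8.2"

board_power_w = 175
print(f"efficiency: {peak_flops / board_power_w / 1e9:.1f} GFLOPS/W")  # ~46.8, quoted as 47
```

Both results line up with the sheet's "up to 8.2 TFLOPS" and "47 GFLOPS/watt" figures within rounding.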
Inference for Deep Learning
The ROCm software platform provides an open-source hyperscale platform
Open source Linux drivers, HCC compiler, tools and libraries for full control from the metal forward
MIOpen libraries optimized for deep learning frameworks
Large BAR support for multi-GPU peer-to-peer communication
MxGPU SR-IOV hardware virtualization for optimized system utilization
Support for open industry standards across multiple architectures and industry-standard interconnect technologies
HPC Heterogeneous Compute
The ROCm software platform provides an open-source HPC-class platform
Open source Linux drivers, HCC compiler, tools and libraries for full control from the metal forward
MxGPU SR-IOV hardware virtualization for optimized system utilization
Support for open industry standards across multiple architectures and industry-standard interconnect technologies