Learn about the AMD Instinct MI100 accelerator, the world's fastest HPC GPU accelerator for scientific workloads

AMD has announced the new AMD Instinct MI100 accelerator – the world's fastest HPC GPU and the first x86 server GPU to surpass the 10 teraflops (FP64) performance barrier. The MI100, combined with AMD EPYC CPUs and the ROCm 4.0 open software platform, is designed to propel new discoveries ahead of the exascale era. It is also supported by new accelerated compute platforms from Dell, Gigabyte, HPE, and Supermicro.

Built on the new AMD CDNA architecture, the AMD Instinct MI100 GPU enables a new class of accelerated systems for HPC and AI when paired with 2nd Gen AMD EPYC processors. The MI100 offers up to 11.5 TFLOPS of peak FP64 performance for HPC and up to 46.1 TFLOPS peak FP32 Matrix performance for AI and machine learning workloads. With new AMD Matrix Core technology, the MI100 also delivers a nearly 7x boost in FP16 theoretical peak floating point performance for AI training workloads compared to AMD’s prior generation accelerators.
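As a sanity check, the quoted peak-throughput figures follow directly from the MI100's shader configuration. The sketch below assumes a peak engine clock of roughly 1,502 MHz (the article does not state the clock); the 7,680 stream processors and half-rate vector FP64 are consistent with the specifications further down.

```python
# Sketch: deriving the MI100's quoted peak throughput from its shader
# configuration. The ~1,502 MHz boost clock is an assumption (not stated
# in the article); 7,680 stream processors and half-rate vector FP64
# match the spec table.
STREAM_PROCESSORS = 7_680
BOOST_CLOCK_HZ = 1.502e9   # assumed peak engine clock
FLOPS_PER_FMA = 2          # one fused multiply-add = 2 floating-point ops

peak_fp32_tflops = STREAM_PROCESSORS * FLOPS_PER_FMA * BOOST_CLOCK_HZ / 1e12
peak_fp64_tflops = peak_fp32_tflops / 2   # vector FP64 runs at half rate

print(f"Peak FP32: {peak_fp32_tflops:.1f} TFLOPS")  # ~23.1 TFLOPS
print(f"Peak FP64: {peak_fp64_tflops:.1f} TFLOPS")  # ~11.5 TFLOPS
```

Both results line up with the 23.1 TFLOPS FP32 and 11.5 TFLOPS FP64 peaks quoted above; the Matrix Core figures use separate datapaths and are not captured by this simple vector-rate calculation.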

Key capabilities and features of the AMD Instinct MI100 accelerator include: 

  • All-New AMD CDNA Architecture – Engineered to power AMD GPUs for the exascale era and at the heart of the MI100 accelerator, the AMD CDNA architecture offers exceptional performance and power efficiency.
  • Leading FP64 and FP32 Performance for HPC Workloads – Delivers industry-leading 11.5 TFLOPS peak FP64 performance and 23.1 TFLOPS peak FP32 performance, enabling scientists and researchers across the globe to accelerate discoveries in industries including life sciences, energy, finance, academics, government, defense, and more.
  • All-New Matrix Core Technology for HPC and AI – Supercharged performance for a full range of single- and mixed-precision matrix operations, such as FP32, FP16, bFloat16, INT8, and INT4, engineered to boost the convergence of HPC and AI.
  • 2nd Gen AMD Infinity Fabric™ Technology – The Instinct MI100 provides ~2x the peer-to-peer (P2P) peak I/O bandwidth of PCIe 4.0, with up to 340 GB/s of aggregate bandwidth per card across three AMD Infinity Fabric Links. In a server, MI100 GPUs can be configured in up to two fully connected quad-GPU hives, each providing up to 552 GB/s of P2P I/O bandwidth for fast data sharing.
  • Ultra-Fast HBM2 Memory – Features 32 GB of high-bandwidth HBM2 memory at a clock rate of 1.2 GHz, delivering an ultra-high 1.23 TB/s of memory bandwidth to support large data sets and help eliminate bottlenecks in moving data in and out of memory.
  • Support for the Industry's Latest PCIe Gen 4.0 – Designed with the latest PCIe Gen 4.0 technology support, providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU.
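The bandwidth figures in the last two bullets can also be reproduced from first principles. The sketch below assumes a 4,096-bit HBM2 bus (four stacks with 1,024-bit interfaces each, a detail not stated in the article); the 1.2 GHz memory clock and the PCIe Gen 4 x16 link come from the text above.

```python
# Sketch: reproducing the quoted bandwidth figures. The 4,096-bit HBM2
# bus width (assumed: 4 stacks x 1,024-bit interfaces) is not stated in
# the article; the 1.2 GHz clock and PCIe Gen 4 x16 link are.
HBM2_BUS_BITS = 4_096        # assumed total memory interface width
HBM2_CLOCK_HZ = 1.2e9        # memory clock, from the article
DDR_FACTOR = 2               # HBM2 transfers data on both clock edges

hbm2_bw_tbs = HBM2_BUS_BITS / 8 * HBM2_CLOCK_HZ * DDR_FACTOR / 1e12
print(f"HBM2 bandwidth: {hbm2_bw_tbs:.2f} TB/s")          # ~1.23 TB/s

# PCIe 4.0 x16: 16 GT/s per lane ~ 2 GB/s raw per lane per direction
pcie_bw_gbs = 16 * 2 * 2     # lanes x GB/s per lane x both directions
print(f"PCIe 4.0 x16 bidirectional: {pcie_bw_gbs} GB/s")  # 64 GB/s
```

Note that the 64 GB/s PCIe figure is the raw bidirectional peak; usable bandwidth is slightly lower after 128b/130b encoding and protocol overhead.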

MI100 Specifications

Compute Units: 120
Stream Processors: 7,680
FP64 TFLOPS (Peak): Up to 11.5
FP32 TFLOPS (Peak): Up to 23.1
FP32 Matrix TFLOPS (Peak): Up to 46.1
FP16/FP16 Matrix TFLOPS (Peak): Up to 184.6
INT4 | INT8 TOPS (Peak): Up to 184.6
bFloat16 TFLOPS (Peak): Up to 92.3
HBM2 Memory: 32 GB
Memory Bandwidth: Up to 1.23 TB/s

You can find out more on AMD's Instinct Accelerators page.
