High Bandwidth Memory

High Bandwidth Memory (HBM) is a memory interface developed by AMD together with SK Hynix that connects memory to a graphics processor or main processor of a computer at a high transfer rate.

Technology

Cross-section through a graphics card with High Bandwidth Memory: each HBM stack is connected to the HBM controller in the GPU via 1024 lines.
Type     Release    Clock rate (GHz)   Capacity per stack (GiB)   Data rate per stack (GB/s)
HBM 1    Oct 2013   0.5                4                          128
HBM 2    Jan 2016   1.0–1.2            8                          256–307
HBM 2E   Aug 2019   1.8                16                         460

HBM 1

HBM 1 was adopted by the US standards organization JEDEC as an industry standard in October 2013.

HBM is a CPU/GPU memory interface in which DRAM dies are stacked vertically. In the first generation, a stack consists of four DRAM dies of 1 GiB each. Via a so-called "interposer", these stacks establish a faster connection to the CPU or GPU than the GDDR5 memory that was previously the standard. The bus width is 1024 data lines per stack, for a total of 4096 bits across four stacks. The memory is clocked at 500 MHz, and data is transferred on both the rising and falling clock edges (DDR). Thanks to the wide bus, the data throughput reaches half a terabyte per second. Up to four such HBM stacks are mounted on the interposer together with a CPU or GPU, and this unit is then attached to a circuit board.
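
As a quick plausibility check, the quoted throughput follows directly from these figures. The short Python sketch below merely redoes the arithmetic (bus width × clock rate × 2 edges); the variable names are illustrative, not part of any standard:

    # HBM 1 throughput, recomputed from the figures above
    BUS_WIDTH_BITS = 1024   # data lines per stack
    CLOCK_HZ = 500e6        # 500 MHz memory clock
    DDR_FACTOR = 2          # transfers on rising and falling edges
    STACKS = 4

    per_stack = BUS_WIDTH_BITS * CLOCK_HZ * DDR_FACTOR / 8   # bytes per second
    print(per_stack / 1e9)            # 128.0 -> GB/s per stack
    print(per_stack * STACKS / 1e9)   # 512.0 -> GB/s for four stacks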

Although these HBM stacks are not physically integrated into the CPU or GPU, the interposer connects them over extremely short signal paths, so that the properties of HBM hardly differ from those of RAM integrated directly on the chip.

HBM memory also has a lower power consumption than GDDR5. AMD states that HBM offers more than three times the memory bandwidth per watt.

Dimensions of HBM 1

HBM requires significantly less board area than GDDR5, which can be an advantage when building notebooks or tablets with high graphics performance. The very close placement next to the graphics processor also allows the graphics chip and memory to be covered by a single, relatively small heat sink.

HBM 2

On January 12, 2016, HBM 2 was accepted by JEDEC as JESD235a.

HBM 2 allows up to eight dies to be stacked on top of one another and doubles the memory throughput per stack to 256 GB/s, or up to 307 GB/s at the top clock rate. The capacity of a stack can be between 1 and 8 GiB, so a maximum expansion of 32 GiB with four stacks is possible. Both SK Hynix and Samsung have launched 4 GiB stacks.
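
The same arithmetic reproduces the HBM 2 figures from the table above; this is again only a sketch, using the 1.0–1.2 GHz clock range as given:

    # HBM 2 per-stack data rate across the specified clock range
    BUS_WIDTH_BITS = 1024
    DDR_FACTOR = 2

    for clock_hz in (1.0e9, 1.2e9):
        gb_per_s = BUS_WIDTH_BITS * clock_hz * DDR_FACTOR / 8 / 1e9
        print(f"{clock_hz / 1e9:.1f} GHz -> {gb_per_s:.0f} GB/s per stack")  # 256 / 307

    print(4 * 8, "GiB maximum")  # four stacks of 8 GiB each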

HBM 2 has been used in Nvidia Tesla accelerators since 2016, in Nvidia Quadro graphics cards since 2017, and in the AMD Radeon Vega series since mid-2017.

HBM 2E

Introduced on August 13, 2019, HBM 2E doubles the maximum capacity per stack to 16 GiB; the data rate increases by 50 percent.
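
The 50 percent figure can be checked against the table with the same formula (sketch only):

    # HBM 2E: 1.8 GHz is 50 % above HBM 2's 1.2 GHz upper clock
    BUS_WIDTH_BITS = 1024
    CLOCK_HZ = 1.8e9
    DDR_FACTOR = 2

    gb_per_s = BUS_WIDTH_BITS * CLOCK_HZ * DDR_FACTOR / 8 / 1e9
    print(f"{gb_per_s:.0f} GB/s per stack")           # ~461; the table lists 460
    print(f"{1.8 / 1.2 - 1:.0%} over HBM 2's clock")  # 50%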

History

AMD Fiji graphics processor: the package substrate carries several small SMDs and the silicon interposer, on top of which sit the Fiji GPU and four HBM stacks.

HBM's development began in 2008 at AMD. Version 1 was officially adopted by JEDEC in 2013, version 2 in 2016.

HBM 1 was first used on the Radeon R9 Fury, Radeon R9 Fury X, and Radeon R9 Nano graphics cards of the AMD Radeon R300 series.
