From 080cddb83f39254d2287798ec8432c5cc1e61cae Mon Sep 17 00:00:00 2001
From: Shayna Kappel
Date: Thu, 23 Oct 2025 08:58:54 +0800
Subject: [PATCH] Add High Bandwidth Memory

---
 High-Bandwidth-Memory.md | 47 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)
 create mode 100644 High-Bandwidth-Memory.md

diff --git a/High-Bandwidth-Memory.md b/High-Bandwidth-Memory.md
new file mode 100644
index 0000000..0d21160
--- /dev/null
+++ b/High-Bandwidth-Memory.md
@@ -0,0 +1,47 @@
+High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially from Samsung, AMD and SK Hynix. It is used as RAM in upcoming CPUs and FPGAs, and in some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power, and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die, which can include buffer circuitry and test logic. The stack is often connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die may be stacked directly on the CPU or GPU chip. Within the stack, the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM technology is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide compared with other DRAM memories such as DDR4 or GDDR5.
+
+An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of eight channels and a width of 1024 bits in total. A graphics card/GPU with four 4-Hi HBM stacks would therefore have a memory bus with a width of 4096 bits. In comparison, the bus width of GDDR memories is 32 bits per channel, with 16 channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and GPU. This interposer has the added advantage of requiring the memory and processor to be physically close, shortening memory paths. However, as semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
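+
+A minimal sketch of that bus-width arithmetic, assuming nothing beyond the figures quoted above (the helper `hbm_bus_width` is hypothetical, not a real API):
+
+```python
+def hbm_bus_width(stacks, dies_per_stack=4, channels_per_die=2, bits_per_channel=128):
+    """Total HBM bus width in bits; the default figures are those quoted above."""
+    return stacks * dies_per_stack * channels_per_die * bits_per_channel
+
+# One 4-Hi stack: 4 dies x 2 channels x 128 bits = 1024 bits.
+assert hbm_bus_width(stacks=1) == 1024
+# Four 4-Hi stacks on one GPU: 4 x 1024 = 4096 bits.
+assert hbm_bus_width(stacks=4) == 4096
+# GDDR5 for comparison: 16 independent 32-bit channels = 512 bits.
+assert 16 * 32 == 512
+```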
+
+The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. The channels are completely independent of one another and are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates up to 2 GT/s. Retaining 1024-bit-wide access, HBM2 is able to reach 256 GB/s memory bandwidth per package. The HBM2 spec allows up to 8 GB per package. HBM2 is predicted to be particularly useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2 at up to 8 GB per stack.
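+
+These per-package figures follow directly from bus width × pin rate; a minimal sketch of the relationship (the function name is illustrative, not a real API):
+
+```python
+def package_bandwidth_gbs(bus_width_bits, rate_gts):
+    """Peak bandwidth in GB/s: one bit per pin per transfer, 8 bits per byte."""
+    return bus_width_bits * rate_gts / 8
+
+# HBM: 1024-bit access at 1 GT/s per pin -> 128 GB/s per package.
+assert package_bandwidth_gbs(1024, 1.0) == 128.0
+# HBM2: the same 1024-bit access at 2 GT/s -> 256 GB/s per package.
+assert package_bandwidth_gbs(1024, 2.0) == 256.0
+```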
+
+In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced their Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced their HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced their 12-layered HBM2E. In late 2020, Micron unveiled that the HBM2E standard would be updated, and alongside that they unveiled the next standard, known as HBMnext (later renamed to HBM3).
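+
+A rough cross-check of the announced per-stack figures using the same formula, assuming the roughly 2.4 GT/s pin rate implied by the 2.5 Tbit/s figure (the labels are shorthand for the announcements above):
+
+```python
+# bandwidth (GB/s) = 1024 bits x pin rate (GT/s) / 8
+for name, rate_gts, quoted_gbs in [
+    ("HBM2 update", 2.4, 307),  # ~2.5 Tbit/s effective per stack
+    ("Samsung Flashbolt HBM2E", 3.2, 410),
+    ("SK Hynix HBM2E", 3.6, 460),
+]:
+    print(f"{name}: {1024 * rate_gts / 8:.1f} GB/s (quoted ~{quoted_gbs} GB/s)")
+```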
\ No newline at end of file