In recent SSD releases, especially entry-level NVMe products, you may come across an unfamiliar term: HMB cache. Manufacturers describe drives with HMB caching as DRAM-less designs that have no dedicated cache chip yet strike a good balance between cost and speed. To buyers, however, the low price can look like corner-cutting, and it is natural to wonder whether there is a catch.
Answering that starts with understanding what HMB caching actually is. Note that it is easy to confuse with High Bandwidth Memory (HBM) used on graphics cards; although the abbreviations look similar, HMB stands for Host Memory Buffer.
SSDs with a built-in, dedicated DRAM cache use it to improve input/output (I/O) performance and endurance: the cache temporarily holds data read from flash, data waiting to be written to flash, and the drive's address mapping tables. To reduce power consumption, manufacturing cost, and physical size, some vendors remove this separate cache, but doing so inevitably hurts I/O performance.
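To make the role of that mapping table concrete, here is a deliberately simplified Python sketch, not real firmware, with purely illustrative names and numbers: every random read must first translate a logical block address to a physical flash location, so where that table lives directly affects read latency.

```python
# Toy model of an SSD's logical-to-physical (L2P) mapping table.
# Real firmware keeps this table in onboard DRAM (or, with HMB, in host
# memory); the structures and values here are purely illustrative.

PAGE_SIZE = 4096  # bytes per flash page (typical, assumed)

# L2P table: logical block address -> (flash block, page within block)
l2p_table = {
    0: (17, 3),
    1: (42, 0),
    2: (17, 4),
}

def read_lba(lba: int) -> tuple[int, int]:
    """Translate a logical address before any flash access can happen.

    If the table entry is already cached (DRAM or HMB), this lookup is
    fast. If it must first be fetched from flash, every random read pays
    an extra flash access, which is what hurts 4K random performance.
    """
    return l2p_table[lba]

print(read_lba(2))  # -> (17, 4): the physical location to read from flash
```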
For DRAM-less SSDs that use the NVMe protocol, however, this problem can be mitigated through NVMe's Host Memory Buffer (HMB) capability. HMB, introduced in NVMe 1.2, allows an SSD to borrow a portion of the host's system memory to improve its own performance.
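A rough way to check whether a given drive advertises HMB is to inspect the Host Memory Buffer fields in the NVMe Identify Controller data: HMPRE (preferred size) and HMMIN (minimum size), both expressed in 4 KiB units. The sketch below assumes a Linux system with the nvme-cli tool installed and a controller at /dev/nvme0; the JSON field names are taken from nvme-cli's output and may differ between versions.

```python
import json
import subprocess

# Query the Identify Controller data via nvme-cli (assumes nvme-cli is
# installed, /dev/nvme0 exists, and the script runs with enough privileges).
out = subprocess.run(
    ["nvme", "id-ctrl", "/dev/nvme0", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

ctrl = json.loads(out)

# HMPRE / HMMIN are reported in units of 4 KiB per the NVMe specification.
hmpre_mib = ctrl.get("hmpre", 0) * 4096 / (1024 * 1024)
hmmin_mib = ctrl.get("hmmin", 0) * 4096 / (1024 * 1024)

if hmpre_mib == 0:
    print("This controller does not advertise a Host Memory Buffer.")
else:
    print(f"HMB preferred size: {hmpre_mib:.0f} MiB, minimum: {hmmin_mib:.0f} MiB")
```

A DRAM-equipped drive will usually report zero here, since it has no need to borrow host memory.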
In DRAM-less SSDs, HMB mitigates the I/O performance loss in several ways. The NVMe interface offers very fast transfers between the host and the SSD controller over PCIe, so the controller can reach host memory with little added latency. And because the buffer sits in host memory, it is visible to both the SSD controller and the host operating system, which opens up further optimizations if used effectively.
In simple terms, HMB caching uses a small slice of host memory to improve SSD I/O performance. Because it is designed to compensate for a missing onboard cache rather than fully replace one, it does not consume much host memory; a few tens of megabytes are typically enough.
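On Linux, the in-kernel nvme driver also caps how much host memory it will hand to a drive for HMB through the max_host_mem_size_mb module parameter (128 MiB is the usual default in mainline kernels). A quick way to confirm the "tens of megabytes" scale on your own machine is a sketch like the one below; the sysfs path assumes the standard nvme driver is loaded.

```python
from pathlib import Path

# The Linux nvme driver exposes its per-controller HMB cap (in MiB) as a
# read-only module parameter; 128 MiB is the usual default. The path below
# assumes the mainline nvme driver is loaded.
param = Path("/sys/module/nvme/parameters/max_host_mem_size_mb")

if param.exists():
    cap_mib = int(param.read_text().strip())
    print(f"Kernel grants at most {cap_mib} MiB of host memory per NVMe controller.")
else:
    print("nvme driver module parameter not found on this system.")
```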
Given that HMB improves the performance of DRAM-less SSDs, how much difference does enabling or disabling it actually make?
We compared the same DRAM-less SSD with HMB turned on and off: with HMB off, 4K random read performance drops sharply, while with HMB on, performance stays stable through roughly 24 GB of the test before slowly declining.
Of course, there is still a gap between HMB-equipped DRAM-less drives and SSDs with a dedicated onboard cache. Today, dedicated caches are mostly found on mid-range and high-end SSDs, which are comparatively expensive; by dropping the separate cache, HMB-based drives give up some performance but recover part of it through HMB, at a noticeably lower purchase price.
So buying an SSD that relies on an HMB cache is not throwing money away. In many use cases the price is lower and the performance gap is not large, which makes a DRAM-less SSD with HMB caching the better choice.