SK hynix has launched the world’s first 16-layer HBM3E memory solution, with a stack capacity of up to 48GB.
SK hynix CEO Lee Seok-woo announced the 16-layer HBM3E at the 2024 SK AI Summit, showcasing a sample with a capacity of up to 48GB — the highest capacity and layer count in the industry for HBM products.
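As a back-of-the-envelope check on the announced figures, a 48GB stack spread across 16 layers implies a 3GB (24Gb) DRAM die per layer — consistent with the high-density dies used in current HBM3E products. A minimal sketch of that arithmetic:

```python
# Sanity check: per-die capacity implied by the announced 16-layer, 48GB stack.
STACK_CAPACITY_GB = 48   # announced total stack capacity
LAYERS = 16              # announced DRAM layer count

per_die_gb = STACK_CAPACITY_GB / LAYERS   # capacity per DRAM die in GB
per_die_gbit = per_die_gb * 8             # same figure in gigabits

print(f"{per_die_gb:.0f} GB ({per_die_gbit:.0f} Gb) per DRAM die")
# → 3 GB (24 Gb) per DRAM die
```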
The first samples of this expanded memory solution are expected to be available in early 2025.
Although the market for 16-layer HBM is expected to open with the HBM4 generation, SK hynix has developed the 48GB 16-layer HBM3E in advance to secure the stability of the stacking technology, and plans to provide samples to customers early next year.
SK hynix applied its advanced MR-MUF (Mass Reflow Molded Underfill) process — already proven in mass production of 12-layer products — to build the 16-layer HBM3E. In parallel, the company is developing hybrid bonding as an alternative stacking technology.
Compared with the 12-layer product, the 16-layer version delivers an 18% improvement in AI training performance and a 32% improvement in inference performance.
As the market for AI accelerators used in inference expands, the 16-layer product is expected to help the company further solidify its leadership in the AI memory sector.
SK hynix is also developing LPCAMM2 modules for PCs and data centers, along with LPDDR5 and LPDDR6 built on its 1c nm process node, to fully leverage its competitive edge in low-power, high-performance products.
The company is also preparing to launch its sixth-generation PCIe SSD, large-capacity QLC-based enterprise SSDs (eSSDs), and UFS 5.0 products.
SK hynix plans to collaborate with leading global logic wafer foundries starting with the HBM4 generation to employ logic processes on the base chip, providing customers with the best products.
Customized HBM will be a performance-optimized product tailored to various customer demands for capacity, bandwidth, and functionality, expected to open new development directions in the AI memory field.
Furthermore, SK hynix is developing memory technologies that enhance computing capabilities to overcome the so-called “memory wall” problem.
Technologies such as processing near memory (PNM), processing in memory (PIM), and computational storage, aimed at handling the massive data processing workloads of the future, are expected to reshape the architecture of next-generation AI systems and the trajectory of the AI industry.