The Unsung Hero Behind the AI Revolution

Investor Education
March 14, 2024

In a recent interview, Nvidia CEO Jensen Huang hailed the rise of artificial intelligence (AI) as the catalyst for a new industrial revolution. GPUs play a pivotal role in this revolution, leveraging their powerful data processing and computing capabilities to help enterprises accomplish complex computational tasks. To handle these demanding tasks effectively, however, GPUs rely heavily on memory that offers very high data transfer rates. To meet this need, a cutting-edge memory technology called High Bandwidth Memory (HBM) emerged and quickly became the preferred memory solution for AI chips. Recognizing HBM's importance in the supply chain, Nvidia took proactive measures to secure a steady supply, including making advance payments of millions of dollars to suppliers to ensure ample availability of HBM.

In this edition of Poseidon Market Foresight, we will delve deep into the HBM market, analyzing the industry from various perspectives, including its historical development, market structure, and prominent companies operating within the industry.

What is HBM?

HBM represents a new generation of memory chips designed to meet the high bandwidth requirements of high-performance CPUs and GPUs. Most consumers have encountered the term RAM when purchasing a computer or mobile phone. RAM (Random Access Memory) can be thought of as the short-term memory of a processor: it temporarily stores program and process data so the processor can access it quickly for calculations and operations. However, RAM retains information only temporarily; unlike a hard drive, it cannot store data long-term, and its contents are lost when the computer is turned off or restarted. Moreover, RAM capacity is limited, and when it fills up, the system must swap data out to slower storage devices such as hard drives to make room for new data.
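
To make that swapping behavior concrete, here is a toy sketch in Python. It models RAM as a small, fast cache that evicts its least recently used entry to "disk" when full; real operating systems use far more sophisticated paging machinery, so treat this purely as an illustration.

```python
from collections import OrderedDict

# A toy model of RAM as a fixed-size cache: when it fills up, the least
# recently used entry is evicted to "disk", mimicking how an OS swaps
# memory pages out to slower storage. Capacities are illustrative only.
class ToyRAM:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.ram = OrderedDict()   # fast, small, volatile
        self.disk = {}             # slow, large, persistent

    def access(self, key, value=None):
        if key in self.ram:                   # fast path: already in RAM
            self.ram.move_to_end(key)
        else:                                 # slow path: load from disk
            value = self.disk.pop(key, value)
            if len(self.ram) >= self.capacity:
                old_key, old_val = self.ram.popitem(last=False)
                self.disk[old_key] = old_val  # swap coldest entry to disk
            self.ram[key] = value
        return self.ram[key]

ram = ToyRAM(capacity=3)
for page in ["a", "b", "c", "d"]:     # the fourth access evicts "a"
    ram.access(page, value=f"data-{page}")
print(list(ram.ram), list(ram.disk))  # ['b', 'c', 'd'] ['a']
```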

Consequently, the capacity and speed of RAM significantly impact a computer's performance and processing power. A larger RAM capacity means more programs and data can run concurrently, while faster RAM accelerates data transfer and improves system response time. The same holds for AI servers: the capacity and speed of their memory directly affect their data processing capabilities. Because AI servers must process vast amounts of data, including tasks such as training and inference of deep learning models, sufficient fast memory is crucial for accelerating data flow and algorithm processing speed.
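
For readers who like to see the arithmetic, the sketch below shows how memory bandwidth puts a hard floor on how fast a model can generate output: producing one token with a large language model requires reading every model weight from memory at least once. The model size and bandwidth figures here are illustrative assumptions, not official specifications.

```python
# Back-of-the-envelope: memory bandwidth bounds per-token latency, because
# each generated token requires streaming all model weights from memory.
# All numbers below are illustrative assumptions, not official specs.
params = 13e9                      # a 13-billion-parameter model
weight_bytes = params * 2          # 2 bytes per weight (FP16) -> 26 GB

hbm_bw = 3.35e12                   # ~3.35 TB/s, an HBM3-class GPU
ddr_bw = 51.2e9                    # ~51.2 GB/s, one DDR5-6400 channel

print(f"HBM floor:  {weight_bytes / hbm_bw * 1e3:.1f} ms per token")   # ~7.8 ms
print(f"DDR5 floor: {weight_bytes / ddr_bw * 1e3:.0f} ms per token")   # ~508 ms
```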

Before HBM's breakthrough, conventional DRAM (Dynamic Random Access Memory) was the most widely used RAM technology, dominating the global memory market with a share of approximately 56%. DRAM's popularity stems from its large capacity and affordable price, which suit the storage needs of most computer systems, so it is used widely in electronic devices, from computers and mobile phones to servers.

DRAM and NAND Market Revenue in 2010-2024, Omdia

However, the DRAM market is highly cyclical, with prices driven primarily by supply and demand. For instance, the market posted remarkable revenue growth in 2017 and 2018 but suffered a historic downturn in 2019 as demand stagnated. After years of fierce competition, the DRAM market has consolidated into an oligopoly, with Samsung, SK Hynix, and Micron emerging as dominant players, collectively commanding over 95% of the market.

2023 DRAM Market Share, TrendForce

HBM uses 3D stacking technology to stack multiple DRAM dies on top of one another, increasing storage capacity, boosting memory bandwidth, and reducing latency. In AI applications, memory bandwidth is critical to system performance: if memory performance lags, the time spent transporting, writing, and reading instructions and data can far exceed the time the processor spends on actual computation. HBM effectively removes this bottleneck, allowing AI chips to exploit their full computational power without being held back by memory limitations. As a result, HBM has gained significant popularity in AI servers and has become a standard configuration. However, the current HBM production process is intricate and technologically demanding, pushing its average selling price to roughly three times that of conventional DRAM.
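
A rough formula helps show why stacking matters: peak bandwidth scales with interface width times per-pin speed, and 3D stacking is what lets HBM present an extremely wide 1024-bit interface per stack. The Python sketch below compares one standard DDR5 channel with one HBM3 stack using published generational figures; actual products ship at a range of speeds.

```python
# Peak memory bandwidth is roughly (bus width in bits / 8) * per-pin speed.
# 3D stacking gives HBM a very wide 1024-bit interface per stack, so it
# reaches enormous bandwidth at moderate pin speeds.
def peak_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    return bus_width_bits / 8 * pin_speed_gbps   # result in GB/s

ddr5_channel = peak_bandwidth_gbs(64, 6.4)    # DDR5-6400 channel: 51.2 GB/s
hbm3_stack = peak_bandwidth_gbs(1024, 6.4)    # HBM3 stack: 819.2 GB/s

print(f"DDR5 channel: {ddr5_channel:.1f} GB/s")
print(f"HBM3 stack:   {hbm3_stack:.1f} GB/s")
# Placing several stacks next to the GPU pushes aggregate bandwidth into
# the terabytes per second, which is why HBM became standard on AI chips.
print(f"Five stacks:  {5 * hbm3_stack / 1000:.2f} TB/s")
```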

The HBM market has grown substantially in recent years, fueled largely by the rapid expansion of AI technology. A notable example is Nvidia, whose most powerful GPU, the H100, sold some 500,000 units in the third quarter of fiscal year 2024. This GPU integrates 80GB of HBM3 memory, delivering an impressive memory bandwidth of about 3.5TB per second. As GPUs continue to advance in performance, their demand for memory bandwidth rises with them: Nvidia plans to release the X100 GPU in 2025, whose HBM memory performance is expected to be five times that of the current H100. The increasing demand for generative AI and ongoing hardware investments by technology companies will therefore propel the growth of the HBM market. Goldman Sachs projects that the HBM market will grow 124% year-on-year in 2024, reaching a total addressable market of $8.8 billion.
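
As a quick back-of-the-envelope check on that projection, the implied 2023 base can be derived directly from the growth rate, assuming the growth figure and the TAM figure describe the same market:

```python
# Sanity check on the Goldman Sachs projection: if the HBM market grows
# 124% year-on-year to an $8.8 billion total addressable market in 2024,
# the implied 2023 base follows directly.
tam_2024 = 8.8e9          # projected 2024 TAM, in USD
yoy_growth = 1.24         # 124% year-on-year growth

implied_2023 = tam_2024 / (1 + yoy_growth)
print(f"Implied 2023 HBM market: ${implied_2023 / 1e9:.1f}B")   # ~$3.9B
```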

HBM Density Per Nvidia AI GPU, Citi Research/Nvidia

The HBM market, much like the DRAM market, is dominated by SK Hynix, Samsung, and Micron. As the exclusive HBM supplier for Nvidia's H100 GPU, SK Hynix leads the market with a 54% share. As manufacturers develop the next-generation memory, HBM3E, Samsung and Micron are striving to catch up with SK Hynix. In February 2024, Micron began mass production of its high-bandwidth memory and announced that the product will be used in Nvidia's next-generation H200 GPU; Samsung and SK Hynix subsequently announced their own HBM3E production plans. For HBM manufacturers, securing exclusive orders from GPU makers has the potential to fuel significant revenue growth in the future.

2023 HBM Market Share, Goldman Sachs

HBM Market Key Players

Currently, three major manufacturers dominate the HBM market: SK Hynix, Samsung, and Micron. While Samsung operates across multiple industries, SK Hynix and Micron specialize in semiconductors.

SK Hynix, headquartered in South Korea, is one of the world's leading semiconductor suppliers. It supplies products such as DRAM, NAND flash memory, and CMOS image sensors to various companies, including Nvidia and Microsoft. In 2023, DRAM accounted for the largest share of its revenue at 63%, followed by NAND at 30%, together contributing over 90% of SK Hynix's revenue. With the rapid advancement of technologies such as artificial intelligence and machine learning, demand for high-speed, large-capacity memory has increased significantly. In 2023, SK Hynix's HBM3 sales grew fivefold over the previous year, becoming the primary driver of revenue growth. As an early entrant in the HBM market, SK Hynix enjoys a higher gross margin than Samsung and Micron, and it has already collaborated with Nvidia to secure HBM supply for 2025. SK Hynix aims to capitalize on surging HBM demand by increasing investment in its chip packaging processes, planning to invest over $1 billion in South Korea this year to expand production and enhance chip manufacturing. The company's stock price has risen 96% over the past year, reflecting investors' enthusiasm and confidence in its future prospects.

Samsung, a well-known conglomerate in South Korea, formed Samsung Semiconductor in 1977. Recognizing the potential in the DRAM market, Samsung made substantial investments of over $500 million in 200mm wafers for five consecutive years. This strategic move propelled Samsung to surpass Toshiba in 1993 and secure its position as the global leader in the DRAM market. As of the third quarter of 2023, Samsung held the top spot in DRAM sales, accounting for 38.9% of the global market, while SK Hynix and Micron ranked second and third, with market shares of 34.3% and 22.8%, respectively.

However, Samsung currently faces challenges in the competition for AI chips, primarily due to its adherence to a chip packaging technology known as non-conductive film (NCF), which can cause quality issues in production: the yield rate of HBM3 produced with NCF ranges from 10% to 20%. In contrast, SK Hynix took the lead by introducing mass reflow molded underfill (MR-MUF) technology, which addresses NCF's limitations and achieves a significantly higher HBM3 yield of 60% to 70%. Furthermore, unlike its competitors SK Hynix and Micron, Samsung has not secured any HBM supply agreements with Nvidia, the prominent leader in the AI chip industry.

Founded in 1978, Micron Technology became one of the world's largest memory manufacturers following its acquisition of Texas Instruments' global memory business in 1998. Like SK Hynix, Micron specializes in DRAM and NAND, which serve as its primary products: in 2023, DRAM accounted for 71% of Micron's revenue and NAND for 27%. Although Micron entered the HBM market relatively late, it has made significant strides recently. In 2020, Micron began offering HBM2 products for high-performance graphics cards and server processors, and in February 2024 it announced that it had commenced mass production of HBM3E, ahead of SK Hynix, which had previously been at the forefront of HBM technology. Micron also remains committed to the next-generation HBM4 technology, with plans to begin mass production in 2026. The company's focus on HBM has attracted significant investor interest, with its stock price rising 80% over the past year.

Conclusion

In the era of generative AI, HBM plays a pivotal role by significantly improving data transfer speeds and removing the memory bottleneck that constrains high-performance GPUs. As AI applications spread into more domains and take on larger, more complex tasks and datasets, the chip industry's demand for advanced memory will continue to grow, particularly in terms of storage capacity and power efficiency. With major manufacturers continuously optimizing their manufacturing processes and reducing production costs, HBM will not only drive further gains in AI computing capability but also find widespread use across diverse fields, becoming an important force in future technological innovation.

Disclaimer

  1. The content of this website is intended for professional investors (as defined in the Securities and Futures Ordinance (Cap. 571) or regulations made thereunder).

  2. The information in this website is for informational purposes only and does not constitute a recommendation or offer to provide services.

  3. All information in this website should not be construed as professional or investment advice. Therefore, you should seek independent professional advice. Any use of this website and its contents is at your own risk.

  4. The Company may terminate or change the information, products or services provided in this website at any time without prior notice to you.

  5. No content on the website may be reproduced or publicly transmitted without the explicit consent and authorisation of the Poseidon Partner.