In the AI era, almost every artificial intelligence company depends on NVIDIA's chips for the computing power needed to train and run large models.
As a result, NVIDIA effectively dictates terms to the entire market: buyers must not only place orders well in advance but also wait in line for a year or more, a delay that AI companies can ill afford.
01
NVIDIA Loses Its Foothold in the Chinese Market
Recently, Jensen Huang made a low-key visit to China, attending the annual meetings of several companies. Reports suggest the trip was an effort to reassure Chinese customers and win back market share that is slipping away.
Because of U.S. export bans, many of NVIDIA's products cannot be shipped to the Chinese market, which has unsettled many Chinese customers.
Although NVIDIA's products lead the field and customers benefit from its advances in acceleration technology, a supply that cannot be guaranteed puts future operations at risk. Chinese customers have already begun turning to domestic alternatives.
To continue competing in the Chinese market, NVIDIA has had to repeatedly launch special edition products. However, customers are not very satisfied with the performance of these downgraded chips, leading to a gradual loss of expected orders.
Chinese customers account for roughly 30% of NVIDIA's global sales, and the company has yet to find a good solution.
Additionally, given the intense competition in artificial intelligence and security considerations, Chinese manufacturers are cautious about choosing NVIDIA, for now mainly limiting purchases to autonomous-driving applications.
If the technology war between countries continues to escalate, NVIDIA may completely lose the Chinese market.
02
OpenAI’s Quest to Break Free from NVIDIA Dependency
This closely watched artificial intelligence company has had its own troubles recently. It needs NVIDIA's H100 GPUs to stay ahead of the competition, but it can no longer count on timely delivery, which will significantly hamper OpenAI's global expansion.
Reportedly, OpenAI had already used roughly 25,000 A100 GPUs to train GPT-4. To train GPT-5, it will need at least 50,000 H100s, each priced at around $30,000.
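A quick back-of-envelope calculation, using only the figures quoted above, shows the scale of the hardware bill alone:

```python
# Rough cost estimate from the reported figures (estimates, not confirmed numbers).
h100_count = 50_000       # H100s reportedly needed for GPT-5 training
unit_price_usd = 30_000   # approximate price per H100
total_usd = h100_count * unit_price_usd
print(f"${total_usd / 1e9:.1f} billion")  # → $1.5 billion
```

That is roughly $1.5 billion in GPUs before counting networking, power, or data-center construction.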
Although funding is not the issue, being unable to obtain the hardware, along with the long-term dependency risk, poses a serious threat.
OpenAI has recently been raising funds globally with the aim of building its own chip factories. It plans to collaborate with foundries such as TSMC, Intel, and Samsung to reduce its dependence on NVIDIA's supply.
After all, as AI adoption spreads, there are simply not enough chips for large-scale deployment, and waiting on NVIDIA's production capacity is impractical.
A network of chip factories under its own control may be far more attractive to OpenAI: if you want fresh milk on demand, you had better own the cow.
OpenAI already has an in-house chip team. This week, CEO Sam Altman is visiting South Korea to meet with SK Hynix and Samsung Electronics, hoping to build a stable AI chip supply chain.
Although SK Hynix and Samsung Electronics do not produce GPUs themselves, they supply the HBM (High Bandwidth Memory) that AI chips depend on. By vertically stacking multiple DRAM dies, HBM greatly increases memory bandwidth and thus data-processing performance.
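To see why stacking matters, here is an illustrative bandwidth calculation. The parameters are generic HBM3-class examples, not the specification of any particular GPU:

```python
# Illustrative HBM bandwidth math (example parameters, not a specific product).
bus_width_bits = 1024   # HBM exposes a very wide interface per stack
pin_rate_gbps = 6.4     # per-pin data rate, typical of HBM3-class parts
stacks = 5              # several stacks sit next to the GPU die on one package
per_stack_gbs = bus_width_bits * pin_rate_gbps / 8  # GB/s per stack
total_gbs = per_stack_gbs * stacks
print(f"{per_stack_gbs:.1f} GB/s per stack, {total_gbs / 1000:.1f} TB/s total")
```

A single stack already delivers hundreds of GB/s, and placing several stacks on the package pushes aggregate bandwidth into the terabytes per second, which is what keeps thousands of GPU cores fed during training.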
03
Large Companies Developing Their AI Chips
On January 18th, Mark Zuckerberg, CEO of Meta, announced that by the end of 2024, they plan to purchase 350,000 NVIDIA H100 graphics cards, with an estimated expenditure of around $9 billion.
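As a sanity check on those reported figures (both the card count and the budget are estimates), the implied per-card price works out as follows:

```python
# Implied unit price from Meta's reported purchase plan (figures are estimates).
h100_cards = 350_000
budget_usd = 9_000_000_000
per_card_usd = budget_usd / h100_cards
print(f"${per_card_usd:,.0f} per card")  # → $25,714 per card
```

That is somewhat below commonly quoted H100 street prices, which would be consistent with volume pricing at this scale, though the $9 billion figure is itself only an estimate.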
Additionally, Meta has placed an order for AMD's newly launched AI chip, the Instinct MI300X, underscoring how large the demand for AI chips among multinational companies has become.
For long-term stability, Microsoft, Amazon, Google, and Intel are all pushing forward with self-developed AI chips, introducing products such as Maia 100, AWS Trainium, TPU, and Habana Gaudi to strengthen their competitiveness in generative AI and large models.
"If you believe AI is the future, buy more GPUs" has become a basic consensus among technology companies working on artificial intelligence.
These self-developed chips, manufactured by foundries such as TSMC, can greatly reduce the risk of relying solely on NVIDIA's supply.