Nvidia (NASDAQ: NVDA) stock is trading about 12% below its all-time high. In January, it dropped sharply after a Chinese start-up called DeepSeek claimed it had built a competitive artificial intelligence (AI) model using a fraction of the computing power consumed by top U.S. developers like OpenAI.
Investors worried that other AI developers would adopt DeepSeek’s methods, causing a sharp drop in demand for Nvidia’s high-end graphics processing units (GPUs), the leading hardware for building AI models. But those worries may be overblown.
Alphabet, the parent company of Google, buys a large share of Nvidia’s AI data centre chips. On February 4, Alphabet CEO Sundar Pichai made comments that should give Nvidia’s shareholders considerable reassurance.
The Story Of DeepSeek
DeepSeek was founded in 2023 by High-Flyer, a prominent Chinese hedge fund that had spent years using AI to build trading algorithms. DeepSeek launched its V3 large language model (LLM) in December 2024 and its R1 reasoning model in January 2025, and their ability to compete with the latest models from OpenAI and other U.S. developers stunned the tech industry.
Because DeepSeek’s work is open source, the industry quickly learnt some important details. The start-up says it spent only $5.6 million to train V3, compared with the tens of billions of dollars that companies like OpenAI have spent to reach the same point. That figure excludes an estimated $500 million spent on chips and infrastructure.
DeepSeek also relied on less capable GPUs, such as Nvidia’s H800, because U.S. export controls, intended to protect America’s lead in AI, bar Nvidia from selling its newest chips to Chinese companies.
DeepSeek made up for the shortfall in computing power with software innovations: highly efficient algorithms and data-processing techniques, plus a method called distillation, in which a smaller AI model is trained on the knowledge of a larger model that has already proven successful.
OpenAI has even alleged that DeepSeek trained R1 on its GPT-4o models by prompting ChatGPT at scale and having the smaller model “learn” from the responses. Because the developer doesn’t have to gather or process huge amounts of raw data, distillation dramatically shortens the training process. It therefore requires far less computing power, which means far fewer GPUs.
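To make the idea concrete, here is a deliberately tiny, hypothetical sketch of distillation. Nothing in it comes from DeepSeek’s or OpenAI’s actual systems: the “teacher” is just a fixed function standing in for a large pre-trained model, and the “student” is a one-parameter model that learns to imitate the teacher’s outputs rather than training on labelled data of its own.

```python
import math
import random

def teacher(x):
    # Stand-in for a large, already-trained model's soft output in [0, 1].
    return 1 / (1 + math.exp(-3.0 * x))

def student(x, w):
    # A much smaller model with a single learnable weight w.
    return 1 / (1 + math.exp(-w * x))

def distill(steps=2000, lr=0.5):
    random.seed(0)
    w = 0.0  # the student starts untrained
    for _ in range(steps):
        x = random.uniform(-2, 2)
        p_t = teacher(x)      # query the teacher (like prompting a big LLM)
        p_s = student(x, w)
        # Gradient descent on the squared error between the two outputs:
        # the student never sees "real" data, only the teacher's answers.
        grad = 2 * (p_s - p_t) * p_s * (1 - p_s) * x
        w -= lr * grad
    return w

w = distill()  # w converges towards the teacher's weight of 3.0
```

The point of the sketch is the data flow, not the models: all of the “knowledge” the student acquires comes from querying the teacher, which is why distillation needs so much less raw data collection and compute than training from scratch.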
Investors naturally fear that if every other AI developer adopted this method, demand for Nvidia’s chips would fall.

GPU Sales Will Hit An All-Time High For Nvidia This Year
Nvidia will report results for its fiscal year ended January 31, 2025, on February 26, 2025. The company is expected to post total revenue of $128.6 billion, up 112% from the previous year. Based on its most recent quarterly results, about 88% of that revenue will come from its data centre segment, thanks to rocketing GPU sales.
According to Yahoo Finance, Wall Street’s consensus estimate calls for Nvidia to set another record in its current fiscal year 2026, with $196 billion in total revenue. Hitting that target depends on AI developers needing ever more GPUs, which is why it’s easy to see why investors are rattled by the DeepSeek news.
Although the H100 remains very popular, Nvidia’s newest GPU, the GB200, built on its Blackwell architecture, can perform AI inference up to 30 times faster. Inference is the process by which an AI model takes in live data (such as a chatbot question) and returns a result for the user; it typically follows the initial training phase (more on this in a moment).
The GB200 is currently the gold standard for AI data centres, and when it began shipping to customers at the end of 2024, demand far outstripped supply.
Sundar Pichai’s Response
On February 4, Pichai fielded questions from Wall Street analysts on Alphabet’s fourth-quarter 2024 earnings call. In response to one, he said there has been a major shift in how computing power is used over the last three years, with more and more of it going to inference rather than training.
Pichai said newer reasoning models, such as DeepSeek’s R1 and Alphabet’s Flash Thinking, will accelerate this shift. Because these models “think” for longer before responding, they need far more computing power than their predecessors. This approach, called test-time scaling, lets AI models produce more accurate results without additional pre-training scaling, which means feeding models ever larger volumes of new data up front.
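The intuition behind test-time scaling can be shown with a toy, entirely hypothetical example (not how R1 or Flash Thinking actually work): a fixed model that answers a yes/no question correctly only 60% of the time. Without any retraining, sampling it several times and taking a majority vote spends more compute at inference and produces a noticeably more reliable answer.

```python
import random

def model_answer(correct=True, accuracy=0.6):
    # Stand-in for one inference pass of a fixed, imperfect model.
    return correct if random.random() < accuracy else not correct

def vote(n_samples):
    # Spend n_samples inference passes on the same question,
    # then take the majority answer (n_samples is odd, so no ties).
    votes = [model_answer() for _ in range(n_samples)]
    return votes.count(True) > n_samples / 2

def empirical_accuracy(n_samples, trials=5000):
    random.seed(42)
    return sum(vote(n_samples) for _ in range(trials)) / trials

single = empirical_accuracy(1)    # one pass: ~60% accurate
scaled = empirical_accuracy(15)   # 15x the inference compute: noticeably better
```

The model’s weights never change; accuracy improves purely because more GPU time is spent per answer at inference, which is exactly why this shift in computing patterns still drives chip demand.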
Meta Platforms CEO Mark Zuckerberg agrees. He recently said that even if the amount of training work declines, developers will still need more chips because the capacity is simply shifting towards inference.
Finally, Alphabet told Wall Street it plans to spend $75 billion on capital expenditures (capex) in 2025, most of it on chips and data centre infrastructure. That is a big jump from the roughly $52 billion the company budgeted for 2024. Alphabet isn’t slowing down.
Overall, the demand picture for Nvidia’s GPUs looks largely intact. With the stock now trading at an attractive price, the recent drop could even be a buying opportunity.