
Nvidia CEO Jensen Huang on the crash that cost the company $600 billion: Market ‘misunderstood’ China's DeepSeek AI and …

Nvidia CEO Jensen Huang addressed investor concerns after a $600 billion market value drop, clarifying that the market misunderstood the implications of DeepSeek's new AI model. Huang emphasized that the demand for powerful computing in AI development remains strong, despite the advances in model efficiency.
Nvidia CEO Jensen Huang has pushed back against investor concerns that triggered a $600 billion market value drop, asserting that the market misinterpreted the significance of DeepSeek's new AI model.
"I think the market responded to R1 as an 'Oh my gosh, AI is finished,'" Huang said in a pretaped interview for Nvidia partner DDN. The selloff was sparked by the Chinese AI startup DeepSeek's release of its R1 reasoning model, which reportedly used lower-capability chips for development.
The market reaction wiped nearly 20% from Huang's personal net worth, though the company's stock has since recovered most of its losses. Investors had worried that major tech companies might reduce their demand for Nvidia's more advanced chips if AI models could be developed using less expensive alternatives.
Huang characterized DeepSeek's R1 model as "impressive" but emphasized that the market fundamentally misunderstood the ongoing need for powerful computing in AI development. "From an investor perspective, there was a mental model that the world was pretraining and then inference," he said. "Obviously, that paradigm is wrong."
The CEO stressed that teaching AI models to reason better remains computationally intensive and represents "the next scaling frontier" for the tech sector. This aligns with Nvidia's January statement to Fortune, which highlighted that "inference requires significant numbers of Nvidia GPUs and high-performance networking."
The company maintains that three scaling laws govern AI development: "pretraining and post-training, which continue, and new test-time scaling." This suggests that despite advances in model efficiency, the demand for high-performance computing in AI development and deployment remains strong.
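To see why "test-time scaling" keeps inference compute-hungry, consider a minimal, purely illustrative sketch of best-of-N sampling, one common form of spending extra compute at inference time. The function names (generate_candidate, score, answer_with_test_time_scaling) are hypothetical stand-ins, not Nvidia's or DeepSeek's actual method: the point is only that every additional sampled answer multiplies the compute spent per query.

```python
import random

# Toy illustration of test-time scaling: instead of returning a single
# answer, the model is sampled several times at inference and the best
# candidate is kept. Every extra sample multiplies the compute spent per
# query, which is why heavier reasoning at inference still demands GPUs.
# generate_candidate and score are hypothetical stand-ins, not a real
# model or verifier API.

def generate_candidate(prompt: str, rng: random.Random) -> str:
    # Stand-in for one sampled forward pass / chain of thought.
    return f"answer to {prompt!r} (draft {rng.randint(0, 9999)})"

def score(candidate: str) -> float:
    # Stand-in for a verifier or reward model that ranks candidates.
    return random.random()

def answer_with_test_time_scaling(prompt: str, num_samples: int = 8) -> str:
    rng = random.Random(0)
    # Inference cost grows roughly linearly with num_samples:
    # 8 samples cost about 8x a single-shot answer.
    candidates = [generate_candidate(prompt, rng) for _ in range(num_samples)]
    return max(candidates, key=score)

if __name__ == "__main__":
    print(answer_with_test_time_scaling("Why does reasoning need more compute?"))
```

Under this reading, efficiency gains in training do not remove the need for large fleets of accelerators, because serving a reasoning model that samples and verifies many candidates shifts the compute burden toward inference.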
About the Author
TOI Tech Desk

The TOI Tech Desk is a dedicated team of journalists committed to delivering the latest and most relevant news from the world of technology to readers of The Times of India. TOI Tech Desk’s news coverage spans a wide spectrum across gadget launches, gadget reviews, trends, in-depth analysis, exclusive reports and breaking stories that impact technology and the digital universe. Be it how-tos or the latest happenings in AI, cybersecurity, personal gadgets, or platforms like WhatsApp, Instagram and Facebook, TOI Tech Desk brings the news with accuracy and authenticity.
