To people who see the performance of DeepSeek and think: "China is surpassing the US in AI." You are reading this wrong. The correct reading is: "Open-source models are surpassing proprietary ones." DeepSeek has profited from open research and open source (e.g., PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people's work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.
With respect, both are true. In the US, companies like Meta invest heavily in AI and open source, and kudos for that, but the US government has done nothing during this AI renaissance. Playing spectator while adversaries surpass the US in AI is a clear failure. In China, the CCP has invested heavily in R&D, as it should, and in this way supports "open source", but it closes the door on how results were achieved and on any third-party participation in the process. The models are full of blatant CCP propaganda. If you don't believe me, just query their models, which have built-in denial of the Tiananmen Square massacre (model V3) or tell you that the answer causes harm (model R1). Or ask either model what the greatest nation in the world is. In my opinion, DeepSeek's claimed ~$5M from-scratch training methodology, if true, is one of the most impressive achievements here. It would save billions in energy, hardware cost, and time in advancing AI. The dataset is also certainly novel, and how they collected it should make quite a story, which leads me to my conclusion: I remain skeptical until we know how it was achieved. Is any of what we are doing good for society? We should all question this every day and strive to keep AI net positive.
This benchmarking comparison has nothing to do with US-vs-China AI superpowers. Open source is open source: the entire planet has access to it, including big tech teams from Meta, Google, OpenAI, etc. This has only proven that some talented and smart individuals exist and are getting results with what they had available. The beauty of DeepSeek's great performance is the optimization mindset: seeking performance with few resources (energy being high on the list of resources we need to optimize here!). Time for Big Tech to start consuming less energy while still getting good performance, since smaller teams have proven that to be possible. PS: "They came up with new ideas and built them on top of other people's work." That is literally what research is supposed to be: building on top of what exists to get better results. At some point, we're not going to reinvent the wheel :)
While it's true that DeepSeek benefited immensely from open-source contributions like PyTorch and Meta's Llama, it's also crucial to acknowledge the rapid advancements of Chinese researchers in the AI domain. Over 50% of papers accepted at top conferences like NeurIPS and ICML come from Chinese authors, reflecting their growing influence and expertise in the field. Even the U.S., despite its significant investment in proprietary technologies, builds heavily on open-source tools and research to maintain its edge. The real takeaway is that open-source fosters a global ecosystem where ideas build on one another, pushing boundaries collaboratively. Ignoring China's contributions or the power of open-source could mean underestimating major drivers of innovation in AI.
DeepSeek claims they managed to train their DeepSeek-V3 model with an expenditure of ~$5.6M in compute costs. If that's entirely true, then why did the Bank of China (BOC) announce on Friday a ~USD 137B investment in AI infrastructure over a five-year period? My math isn't mathing here. You think the US didn't know about this beforehand? You think ScotiaBank, SoftBank, or other companies and foreign entities didn't know about this before committing to a massive investment in the US?
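For scale, here is a back-of-envelope comparison of the two figures quoted above (a quick Python sketch; both numbers are the approximate figures from the comment, and the BOC figure spans five years, so this is an illustration of magnitude, not a like-for-like cost comparison):

```python
# Rough scale comparison between DeepSeek's claimed training cost and the
# reported Bank of China AI-infrastructure commitment. Both figures are
# approximations taken from the discussion above.
deepseek_training_cost = 5.6e6   # ~USD 5.6M, claimed compute cost for DeepSeek-V3
boc_commitment = 137e9           # ~USD 137B, reported five-year commitment

ratio = boc_commitment / deepseek_training_cost
print(f"BOC commitment is roughly {ratio:,.0f}x the claimed training cost")
# The commitment is on the order of 20,000+ individual V3-scale training runs,
# which suggests the spending targets infrastructure far beyond a single model.
```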
The important lesson is economic. They exploited hundreds of billions of dollars of US investment and built a competitive model with a few million dollars and ingenuity rather than brute force and scale. Just imagine what could be done if more AI research were focused on efficiency, accuracy, and safety rather than on scale and theft to protect the interests of the Big Tech oligopoly.
It's also altogether unsurprising, given that Google has been saying "we have no moat" for nearly two years at this point. As I've written about here at Nscale (https://www.nscale.com/blog/why-open-source-matters), open source is the great equaliser, but it also means everyone can benefit from the research and methods employed to further accelerate AI research. Being able to do more with less is nearly always a good thing; the corollary is that you can still do more with more.
Meta didn’t start from scratch when building their AI models. DeepSeek recently invested just ~$6 million, while Meta likely spent significantly more to develop LLaMA. Right now, the real competition isn’t just about building the best models; it’s about acquiring and retaining customers. OpenAI has recognized this urgency and is racing to expand its customer base and release new versions rapidly, not just to stay ahead in technology but also in market share. Meta, on the other hand, already possesses a vast customer and user base, which is its greatest strength. By releasing open-source models, Meta helps spread the technology across industries, ultimately limiting its real competition. Since Meta already controls a significant portion of the user market, being the absolute leader in AI technology isn’t its primary goal. The real threat lies in competitors like OpenAI capturing a larger user base and establishing dominance. By making their models open source, Meta isn’t just promoting innovation; it is strategically securing its position by weakening potential competitors. This isn’t just an act of goodwill; it’s a calculated business move that aligns perfectly with Meta’s current market position.
Framing DeepSeek’s success as just a “win for open source” is dangerously naive. Yes, they leveraged open research, just like every major AI player does. But let’s not ignore the bigger picture: DeepSeek isn’t just another open-source model—it’s a strategic play by China to break the West’s AI monopoly while proving that cutting-edge AI can be built at a fraction of the cost. The West has been comfortable with the idea that AI leadership is theirs to lose. DeepSeek is a wake-up call: innovation isn’t just happening in Silicon Valley anymore, and China isn’t just "building on top"—they are redefining efficiency, cost, and scale. If this was just about open-source collaboration, why did the markets react like they did? Why is Nvidia bleeding? The real story isn’t open source vs. proprietary—it’s about who moves faster, smarter, and with a long-term vision. And right now, the West is still debating while China is delivering.