DeepSeek has rattled the U.S.-led AI ecosystem with its latest model, shaving hundreds of billions of dollars off chip leader Nvidia’s market cap. While the sector’s leaders grapple with the fallout, smaller AI companies see an opportunity to scale alongside the Chinese startup.
Several AI-related firms told CNBC that DeepSeek’s emergence is a “massive” opportunity for them, rather than a threat.
“Developers are very keen to replace OpenAI’s expensive and closed models with open source models like DeepSeek R1…” said Andrew Feldman, CEO of artificial intelligence chip startup Cerebras Systems.
The company competes with Nvidia’s graphics processing units and offers cloud-based services through its own computing clusters. Feldman said the release of the R1 model generated one of Cerebras’ largest-ever spikes in demand for its services.
“R1 shows that [AI market] growth will not be dominated by a single company — hardware and software moats do not exist for open-source models,” Feldman added.
Open source refers to software whose source code is made freely available for modification and redistribution. DeepSeek’s models are open source, unlike those of competitors such as OpenAI.
DeepSeek also claims its R1 reasoning model rivals the best American tech, despite running at lower costs and being trained without cutting-edge graphics processing units, though industry watchers and competitors have questioned these assertions.
“Like in the PC and internet markets, falling prices help fuel global adoption. The AI market is on a similar secular growth path,” Feldman said.
Inference chips
DeepSeek could increase the adoption of new chip technologies by accelerating the AI cycle’s shift from the training phase to the “inference” phase, chip startups and industry experts said.
Inference refers to using a trained AI model to make predictions or decisions based on new information, rather than to building or training the model itself.
“To put it simply, AI training is about building a tool, or algorithm, while inference is about actually deploying this tool for use in real applications,” said Phelix Lee, an equity analyst at Morningstar, with a focus on semiconductors.
While Nvidia holds a dominant position in GPUs used for AI training, many competitors see room for expansion in the “inference” segment, where they promise higher efficiency for lower costs.
AI training is very compute-intensive, but inference can work with less powerful chips that are programmed to perform a narrower range of tasks, Lee added.
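The training-versus-inference split Lee describes can be sketched in miniature. The toy model, data, and numbers below are hypothetical illustrations, not anything from the article: training loops over the data many times to fit parameters, while inference is a single cheap forward pass with the fitted parameters.

```python
import numpy as np

# Hypothetical toy example of the training-vs-inference distinction.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # training data (assumed, illustrative)
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# --- Training phase: compute-heavy, many iterative passes over the data ---
w = np.zeros(3)
for _ in range(500):                     # hundreds of full passes to "build the tool"
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.1 * grad

# --- Inference phase: one matrix multiply per prediction ---
new_inputs = rng.normal(size=(5, 3))
predictions = new_inputs @ w             # "deploying the tool" on fresh inputs
```

The asymmetry is the point: the training loop touches every example hundreds of times, while each prediction costs a single small matrix multiply, which is why less powerful, narrowly programmed chips can serve the inference side.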
A number of AI chip startups told CNBC that they were seeing more demand for inference chips and computing as clients adopt and build on DeepSeek’s open source model.
“[DeepSeek] has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models and this can be done at a fraction of the cost,” said Sid Sheth, CEO of AI chip startup d-Matrix.
“With the broad availability of small capable models, they have catalyzed the age of inference,” he told CNBC, adding that the company has recently seen a surge in interest from global customers looking to speed up their inference plans.
Robert Wachen, co-founder and COO of AI chipmaker Etched, said dozens of companies have reached out to the startup since DeepSeek released its reasoning models.
“Companies are [now] shifting their spend from training clusters to inference clusters,” he said.
“DeepSeek-R1 proved that inference-time compute is now the [state-of-the-art] approach for every major model vendor and thinking isn’t cheap – we’ll only need more and more compute capacity to scale these models for millions of users.”
Jevons Paradox
Analysts and industry experts agree that DeepSeek’s accomplishments are a boost for AI inference and the wider AI chip industry.
“DeepSeek’s performance appears to be based on a series of engineering innovations that significantly reduce inference costs while also improving training cost,” according to a report from Bain & Company.
“In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption,” it added.
![DeepSeek will spur new innovation in AI, says Groq COO](https://image.cnbcfm.com/api/v1/image/108093114-17380187211738018718-38181873734-1080pnbcnews.jpg?v=1738018720&w=750&h=422&vtcrop=y)
This pattern illustrates the Jevons Paradox, the theory that efficiency gains which lower a technology’s cost ultimately drive increased demand for it.
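The mechanism behind the Jevons Paradox can be shown with a small worked example. The elasticity figure and prices below are hypothetical, chosen only to illustrate the dynamic: when demand is elastic enough, a price cut raises total spending rather than shrinking it.

```python
# Illustrative sketch of the Jevons Paradox with hypothetical numbers.
def total_spend(price_per_token: float, elasticity: float, base_demand: float = 1.0) -> float:
    """Constant-elasticity demand: demand = base_demand * price ** (-elasticity)."""
    demand = base_demand * price_per_token ** (-elasticity)
    return price_per_token * demand

before = total_spend(price_per_token=1.0, elasticity=1.5)    # spend = 1.0
after = total_spend(price_per_token=0.25, elasticity=1.5)    # 75% cheaper inference

assert after > before  # cheaper tokens, yet higher overall spend on compute
```

With these assumed numbers, cutting the price to a quarter doubles total spend, which is the bullish scenario the Bain report describes: cheaper inference spurring enough extra adoption to grow the overall market.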
Financial services and investment firm Wedbush said in a research note last week that it continues to expect the use of AI across enterprise and retail consumers globally to drive demand.
Speaking to CNBC’s “Fast Money” last week, Sunny Madra, COO at Groq, which develops chips for AI inference, suggested that as the overall demand for AI grows, smaller players will have more room to grow.
“As the world is going to need more tokens [a unit of data that an AI model processes], Nvidia can’t supply enough chips to everyone, so it gives opportunities for us to sell into the market even more aggressively,” Madra said.