DeepSeek AI Chip Technology

How DeepSeek Open Source Models are Fueling Growth for Small AI Chip Technology Startups

Small AI chip firms are seeing an opportunity to leverage DeepSeek's open source models as the Chinese AI chatbot continues to rattle the US AI ecosystem, CNBC has reported. Last month, DeepSeek sparked a widespread market selloff that wiped hundreds of billions of dollars off the market cap of AI chip maker Nvidia.

Opportunity, Not Threat

Although DeepSeek's AI models were trained at much lower cost and without top-of-the-line graphics processing units, the Chinese AI startup says its reasoning model rivals those developed by big US tech companies. Competitors and other players in the AI industry have disputed these claims.

Several AI firms say that rather than threatening their existence, the emergence of DeepSeek presents a big opportunity for them.

“Developers are very keen to replace OpenAI’s expensive and closed models with open source models like DeepSeek R1. Like in the PC and internet markets, falling prices help fuel global adoption. The AI market is on a similar secular growth path,” Cerebras Systems CEO Andrew Feldman said.

Cerebras Systems provides cloud-based AI services through computing clusters. It also manufactures AI chips that compete with Nvidia's graphics processing units. Feldman says the release of the DeepSeek R1 reasoning model has created the biggest demand for the company's services to date.

“R1 shows that [AI market] growth will not be dominated by a single company — hardware and software moats do not exist for open-source models,” Feldman added.

Open source code is made freely accessible on the internet, allowing anyone to modify and redistribute it. Unlike OpenAI's models, DeepSeek's models are publicly available.

Growth Opportunities

AI industry experts and analysts broadly agree that DeepSeek's models improve AI inference and advance the wider AI chip industry. A report by Bain & Company underscored how DeepSeek lowers inference costs, which in turn increases AI adoption.

“DeepSeek’s performance appears to be based on a series of engineering innovations that significantly reduce inference costs while also improving training cost. In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption,” the report added.

Experts say this pattern illustrates the Jevons paradox, the observation that falling costs for a technology can increase, rather than decrease, total demand for it. A report by financial services firm Wedbush also shows that AI use by retail and enterprise customers will continue to drive demand for the technology. As this demand increases, smaller players in the industry are expected to grow.
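The Jevons dynamic can be made concrete with a toy price-elasticity model. All numbers below are illustrative assumptions, not figures from the Bain or Wedbush reports: when demand for AI compute is sufficiently price-elastic (elasticity above 1), a drop in per-token inference cost raises total spending on inference rather than lowering it.

```python
# Toy illustration of the Jevons paradox for AI inference.
# The elasticity of 1.5 and all prices are hypothetical assumptions.

def total_spend(price, base_demand, elasticity, base_price):
    """Constant-elasticity demand curve: Q = Q0 * (P/P0)^(-e).

    Returns total spending (demand * price) at the given price.
    """
    demand = base_demand * (price / base_price) ** (-elasticity)
    return demand * price

# Spending at $10 per million tokens vs. after a 5x price cut to $2.
before = total_spend(10.0, base_demand=1_000, elasticity=1.5, base_price=10.0)
after = total_spend(2.0, base_demand=1_000, elasticity=1.5, base_price=10.0)

# With elasticity > 1, the 5x price cut grows demand more than 5x,
# so total spending on inference rises instead of falling.
print(before, after)
```

Whether real-world demand for AI compute is actually this elastic is the open question the bullish scenario hinges on.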

“As the world is going to need more tokens [a unit of data that an AI model processes], Nvidia can’t supply enough chips to everyone, so it gives opportunities for us to sell into the market even more aggressively,” Groq COO Sunny Madra said.

Groq is a tech company that manufactures chips for AI inference.

AI Cycle Acceleration

Another way experts say DeepSeek will increase AI adoption is by accelerating the AI cycle. This cycle includes training, in which a model is built, and inference, in which the model is applied to new information to make decisions and predictions.

“AI training is about building a tool, or algorithm, while inference is about actually deploying this tool for use in real applications,” Equity Analyst Phelix Lee said.

AI training requires enormous computing power. Inference, on the other hand, can run on less powerful chips designed for narrower tasks. Nvidia dominates both the AI training and inference markets, but rivals see an opportunity to expand in inference, where higher efficiency at lower cost is achievable. AI chip startups expect demand for inference chips to rise as more people build on DeepSeek's models.
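The training/inference split Lee describes can be sketched in a few lines of plain Python. This is a minimal toy example (a one-parameter linear model on made-up data, nothing like DeepSeek's architecture): training is the expensive loop that builds the tool, inference is the cheap forward pass that deploys it.

```python
# Toy sketch of the two phases: compute-heavy training, cheap inference.
# Model and data are hypothetical illustrations.

def train(xs, ys, steps=10_000, lr=0.01):
    """Training: repeatedly adjust parameters against known examples.
    This loop is where the heavy computation happens."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def infer(w, b, x):
    """Inference: a single cheap pass with frozen parameters."""
    return w * x + b

# "Build the tool" once on known data (here, y = 2x)...
w, b = train([1, 2, 3, 4], [2, 4, 6, 8])
# ...then deploy it against new information.
prediction = infer(w, b, 5)  # approximately 10
```

The asymmetry is the point: training runs once on powerful hardware, while inference runs constantly in production, which is the market the startups in this story are targeting.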

“DeepSeek has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models and this can be done at a fraction of the cost. With the broad availability of small capable models, they have catalyzed the age of inference,” D-Matrix CEO Sid Sheth said.

Linda Hadley