
Can Google’s Chip Technology Reduce Memory Chip Requirements?

In Focus

  • Google revealed TurboQuant technology on March 25, 2026
  • The technology reduces the amount of memory required to run AI models
  • Memory stocks plunged globally after Google’s compression technology reveal

Google’s AI chip breakthrough, TurboQuant, has sparked a memory stock selloff. Memory chip share prices plunged after Google researchers revealed a compression technique that is likely to lower the amount of memory needed for AI workloads.

In the U.S., Micron Technology and Sandisk stocks declined by 3% and 5.7%, respectively, on March 25, 2026. Seagate Technology also slid 4%, while Western Digital dropped by 4.7%. The memory chip selloff was not limited to the U.S., either.

Shares of SK Hynix, which is preparing to launch an IPO in the U.S., dropped by 6.4% on the Korea Stock Exchange. The company manufactures the memory chips used in AI applications. Shares of Japanese flash memory maker Kioxia Holdings dipped by a similar margin in Tokyo.

How Google’s Chip Compression Technology Works

According to Google, the new TurboQuant technology can lower the amount of memory needed to run large language models by about a factor of six. This, in turn, significantly lowers the overall cost of running AI applications.
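
To put a roughly sixfold reduction in perspective, here is a back-of-the-envelope sketch. The model size and hardware figures below are illustrative assumptions, not numbers from Google’s announcement.

```python
# Rough sketch of what a ~6x memory reduction means in practice.
# The 70B-parameter model and 80 GB accelerator are illustrative assumptions.
params = 70e9                     # hypothetical 70-billion-parameter model
bytes_per_param_fp16 = 2          # 16-bit weights
baseline_gb = params * bytes_per_param_fp16 / 1e9
compressed_gb = baseline_gb / 6   # the roughly sixfold reduction Google describes

gpu_memory_gb = 80                # e.g., one 80 GB accelerator
print(f"Uncompressed weights: {baseline_gb:.0f} GB "
      f"(~{baseline_gb / gpu_memory_gb:.1f} accelerators just to hold them)")
print(f"Compressed weights:   {compressed_gb:.0f} GB "
      f"(fits on a single accelerator with room to spare)")
```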

One of the most effective ways to speed up AI models is to reduce the amount of data they process when making decisions. This is achieved by compressing the data they use. Most existing algorithms already apply this approach, but they deliver only modest efficiency gains. They also introduce errors during compression, which lowers the quality of a model’s output.
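
A minimal sketch of this trade-off, using generic 8-bit quantization rather than Google’s own method, looks like this: 32-bit values are stored as 8-bit integers plus a single scale factor, cutting memory by a factor of four at the cost of small rounding errors.

```python
import numpy as np

# A generic lossy-compression sketch: store 32-bit floats as 8-bit integers
# plus one scale factor. This is standard quantization, not Google's
# TurboQuant method, but it shows the trade-off the article describes:
# less memory, at the cost of small rounding errors.
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, 1_000_000).astype(np.float32)

scale = np.abs(weights).max() / 127.0
quantized = np.round(weights / scale).astype(np.int8)   # 1 byte per value
restored = quantized.astype(np.float32) * scale

print(f"Memory: {weights.nbytes / 1e6:.1f} MB -> {quantized.nbytes / 1e6:.1f} MB "
      f"({weights.nbytes // quantized.nbytes}x smaller)")
print(f"Mean absolute error introduced: {np.abs(weights - restored).mean():.6f}")
```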

Google’s compression technique compresses AI models’ data more efficiently than existing algorithms. TurboQuant also reduces errors by altering the mathematical properties of the data before it is compressed.
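
Google has not detailed the exact transformation, but one well-known way to alter the mathematical properties of the data is to apply a random rotation before quantizing, which spreads out extreme values and shrinks the rounding error. The sketch below illustrates that general idea from the quantization literature; it is not TurboQuant itself.

```python
import numpy as np

# Illustrative only: a random orthogonal rotation applied before 8-bit
# quantization. This is a generic technique, not a description of TurboQuant.
rng = np.random.default_rng(0)

def quantize_int8(x):
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Mostly small values with a few large outliers, as is common in LLM data.
x = rng.normal(0, 1, 4096).astype(np.float32)
x[:8] *= 50.0

# Baseline: quantize the raw values.
q, s = quantize_int8(x)
err_plain = np.mean((x - dequantize(q, s)) ** 2)

# Rotate, quantize in the rotated basis, then rotate back. The rotation
# spreads the outliers across all coordinates, so the quantization scale is
# smaller and the rounding error for typical values shrinks.
Q, _ = np.linalg.qr(rng.normal(size=(4096, 4096)))  # random orthogonal matrix
q_rot, s_rot = quantize_int8(Q @ x)
x_back = Q.T @ dequantize(q_rot, s_rot)
err_rotated = np.mean((x - x_back) ** 2)

print(f"MSE without rotation: {err_plain:.6f}")
print(f"MSE with rotation:    {err_rotated:.6f}")
```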

Impact of TurboQuant on Future Memory Needs

News about Google’s compression breakthrough sparked concerns that future memory needs could shrink. Memory is a key component of the AI accelerators made by Nvidia. Demand for this component has surged during the AI boom, leading to a global memory chip shortage.

But investors who track global memory stocks hold a different view. They argue that better efficiency will likely boost memory demand instead. Analysts view TurboQuant as beneficial for hyperscalers due to its ability to improve returns on investment.

According to Morgan Stanley analyst Shawn Kim, the technology could also benefit memory chip producers in the long run as lower cost per token could result in higher product adoption.

Applications Beyond AI Models

Besides AI models, Google’s chip compression technology could be applied in other areas, including vector search systems that support large-scale search engines. This broad application could potentially increase demand for memory chips.
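
One concrete place compression helps is in storing the embedding vectors a search index compares against. The sketch below uses generic 8-bit quantization and made-up corpus sizes to illustrate the memory savings; it is not based on Google’s implementation.

```python
import numpy as np

# Illustrative sketch of a vector search index stored in 8-bit form instead
# of 32-bit floats. The corpus size, dimensions and quantization scheme are
# assumptions for illustration, not details of Google's implementation.
rng = np.random.default_rng(1)
docs = rng.normal(size=(100_000, 256)).astype(np.float32)
docs /= np.linalg.norm(docs, axis=1, keepdims=True)      # unit-length embeddings

scale = np.abs(docs).max() / 127.0
docs_q = np.round(docs / scale).astype(np.int8)          # compressed index

query = docs[42] + rng.normal(0, 0.05, 256).astype(np.float32)

top_full = set(np.argsort(docs @ query)[-5:])
top_q = set(np.argsort((docs_q.astype(np.float32) * scale) @ query)[-5:])

print(f"Index size: {docs.nbytes / 1e6:.0f} MB -> {docs_q.nbytes / 1e6:.0f} MB")
print(f"Top-5 results shared by both indexes: {len(top_full & top_q)} of 5")
```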

“As context windows get bigger and bigger, the data storage in KV cache explodes higher causing the need for more memory. TurboQuant is directly attacking the cost curve here. Bullish for the cost curve, again if this gets adopted broadly,” Wells Fargo TMT analyst Andrew Rocha noted.
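
To see why the KV cache dominates memory at long context lengths, here is a rough calculation. The model dimensions are illustrative assumptions, roughly in line with a large open-weight model, not figures from the article.

```python
# Rough sketch of why longer context windows inflate the KV cache, and what a
# ~6x compression would mean. Model dimensions are illustrative assumptions.
layers, kv_heads, head_dim = 80, 8, 128
bytes_fp16 = 2

def kv_cache_gb(context_tokens):
    # Keys and values (two tensors) are cached per layer for every token.
    return 2 * layers * kv_heads * head_dim * context_tokens * bytes_fp16 / 1e9

for ctx in (8_192, 128_000, 1_000_000):
    full = kv_cache_gb(ctx)
    print(f"{ctx:>9,} tokens: {full:6.1f} GB KV cache "
          f"-> {full / 6:5.1f} GB at ~6x compression")
```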

It’s still unclear whether the technique is unique to Google and how readily other research labs could reproduce it. Questions also remain about whether Google’s lab results will translate into real-world applications of TurboQuant.

Linda Hadley