Why Google’s TurboQuant Sparked a Selloff in Memory Chip Stocks
Google’s TurboQuant is an AI memory-compression method that shrinks the key-value cache, the working memory large language models use to hold context while generating answers. Google says it cuts that memory use by roughly a factor of six without hurting model accuracy, and it requires no training or fine-tuning.
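To see why a sixfold reduction matters, a back-of-envelope sketch of key-value cache sizing helps. The model dimensions below are illustrative assumptions, not details of TurboQuant or any specific Google model; the point is simply that the cache grows with context length and that compressing each stored value shrinks it proportionally.

```python
# Back-of-envelope KV-cache sizing for a hypothetical transformer.
# All dimensions are illustrative assumptions, not TurboQuant specifics.

def kv_cache_bytes(layers, heads, head_dim, seq_len, bytes_per_value):
    # Both key and value tensors are cached per token, hence the factor of 2.
    return 2 * layers * heads * head_dim * seq_len * bytes_per_value

# Example: a mid-sized model serving a 32k-token context at 16-bit precision.
fp16 = kv_cache_bytes(layers=32, heads=32, head_dim=128,
                      seq_len=32_768, bytes_per_value=2)

# A ~6x compression (16 bits down to roughly 2.7 bits per value)
# shrinks the cache by the same factor.
compressed = fp16 / 6

print(f"fp16 cache:    {fp16 / 2**30:.1f} GiB")       # 16.0 GiB
print(f"6x-compressed: {compressed / 2**30:.1f} GiB")  # 2.7 GiB
```

Under these assumed dimensions, a 16 GiB cache drops to under 3 GiB, which is the kind of arithmetic behind investor worries that AI systems might need fewer high-capacity memory chips per deployment.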
Why shares fell
Memory makers sell chips that help AI systems store and move large amounts of data, so investors saw TurboQuant as a possible threat to future demand. If AI models can do the same work with less memory, markets worry that chipmakers such as Micron, SanDisk, Western Digital, Samsung, and SK Hynix may need fewer high-end memory chips than previously expected.
Analysts cited by several outlets said the selloff looked more like a short-term scare than a structural problem. Their view is that lower memory use also makes AI inference cheaper, which could expand usage and ultimately support more chip demand, not less.
What to know
TurboQuant is still a research-stage development, not a widely deployed industry standard, so the immediate revenue impact on memory makers is uncertain. For now, the stock moves reflect investor fear that efficiency gains in AI could slow the pace of memory demand growth, even if the long-term picture proves stronger.