Beijing, 1 February 2026 – Chinese scientists are advancing a novel supercooling technology that rapidly reduces temperatures in critical computing components, a development that experts say could improve energy efficiency and performance in next-generation AI hardware and data-centre infrastructure.
China’s research push into advanced cooling systems comes amid intensifying global competition over artificial intelligence capabilities. Supercooling, strictly, refers to chilling a liquid below its freezing point without it solidifying; in this context the term describes driving component temperatures far below typical operating levels. The payoff is substantial: lower temperatures reduce electrical losses and leakage, limit thermal throttling, and extend the operational lifespan of high-performance computing components.
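The efficiency claim can be made concrete with a back-of-envelope sketch. A common engineering rule of thumb (not a figure from the article) holds that static leakage power in CMOS chips roughly halves for every ~10 °C drop in junction temperature; the function and figures below are hypothetical, purely to illustrate why aggressive cooling matters for AI accelerators:

```python
def leakage_power(p_ref_w: float, t_ref_c: float, t_c: float,
                  doubling_step_c: float = 10.0) -> float:
    """Estimate leakage power (W) at temperature t_c, given a reference
    measurement p_ref_w at t_ref_c. Assumes leakage doubles every
    doubling_step_c degrees -- a crude heuristic, not a device model."""
    return p_ref_w * 2 ** ((t_c - t_ref_c) / doubling_step_c)

# Hypothetical accelerator leaking 100 W at an 85 degC junction:
# cooling the junction by 30 degC cuts leakage to ~12.5 W.
print(leakage_power(100.0, 85.0, 55.0))
```

Under this heuristic, a 30 °C reduction cuts leakage by a factor of eight, which is why even modest cooling gains compound at data-centre scale.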
Researchers describe prototypes capable of creating “frost in 20 seconds,” a phrase that captures how quickly these systems can bring down temperatures under controlled conditions. If scaled for industrial use, such rapid cooling could help AI accelerators, including GPUs and specialised neural-processing units, run more reliably at high loads — crucial for training and inference of large models.
Liquid-cooling systems have long been viewed as the answer to heat-dissipation challenges in data centres. The new technology has drawn attention for its speed and efficiency: it could reduce dependence on bulky cooling infrastructure and cut energy costs, which account for a significant share of AI-related operating budgets.
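The standard yardstick for that energy-cost share is PUE (power usage effectiveness), the ratio of total facility energy to IT-equipment energy. The sketch below uses hypothetical PUE values — the article gives no specific numbers — to show how a cooling improvement translates into the overhead fraction of an energy bill:

```python
def overhead_share(pue: float) -> float:
    """Fraction of total facility energy spent on overhead (cooling,
    power delivery, etc.) rather than IT load.
    PUE = total facility energy / IT equipment energy."""
    return 1.0 - 1.0 / pue

# Hypothetical before/after figures for an improved cooling plant:
for pue in (1.6, 1.2):
    print(f"PUE {pue}: {overhead_share(pue):.0%} of energy is overhead")
```

Moving a facility from a PUE of 1.6 to 1.2 in this toy calculation shrinks overhead from roughly 38% of the bill to about 17% — the kind of margin the article suggests faster, more efficient cooling could chip away at.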
Chinese R&D in semiconductor thermal management also aligns with other domestic advances in chip design and production. For example, recent breakthroughs in supercooling methods have been applied to gallium-nitride-based radar systems, technologies with uses in defence and next-generation wireless networks, suggesting broader strategic applications beyond AI computing.
Industry analysts note that while rapid cooling technologies alone won’t solve all performance bottlenecks, they could become a key piece of the broader ecosystem that supports high-density computing workloads. As China competes with the United States and other economies in AI hardware and software, improvements in thermals and efficiency may offer incremental advantages in cost-effective deployment of large-scale AI clusters.
Nevertheless, global players in AI infrastructure, from cloud providers to semiconductor makers, are also racing to deploy their own cooling innovations. The balance of power in AI performance will likely depend on the interplay of chip design, software optimisation, supply-chain resilience, and supportive industrial policy, as much as on breakthroughs in any single technology category.