What Happened to Immersion Cooling? What’s Actually Practical for Data Centers

By Sukhman Sahota | January 30, 2026

Cooling is one of the defining constraints for AI data centers

Liquid cooling, once a niche solution, is now standard. NVIDIA’s Vera Rubin is fully liquid-cooled, moving beyond Grace Blackwell’s hybrid approach, and Google’s TPUs rely on liquid cooling as well, reflecting the industry-wide push to manage extreme AI heat loads.

Cold-plate liquid cooling is the clear path forward across major AI systems. Before that convergence, immersion cooling was discussed as a possible alternative. But what happened with immersion? Has it been sidelined, or is it still coming? 

The Cooling Reckoning

For years, infrastructure engineers treated cooling as a secondary design consideration, focusing first on chip and server performance. Back then, typical rack densities hovered around 10 kilowatts (kW), a load easily handled by conventional air cooling. Today, those numbers have soared.

“Now, if you start putting together racks that are 100 to 150 kW, 600 kW to 1 megawatt, powering those racks is the first challenge,” Cosimo Pecchioli, co-leader of Open Compute Project’s (OCP) Cooling Environment Project, said. “The second challenge is to remove that power.” 

As air cooling approached its physical limits, chip designers leaned into liquid cooling to support increasingly dense, GPU-driven systems. 
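
The arithmetic behind that limit is simple. Heat removed by a moving fluid scales as Q = ṁ·c_p·ΔT, and air has very little thermal mass to offer. A rough sketch of the airflow required at different rack densities (the rack powers and the 20 °C temperature rise below are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope: airflow needed to remove rack heat via Q = m_dot * c_p * dT.
# All input values are illustrative assumptions.

AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature
AIR_CP = 1005.0     # J/(kg*K), specific heat of air
DELTA_T = 20.0      # K, assumed inlet-to-outlet air temperature rise

def airflow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry away rack_kw of heat."""
    mass_flow = rack_kw * 1000.0 / (AIR_CP * DELTA_T)  # kg/s
    return mass_flow / AIR_DENSITY

for kw in (10, 100, 150):
    flow = airflow_m3_per_s(kw)
    print(f"{kw:>3} kW rack -> {flow:5.2f} m^3/s ({flow * 2118.88:,.0f} CFM)")
```

A 10 kW rack needs under 1,000 CFM, well within reach of server fans. At 150 kW, the same assumptions demand on the order of 13,000 CFM through a single rack, where fan power, acoustics, and airflow delivery all become impractical.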

Why Cold Plates Became the Default

While liquid cooling itself is not new, deploying it across hundreds of data centers is, and doing so introduces challenges that go far beyond cooling a single system. “Cooling 150 data centers spread across four continents speaking 50 languages with supply chain, customers, quality control, and maintenance requirements — it’s a completely different industry,” Pecchioli said.

Direct-to-chip cold plates have emerged as the dominant approach for many next-generation designs. This technology addresses thermal limits by placing a liquid-cooled plate directly on the GPU, allowing heat to be carried away by circulated fluids rather than the surrounding air. 
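
Water-based coolants carry roughly 3,500 times more heat per unit volume than air, so the flows involved at the chip are modest. A minimal sketch of per-GPU loop sizing, assuming an illustrative 1,000 W GPU and a 10 °C coolant temperature rise (neither figure comes from the article):

```python
# Rough sizing of cold-plate coolant flow for a single GPU.
# Illustrative assumptions: 1,000 W GPU, 10 C coolant temperature rise,
# water-like coolant properties.

GPU_WATTS = 1000.0
COOLANT_CP = 4186.0       # J/(kg*K), specific heat of water
COOLANT_DENSITY = 1000.0  # kg/m^3

DELTA_T = 10.0            # K, assumed coolant temperature rise across the plate

mass_flow = GPU_WATTS / (COOLANT_CP * DELTA_T)            # kg/s
liters_per_min = mass_flow / COOLANT_DENSITY * 1000 * 60  # m^3/s -> L/min

print(f"~{liters_per_min:.1f} L/min per GPU")  # ~1.4 L/min under these assumptions
```

Under these assumptions, a kilowatt-class GPU needs only about 1.4 liters of coolant per minute, which is why a single plumbed loop can serve heat loads that would overwhelm any fan.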

According to Pecchioli, a broad supply chain and industrial ecosystem have already formed around cold-plate liquid cooling, reflecting long-term commitment across the industry.  

“The whole industry has been created and is evolving as we speak,” Pecchioli said. “For this technology, I don’t see companies scrapping all these investments in five years to completely change the design. I don’t see it.” 

The momentum around cold-plate systems influences how alternatives like immersion cooling are considered, with some viewing it as a larger operational change rather than a near-term replacement.  

The Immersion Question

Instead of routing liquid through plates and pipes, full immersion submerges entire server trays in dielectric fluid, promising simplicity, efficiency, and near-total heat removal. But for Pecchioli, the performance gains are not obvious enough to justify a major switch.

“I don’t see that (full immersion cooling) happening,” Pecchioli said. Modern cold-plate systems already remove nearly all the heat that immersion is designed to address. 

According to a report from the Western Cooling Efficiency Center at the University of California, Davis (UC Davis), immersion does offer clear advantages. Full immersion systems can achieve a low partial power usage effectiveness (pPUE) and improve reliability by protecting components from dust and vibration. Two-phase immersion, in particular, benefits from highly efficient heat transfer through phase change.
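
Partial PUE captures the overhead of a single subsystem, here cooling: the IT load plus the subsystem’s own draw, divided by the IT load, so lower is better and 1.0 is the ideal. A minimal sketch of the calculation (the loads below are invented for illustration, not taken from the UC Davis report):

```python
# Partial PUE for a cooling subsystem: (IT power + cooling power) / IT power.
# The loads below are illustrative only.

def partial_pue(it_kw: float, cooling_kw: float) -> float:
    """pPUE of a cooling subsystem serving it_kw of IT load."""
    return (it_kw + cooling_kw) / it_kw

# Hypothetical comparison for a 1 MW IT load:
print(partial_pue(it_kw=1000, cooling_kw=400))  # 1.40 - heavy overhead, air-style (illustrative)
print(partial_pue(it_kw=1000, cooling_kw=50))   # 1.05 - the low overhead immersion targets (illustrative)
```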

Immersion cooling is not dismissed because it fails; it is questioned because it may not be enough to challenge the widespread adoption of cold plates. “I’m not saying that you cannot make (immersion cooling) work, but we know cold plates work,” Pecchioli added. 

Yet the report also notes that current single-phase immersion methods are insufficient to dissipate the heat generated by the hottest components, such as GPUs.

Hybrid Cooling: Compromise or Complexity?

Hybrid cooling offers a middle ground between cold plates and full immersion. “Cold plates make sense for high-power dissipating components like GPUs. However, there are several other components in the servers that dissipate heat,” UC Davis professor and researcher Vinod Narayanan said.

This is where immersion can play a role. Components such as memory, hard drives, digital processing boards, and voltage regulators can benefit from hybrid cooling. 

Narayanan notes that an alternative — placing cold plates on all of these smaller components — is expensive and cumbersome. “It also requires a custom setup for each server, since coolant tubes and hoses need to be redesigned each time. Hybrid schemes, whether using air or immersion, can help efficiently dissipate heat from the rest of the server,” Narayanan said.
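
To make the split concrete, here is an illustrative heat budget for a single server (every wattage below is an assumption for the sketch, not a measurement from the article):

```python
# Illustrative hybrid-cooling heat budget for one server (all wattages assumed).
server_heat_w = {
    "gpus": 5600,               # handled by cold plates
    "cpu": 350,
    "memory": 240,
    "voltage_regulators": 300,  # power-conversion losses
    "drives_and_nics": 110,
}

cold_plate_w = server_heat_w["gpus"]
residual_w = sum(server_heat_w.values()) - cold_plate_w
total_w = sum(server_heat_w.values())

print(f"Cold plates: {cold_plate_w} W ({cold_plate_w / total_w:.0%})")
print(f"Residual (immersion or air): {residual_w} W ({residual_w / total_w:.0%})")
```

Even in this made-up breakdown, the non-GPU components account for roughly 15% of the heat; that residual load is what hybrid schemes are meant to soak up.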

Even so, that flexibility comes at a cost. “Why do you want to have two different technologies with all the two different challenges at the same time if you can accomplish the same with one technology?” Pecchioli asked. 

Standards Lag Behind Reality

The main issue with all cooling methods is the lack of guidelines for an industry that has already taken off. “There are trade-offs to be made, and one of the key things that would be nice to figure out is some kind of recipe or drop-down menu for decision makers to say ‘this is my current data center, my constraints and compute needs — what’s the best cooling strategy?’” Narayanan said. 
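
No such recipe exists today. As a thought experiment only, the kind of decision menu Narayanan describes might start as simply as this (every threshold below is invented for illustration, not an industry guideline):

```python
# Hypothetical sketch of the "drop-down menu" Narayanan describes.
# All thresholds are invented for illustration; no industry standard exists yet.

def cooling_strategy(rack_kw: float, retrofit: bool) -> str:
    """Suggest a cooling approach from rack density and facility type."""
    if rack_kw <= 15:
        return "conventional air cooling"
    if rack_kw <= 40:
        # Retrofits may prefer augmenting air before re-plumbing the facility.
        return "air with rear-door heat exchangers" if retrofit else "direct-to-chip cold plates"
    # Beyond roughly 40 kW per rack, liquid to the chip is hard to avoid.
    return "direct-to-chip cold plates (hybrid immersion or air for residual heat)"

print(cooling_strategy(rack_kw=12, retrofit=True))
print(cooling_strategy(rack_kw=130, retrofit=False))
```

A real version would also weigh water availability, climate, fluid logistics, and maintenance skills, which is exactly the multi-constraint trade-off Narayanan says the industry still lacks a shared framework for.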

This complexity points to a bigger challenge: without standards, even large companies risk creating incompatible ecosystems. “No matter how large your company is, you cannot solve this problem alone,” Pecchioli said. “There are no standards for anything.” 

Organizations like OCP help fill the gap: through its Cooling Environment Project, OCP builds collaborative public guidelines that enable industry-scale operations and help the supply chain avoid fragmentation.

“10 years from now, the industry will look very different, and there will be much more agreement and standard that if you buy a component…you are sure that it’s going to be compatible,” Pecchioli said. 

Where Do We Land?

The rapid growth of AI workloads has pushed rack densities far beyond the limits of air cooling, with deployments now reaching 100 to 150 kW per rack and making liquid cooling an operational necessity. Direct-to-chip cold-plate cooling has become the dominant approach thanks to its efficiency, scalability, and mature supply chain, while immersion and hybrid approaches show promise but remain niche, offering limited gains at higher complexity and up-front cost.

Another key factor is whether a data center is new or existing. Retrofitting legacy facilities for liquid cooling is costly and complex, while greenfield builds can be designed from the ground up to support higher densities, improved efficiency, and lower long-term risk. As industry standards trail real-world deployments, efforts like OCP’s Cooling Environment Project will help guide cooling decisions and align best practices across the ecosystem.
