It’s easy to be astonished by how fast AI has progressed. But industry insiders are equally amazed by the pace at which the infrastructure underlying artificial intelligence has developed – and the surge in power demand that comes with it. Liquid cooling could be the answer.
Even the simplest prompt triggers a cascade of computation and data transfer. Every link in that chain consumes electricity, and much of it is funnelled into power-hungry NVIDIA GPUs.
Lower-powered alternatives are emerging, but the NVIDIA ecosystem still dominates, dictating the thermal profile of modern data centres. Without advanced cooling, GPUs can’t hit peak performance or density.
The International Energy Agency estimates global data centre energy consumption will near 1,000TWh by 2030, more than doubling the 2024 total. That’s a staggering climb, with consumption growing 12 per cent per year and now accounting for 1.5 per cent of global electricity consumption.
But compute is only part of the equation. Every kilowatt powering a chip creates heat. According to ABI Research, 37 per cent of the energy used in data centres goes straight to cooling.
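Those two figures can be sanity-checked with simple arithmetic: 12 per cent annual growth compounds to roughly a doubling over the six years to 2030, and the ABI figure means more than a third of every megawatt a facility draws goes to heat removal rather than compute. A minimal sketch (the 100MW facility size is an illustrative assumption, not a figure from this article):

```python
# Sanity check of the growth and cooling figures quoted above (illustrative only).

GROWTH_RATE = 0.12   # ~12% annual growth in data-centre energy consumption
YEARS = 2030 - 2024  # horizon of the IEA projection

growth_factor = (1 + GROWTH_RATE) ** YEARS
print(f"Compound growth over {YEARS} years at 12%/yr: x{growth_factor:.2f}")
# -> x1.97, i.e. close to a doubling of the 2024 total

COOLING_SHARE = 0.37  # ABI Research: share of facility energy spent on cooling
facility_mw = 100.0   # hypothetical facility size, for illustration
cooling_mw = facility_mw * COOLING_SHARE
print(f"Of a {facility_mw:.0f}MW facility, {cooling_mw:.0f}MW goes to cooling")
# -> 37MW of a 100MW facility
```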
A 1MW facility was once a flagship. Today, hyperscalers are designing data centres in the hundreds of megawatts – and NVIDIA is targeting 1MW per rack by 2027. Meanwhile ABI predicts the number of public data centres will quadruple by 2030.
That’s not just a growth curve – it’s a pressure cooker, and thermal management will define who can scale, who can sustain, and who can lead. While operators can count on vendors to keep delivering better, more efficient compute, the same can’t be said for cooling. Traditional air cooling is commoditised and incapable of handling the heat densities coming with the next wave of AI infrastructure.
The industry is entering a new phase – one where cooling isn’t just a backend necessity, but a strategic differentiator. Liquid cooling is the answer. The Uptime Institute reports that 22 per cent of organisations are already using some form of direct liquid cooling (DLC).
Liquid cooling is no longer exotic, but it’s still largely custom, especially outside GPU farms and hyperscale environments.
Liquid cooling for all?
What will it take to make liquid the default?
When it comes to servers, storage, or network infrastructure, operators expect easy integration. Cooling should be no different. Whether designing in or retrofitting, liquid cooling must become predictable, repeatable, and scalable.
It’s not just about day one. If every cooling system in a data centre requires custom design and management, operators can’t scale with AI. Cooling must move at the pace of compute.
It must also be easy to service. From hyperscalers supporting global SaaS platforms to enterprise data centres backing up financial services, downtime is unacceptable.
Cooling, platformised
At LiquidStack, we’ve built our approach around these needs. We started with two-phase liquid immersion – arguably the most demanding form of thermal management. We’ve since expanded to cover the full spectrum of liquid cooling needs, with a major focus on coolant distribution units (CDUs) for DLC systems.
Our latest solution, the GigaModular CDU, is built for scale. It’s a single-phase DLC platform that scales from 2.5MW to 10MW, with centralised control and modular pump architecture. Everything is accessible from the front, making service simple and placement flexible.
Operators see a 25 per cent saving in capex and floorspace, a critical advantage when deploying rapidly or retrofitting legacy environments. And our “pay-as-you-grow” model helps align capital flows with capacity expansion.
But scale doesn’t stop at the rack. We’ve built resilience into our ecosystem, too – because global operators can’t wait on a supply chain.
We currently operate two factories in the US and are actively expanding our manufacturing footprint globally. Our global service network ensures consistent SLAs worldwide.
Operators can’t afford to slow down – and they can’t build past their cooling capacity.
LiquidStack delivers cooling as a platform – scalable, serviceable, and globally deployable – just like the other critical infrastructure in the data centre.