Unlock 50 kW Cooling Per Rack Without Hotspots and Downtime

 

Nowadays, as terms such as Artificial Intelligence, Machine Learning and Automation become part of our everyday lexicon, the reliance on high-volume data processing is higher than ever before. And what is waiting for us next? An even higher reliance on data-intensive applications. As systems grow more complex and adoption accelerates, workloads are reaching levels that were hardly imaginable before. As Moises Levy (Managing Director of Research and Market Intelligence at DCD) illustrates with generative AI workloads: “OpenAI's GPT-2 model, released in 2019, ranged from 117 million to 1.5 billion parameters. GPT-3, released in 2020, contained 175 billion parameters, while GPT-4, introduced in 2023, is estimated to have around 500 billion parameters”. A key driver of such high-performance AI workloads is the GPU server, with a single system drawing approximately 10.2 kW. When deploying multiple systems within one rack, it is not uncommon to exceed 20, 30, or even 50 kilowatts of IT load, far beyond the cooling capacities envisioned when traditional room-based systems were originally designed. So how do modern data centers respond to these escalating computing demands?
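
As a rough back-of-the-envelope illustration of these densities (a minimal sketch using the approximately 10.2 kW per system figure cited above and assumed rack configurations, not measured data):

```python
# Back-of-the-envelope rack IT load, based on the ~10.2 kW per GPU system figure cited above.
SYSTEM_POWER_KW = 10.2  # assumed draw of one high-end GPU system

for systems_per_rack in (2, 3, 4, 5):
    rack_load_kw = systems_per_rack * SYSTEM_POWER_KW
    print(f"{systems_per_rack} systems per rack -> ~{rack_load_kw:.1f} kW IT load")
```

Four to five such systems are already enough to push a single rack past the 40 to 50 kW mark referenced above.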

 

An assessment of current solutions makes it evident that air, as a cooling medium, has a limited heat removal capacity that falls short of today's cooling demands, especially at scale. Even when supply air temperatures are lowered and airflow rates are increased, traditional systems quickly reach a point of diminishing returns. Delivering more cold air into the room escalates energy consumption and operational costs, and it still does not ensure effective heat removal at the rack level, where it is needed most. The result is the formation of thermal hotspots and an increased risk of downtime. Some facilities deploy hot-aisle or cold-aisle containment systems to isolate supply and exhaust air, which can improve cooling efficiency. While effective to a degree, these solutions add physical infrastructure, reduce layout flexibility, and complicate future expansion.
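
To see why air runs out of headroom, consider the sensible heat relation (heat load = mass flow × specific heat × temperature rise). The sketch below is a minimal, illustrative calculation using textbook air properties and an assumed 12 K air-side temperature rise, not STULZ design data:

```python
# Rough sensible-heat estimate: how much air it takes to carry a given IT load.
# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT); volume flow = m_dot / rho.
CP_AIR = 1.006   # kJ/(kg*K), specific heat of air near room temperature
RHO_AIR = 1.2    # kg/m^3, approximate air density
DELTA_T = 12.0   # K, assumed air temperature rise across the rack

for load_kw in (10, 30, 50):
    m_dot = load_kw / (CP_AIR * DELTA_T)   # kg/s of air
    flow_m3h = m_dot / RHO_AIR * 3600      # m^3/h of air
    print(f"{load_kw} kW -> ~{flow_m3h:,.0f} m^3/h of air")
```

At 50 kW per rack this already implies well over 12,000 m³/h of air for a single rack, which is why lowering supply temperatures or raising fan speeds quickly runs into the diminishing returns described above.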

 

An alternative lies in liquid cooling technologies, such as direct-to-chip or immersion cooling, which offer significantly higher heat removal capacity than air-based methods. However, these come with considerable barriers to adoption. The systems often require extensive infrastructure changes, such as custom server designs, piping, fluid handling systems, and leak detection protocols. This translates into high capital expenditure and operational complexity, especially when liquid is routed directly to critical IT components. As a result, liquid cooling adoption remains niche, mostly confined to AI, HPC, or hyperscale facilities. According to the Uptime Institute's 2024 Cooling Systems Survey, only 22% of data centers have implemented liquid cooling. While interest and necessity are growing, large-scale adoption is still constrained by practical, financial, and risk-related concerns.

 

In a world where every kilowatt matters, relying on room-level cooling alone is no longer sufficient, and liquid cooling options remain limited and in an early stage of adoption. So what options are left for data centers to meet rising cooling requirements? Rear door heat exchangers present a highly effective solution, bridging the gap between traditional air and liquid cooling technologies. By integrating a chilled-water-cooled heat exchanger into the rear door of the rack, the system removes heat directly at the source and prevents the formation of hotspots. In contrast to liquid cooling solutions, rear doors allow seamless retrofit integration with any existing rack, thanks to individual adapter frames and a space-saving installation on the back of the units. For modern data centers seeking to keep up with industry requirements, rear door cooling is not only the most pragmatic solution but also a strategic one.
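
The same sensible heat relation shows why a chilled-water rear door copes with loads that overwhelm room air alone: water carries far more heat per unit volume. The sketch below is illustrative only, assuming a 6 K water-side and 12 K air-side temperature rise rather than any specific STULZ specification:

```python
# Illustrative comparison: volume flow of water vs. air needed to carry 50 kW.
# Assumed values, not product specifications.
LOAD_KW = 50.0
CP_WATER, RHO_WATER = 4.186, 998.0   # kJ/(kg*K), kg/m^3
CP_AIR, RHO_AIR = 1.006, 1.2         # kJ/(kg*K), kg/m^3
DT_WATER, DT_AIR = 6.0, 12.0         # K, assumed temperature rises

water_m3h = LOAD_KW / (CP_WATER * DT_WATER) / RHO_WATER * 3600
air_m3h = LOAD_KW / (CP_AIR * DT_AIR) / RHO_AIR * 3600
print(f"~{water_m3h:.1f} m^3/h of water vs. ~{air_m3h:,.0f} m^3/h of air for {LOAD_KW:.0f} kW")
```

A few cubic metres of water per hour through the rear door can therefore do the work of thousands of cubic metres of room air, which is what allows the heat exchanger to remove the load directly at the rack.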

 
The STULZ Active Rear Door Cooling (ARDC)* is specifically designed to meet the evolving cooling demands of data centers. Each unit is equipped with a range of advanced components that ensure optimal performance and energy efficiency. It can operate autonomously using chilled water, supplement existing precision air conditioning, or increase existing rack-level cooling capacity up to 50 kW. This flexibility makes it ideal for both new and retrofit deployments, supporting high-density computing as needs grow. The maintenance-friendly design and easy installation enable rapid deployment without structural changes to the data center. In addition, an integrated programmable controller with a built-in display enables intuitive, real-time monitoring and streamlined system management.

 

In conclusion, while traditional air cooling methods are becoming less effective for high-density computing and liquid cooling still faces multiple adoption challenges, rear door heat exchangers provide a practical and efficient alternative. Whether used for autonomous chilled water cooling, supplementing precision air conditioning, or increasing existing cooling capacity, the solution offers a balance of performance, cost, and ease of implementation.

*Active Rear Door Cooling (ARDC) is exclusively available in the APAC region and is manufactured at the STULZ India factory.

STULZ Featured Products

Active Rear Door Cooling

Heat exchanger door with EC fans