Small leaks, large consequences
How minor drips become major outages in data centres
Water leaks rarely surprise anyone in theory. Most people have dealt with one at home: a dripping pipe under the sink or a puddle next to the washing machine. In a data centre, the same principle applies, but the consequences are on an entirely different scale, and in practice water and other liquid leaks cause some of the most expensive outages in the industry. In late November 2024, a 4-inch chilled-water pipe burst inside a server room at Stanford University's Joint Science Operations Center (JSOC), the facility responsible for processing and distributing data from NASA's spacecraft. Water sprayed from floor to ceiling, damaging roughly 20% of the computer systems in the lab and taking data processing for three scientific instruments offline. Recovery extended into early 2025, with the first stage of the pipeline taking nearly a month to restart. The spacecraft themselves remained operational throughout; it was the ground infrastructure that failed. And the root cause was not a system collapse or a cyber incident but a single mechanical pipe fault. The water it released, spreading through the facility before any response, turned a brief failure into weeks of disrupted data access, recovery costs, and reputational damage that is difficult to quantify.
Where water travels before anyone sees it
Unlike a power failure, which triggers immediate alarms and shuts systems down in seconds, a water leak gives little warning. It drips and flows into floor voids, cable trays, and along structural surfaces long before it appears on any monitoring dashboard. By the time a visible sign shows at the surface or during a routine inspection, the leak has already been spreading for some time. In a facility where a mid-size or large operator can accumulate $300,000 in downtime costs per hour (ITIC, 2024 Hourly Cost of Downtime Survey), the window between the start of a leak and the first response matters. Cooling is the second most common cause of data centre outages according to Uptime Institute's 2025 Annual Outage Analysis, and the same research found that 80% of serious outages could have been prevented. Understanding where leaks form and how they spread is where effective prevention begins.
Where data centres leak
Data centre leak sources fall into three broad categories: cooling and IT infrastructure, building systems, and power areas. Each carries distinct risk profiles, and most facilities have exposure across all three.
I. Cooling and IT infrastructure
Chilled water and refrigerant piping sit under continuous pressure, and joints, valves, and fittings are where failures most often concentrate. Pipe sweating is a particular issue in humid climates, introducing moisture into areas that look completely dry at floor level. Cooling equipment such as CRAC, CRAH, FWU and CDU units brings additional risk at every coil connection and condensate point, where small drips from fittings or blocked internal drain paths can go undetected during regular operations. With the widespread use of direct-to-chip liquid cooling, server racks add a growing set of failure points: quick-disconnect couplings, manifold connections, and cold plates among them. Unlike a leaking pipe, which tends to drip from a fixed point, a loose coupling in a high-density rack can release coolant directly onto active hardware before it ever reaches the floor.
II. Building systems
Sprinkler and overhead piping introduce a risk that is easy to overlook precisely because it is unrelated to the cooling infrastructure. A corroded joint or a failed fitting above the server hall can deliver large volumes of water from above with very little warning. Condensate drain lines from air conditioning units present a more constant concern, as these generate moisture continuously during normal operation, and a blocked drain overflows gradually, often beneath a raised floor, until a technician notices dampness during routine maintenance. Foundation and floor seepage is the final source in this category, particularly relevant to basement-level facilities where groundwater can infiltrate through floor slabs seasonally, going unnoticed until moisture eventually reaches electrical infrastructure.
III. Auxiliary and power areas
Battery and UPS rooms are among the most underestimated leak sources in a data centre. Flooded-cell batteries under overcharge conditions can release electrolyte fluid that is both corrosive and conductive, capable of damaging equipment and attacking building fabric at the same time. Because UPS rooms are typically accessed only during scheduled maintenance, a slow release can go undetected for hours before it causes visible damage. Diesel generator areas introduce a different fluid type altogether: fuel lines, tank connections, and overflow points are all potential hydrocarbon leak sources, and these areas are frequently located at the perimeter of the facility, outside the primary monitoring coverage.
The liquid cooling multiplier
The volume of liquid circulating through a typical data centre is growing, with the liquid cooling market projected to expand five-fold by 2035 (Global Market Insights, 2026). This trend is driven by AI workloads that demand rack densities above 60 kW, the point at which air cooling reaches its practical limits. A facility that ran entirely on air cooling five years ago may now have CDU loops, cold plates, and manifold systems installed across multiple rows, each adding connection points that didn't exist before. The failure modes introduced by direct-to-chip cooling are also different from those in a chilled water loop. With more connection points distributed across the facility, periodic inspection alone can no longer cover the full risk surface; continuous monitoring becomes the more practical and reliable approach.
The Stanford JSOC incident illustrates what water damage at that scale demands: full equipment replacement, with a recovery timeline to match. Early detection won't prevent every leak, but it significantly shortens the window between the start of a leak and the moment a response team can act. That gap, between the first drip and the response, is where the cost accumulates.
How leak detection works
Modern leak detection systems use a continuous sensing cable routed through at-risk zones across the facility: under raised floors, adjacent to pipework, inside equipment rooms, and along any area where liquid is present or could travel. The moment liquid contacts the cable, a controller identifies the precise location of the event and triggers audible, visual, and dry-contact relay alarms. Location data and event logs are transmitted simultaneously to BMS, DCIM, or facility management platforms via standard protocols, allowing the response to begin before anyone has reached the affected area. Sensing cables are available for different fluid types, including water, acid, hydrocarbons, and propylene glycol solutions, which matters in facilities where cooling loops, battery rooms, and generator areas each introduce a different leak chemistry. A well-designed system also includes fail-safe loopback technology, meaning detection continues even if the cable itself is physically cut or damaged. Coverage can scale from a single unit to an entire building, with zone-level sensitivity configured to the specific risk profile of each area. STULZ offers leak detection systems designed for data centres and mission-critical infrastructure across Asia-Pacific, with local engineering teams providing cable routing design, zone configuration, and integration planning tailored to each site.
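To make the localisation step concrete, the logic a controller applies can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes a simple resistance-based sensing cable where loop resistance scales with the distance to the point where liquid bridges the conductors, and all zone names, cable lengths, and resistance values are hypothetical.

```python
# Illustrative sketch of leak localisation along a sensing cable.
# Assumption: loop resistance measured at the controller grows linearly
# with distance to the wetted point. All names and values are hypothetical.

OHMS_PER_METRE = 2.8  # assumed sense-wire resistance per metre of cable

# Zones defined as (start_m, end_m) spans along the cable route
ZONES = {
    "raised-floor-row-A": (0.0, 45.0),
    "CRAH-gallery": (45.0, 80.0),
    "UPS-room": (80.0, 110.0),
}

def leak_distance(measured_ohms: float) -> float:
    """Convert measured loop resistance into a distance along the cable."""
    return measured_ohms / OHMS_PER_METRE

def locate_zone(distance_m: float) -> str:
    """Map a cable distance onto the facility zone it runs through."""
    for zone, (start, end) in ZONES.items():
        if start <= distance_m < end:
            return zone
    return "outside-mapped-zones"

def handle_reading(measured_ohms: float) -> str:
    """Produce the alarm text a controller might log and forward.

    A real system would also drive relay outputs and push the event to a
    BMS/DCIM platform; here we only build the human-readable message.
    """
    d = leak_distance(measured_ohms)
    return f"LEAK at {d:.1f} m ({locate_zone(d)})"

print(handle_reading(140.0))  # 140 ohms / 2.8 ohms-per-m = 50 m, CRAH gallery
```

The zone table is the key design element: location alone is just a number of metres, and it only becomes actionable once it is mapped to a physical area a technician can walk to, which is why zone-level configuration is part of system commissioning.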
Explore Leak Detection System
Precision water, liquid and oil detection for mission-critical infrastructure