10 December 2025

Jon Abbott, Technologies Director - Global Strategic Clients at Vertiv, looks at the cooling challenges shaping edge infrastructure.
Artificial intelligence (AI) is changing the demands placed on critical digital infrastructure. As workloads evolve from centralised cloud environments to edge locations, facilities teams face a new set of cooling challenges. These are driven by high-density equipment, space constraints, and the need for round-the-clock uptime under unpredictable loads.
Cooling has always been a critical part of data centre design. But the types of systems being deployed today require a different approach. Well-established methods often cannot manage the thermal intensity of modern hardware alone. Engineers and operators must now find smarter ways to control heat in environments that were never designed to handle high-performance computing (HPC).
AI hardware is pushing thermal boundaries
At the heart of the shift is a new class of hardware. AI models are being run on dedicated processors such as graphics processing units (GPUs), tensor processing units (TPUs), and custom-built accelerators. These chips are far more power-intensive than conventional central processing units (CPUs). In turn, they produce significantly more heat per rack.
In edge computing locations, rack densities can exceed 30 kilowatts, and in some cases 50 kilowatts or more, creating thermal loads that overwhelm established air-based cooling systems. The challenge is not just peak temperatures, but how consistently the system can manage heat spikes, maintain stable conditions, and recover from operational stress.
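To put those figures in context, a simple sensible-heat estimate shows how much air a single high-density rack would need. The sketch below uses standard textbook properties for air and an assumed 10 K inlet-to-outlet temperature rise; the rack powers are illustrative, not measured values.

```python
# Rough sensible-heat sketch: airflow needed to remove rack heat with air alone.
# Assumed values (illustrative, not from this article): air density 1.2 kg/m^3,
# specific heat 1005 J/(kg*K), and a 10 K inlet-to-outlet temperature rise.

AIR_DENSITY = 1.2   # kg/m^3, near sea level
AIR_CP = 1005.0     # J/(kg*K)

def required_airflow_m3s(rack_kw: float, delta_t_k: float = 10.0) -> float:
    """Volumetric airflow (m^3/s) needed to carry away rack_kw at a delta_t_k rise."""
    watts = rack_kw * 1000.0
    return watts / (AIR_DENSITY * AIR_CP * delta_t_k)

for kw in (10, 30, 50):
    flow = required_airflow_m3s(kw)
    print(f"{kw} kW rack -> {flow:.1f} m^3/s (~{flow * 2119:.0f} CFM)")
```

At 50 kilowatts the estimate is roughly four cubic metres of air per second through a single rack, which is far beyond what perimeter cooling and rack fans are typically designed to deliver.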
In older facilities or retrofitted sites, there may be no margin to absorb this extra heat. This forces operators to rethink both the type and the layout of cooling infrastructure.
Air cooling has limits in AI environments
Air cooling remains the most widely used thermal management method in IT environments. It is familiar, relatively easy to maintain, and cost-effective in lower-density deployments. However, as thermal output increases, air systems begin to show their limitations.
Fans can only move so much air through a rack. As equipment becomes more tightly packed, airflow paths are restricted, temperature differentials rise, and hot spots become more common. This not only puts hardware at risk of failure, data loss, reduced component lifespan, and system outages, but also reduces overall energy efficiency.
Systems must work harder to maintain reliable conditions, consuming more power and increasing operating costs.
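One reason that extra effort is so costly is the fan affinity relationship: airflow scales roughly in proportion to fan speed, while fan power scales roughly with the cube of speed. The sketch below illustrates the trade-off under that standard assumption; the percentage figures are illustrative only.

```python
# Fan affinity law sketch (a general engineering rule of thumb, not a Vertiv figure):
# airflow rises roughly linearly with fan speed, while fan power rises roughly
# with the cube of speed, so small airflow gains carry a large energy cost.

def relative_fan_power(airflow_increase_pct: float) -> float:
    """Approximate power multiplier for a given percentage increase in airflow."""
    speed_ratio = 1.0 + airflow_increase_pct / 100.0
    return speed_ratio ** 3

for pct in (10, 20, 30, 50):
    print(f"+{pct}% airflow -> ~{relative_fan_power(pct):.2f}x fan power")
```

Pushing 30 percent more air through a constrained rack can roughly double fan energy, which is why simply turning fans up is not a sustainable answer to rising density.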
In many AI use cases, particularly those requiring constant inference or processing at the edge, workloads do not follow predictable patterns. Cooling systems must therefore respond dynamically to changes in usage. Air systems, especially those relying on passive or perimeter cooling, can be too slow to react.
Liquid cooling is becoming more practical
To handle these rising demands, many operators are now turning to liquid cooling. This can take several forms, including direct-to-chip cooling, rear-door heat exchangers, and full immersion cooling. Each offers different advantages depending on the layout, density, and use case of the facility.
Liquid is more efficient at transferring heat than air. It allows systems to extract thermal energy directly at the source, rather than relying on airflow to move heat away from components. This makes it particularly useful for high-performance AI workloads that operate continuously or spike unpredictably. Liquid cooling also unlocks opportunities for waste heat reuse to support the circular economy.
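The efficiency gap comes down to basic fluid properties. Using standard textbook values for water and air (not figures from this article), a quick comparison of volumetric heat capacity shows why a liquid loop can carry the same heat through a far smaller flow path:

```python
# Sketch of why liquid moves heat more effectively than air, per unit volume.
# Standard textbook properties (assumptions, not article figures): water
# ~1000 kg/m^3 and ~4186 J/(kg*K); air ~1.2 kg/m^3 and ~1005 J/(kg*K).

WATER = {"density": 1000.0, "cp": 4186.0}
AIR = {"density": 1.2, "cp": 1005.0}

def volumetric_heat_capacity(fluid: dict) -> float:
    """Heat carried per cubic metre per kelvin of temperature rise, in J/(m^3*K)."""
    return fluid["density"] * fluid["cp"]

ratio = volumetric_heat_capacity(WATER) / volumetric_heat_capacity(AIR)
print(f"Water carries ~{ratio:.0f}x more heat than air per unit volume per degree")
```

With roughly three and a half thousand times the heat-carrying capacity per unit volume, a modest coolant flow at the chip can do the work of enormous volumes of air.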
Until recently, liquid cooling was mostly used in research settings or very large-scale data centres. But as AI becomes more mainstream, the business case for using liquid in smaller or distributed facilities is gaining ground. Advances in reliability, leak detection, and ease of installation have helped reduce perceived barriers to adoption.
Space constraints add complexity
Cooling at the edge presents specific challenges. Many edge facilities are housed in small, non-specialist buildings. They may occupy a corner of an office, a back room of a retail site, or a cabinet installed in an industrial environment. These locations often have limited access to utility infrastructure and physical space.
In such environments, deploying large-scale air handling systems is impractical. Liquid cooling, particularly in closed loop or compact designs, offers a viable alternative. Systems can be tailored to the site, with smaller footprints and more efficient thermal performance.
However, the design must account for environmental factors. Some edge sites are exposed to extreme ambient temperatures, vibration, or variable humidity. Cooling systems must be built with appropriate tolerances and protections, while still delivering consistent performance.
Energy use is under scrutiny
Cooling systems have long been among the largest contributors to energy use in data centre environments. As AI deployments expand and edge sites multiply, energy efficiency has become an even greater priority.
Regulations are tightening across Europe and other markets. Operators are now expected to meet measurable energy efficiency targets. Power usage effectiveness (PUE) benchmarks are being applied to smaller sites, not just hyperscale facilities.
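PUE itself is a simple ratio: total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean no cooling or power-distribution overhead at all. The sketch below shows the calculation with invented figures for a hypothetical edge site.

```python
# Minimal PUE sketch. The input figures are invented for illustration only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT energy (>= 1.0)."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical edge site: 120 MWh total annual consumption, of which
# 80 MWh powers the IT hardware itself.
print(f"PUE = {pue(120_000, 80_000):.2f}")   # 1.50; cooling and overheads use the rest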
To improve performance, cooling systems must be integrated into wider energy management strategies. This includes the use of intelligent control systems, thermal analytics, and dynamic load balancing. In some cases, operators are also exploring waste heat reuse, using the output from cooling systems to provide heat to nearby buildings or facilities.
Maintenance and reliability remain critical
Cooling infrastructure must not only perform well, but also remain serviceable. At the edge, access to skilled technicians may be limited. Systems need to be easy to monitor, diagnose, and repair remotely where possible.
Designers should prioritise modularity and fault tolerance. If one component fails, the system should continue operating without service interruption. Remote sensors, predictive maintenance tools, and integration with facility monitoring platforms can all help reduce the burden on service teams.
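As a simple illustration of what remote, predictive monitoring can look like in practice, the hypothetical sketch below flags a coolant loop whose supply temperature drifts above its recent baseline, so a technician can be dispatched before a hard failure. The sensor values and thresholds are invented for the example.

```python
# Hypothetical remote-monitoring sketch: alert when a loop's supply temperature
# drifts above a rolling baseline. Window size and drift limit are assumptions.

from collections import deque
from statistics import mean

class LoopMonitor:
    def __init__(self, window: int = 60, drift_limit_c: float = 3.0):
        self.readings = deque(maxlen=window)   # recent supply temperatures (deg C)
        self.drift_limit_c = drift_limit_c

    def check(self, supply_temp_c: float) -> bool:
        """Return True if the latest reading drifts above the rolling baseline."""
        alert = bool(self.readings) and supply_temp_c > mean(self.readings) + self.drift_limit_c
        self.readings.append(supply_temp_c)
        return alert

monitor = LoopMonitor()
for temp in [18.0, 18.2, 18.1, 18.3, 22.5]:   # last reading simulates a drift
    if monitor.check(temp):
        print(f"Alert: supply temperature {temp} C is above the recent baseline")
```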
In addition, the materials and components used must match the physical realities of the site. Cooling systems operating in outdoor enclosures or industrial zones must withstand dust, vibration, and fluctuations in ambient conditions.
Collaboration across disciplines is essential
Effective cooling design cannot be separated from the broader critical digital infrastructure. Power, cabling, network layout, and application requirements all affect thermal load and airflow.
Too often, cooling is treated as a standalone problem. In practice, it needs to be coordinated with IT, network, and facility teams from the outset. Equipment placement, airflow planning, cable management, and even lighting can all influence cooling efficiency.
The earlier these conversations happen, the easier it is to optimise the site without rework or downtime. As AI deployments grow more complex, this kind of cross-functional collaboration is becoming a competitive advantage.
Planning for change
AI infrastructure does not stand still. Workloads shift, models evolve, and usage patterns change. Cooling systems must be able to adapt. Designing for static loads or fixed rack layouts increases the risk of obsolescence.
Engineers should plan for future growth by building flexibility into the system. That might mean reserving space for additional cooling capacity, designing with scalable loops, or implementing smart controls that adapt to changing scenarios.
Choosing equipment that can be upgraded or replaced without full system shutdown is also a valuable consideration. In AI-driven environments, downtime can have a knock-on effect on operations, analytics, and customer-facing services.
AI has introduced a new level of complexity to critical digital infrastructure. Nowhere is that more visible than in the cooling systems that keep edge environments operational. Rising densities, unpredictable loads, and constrained spaces are challenging traditional methods and driving innovation.