Last week, the Crestchic team exhibited at the DataCloud event in Cannes. With over 3,500 attendees from across the data centre sector, seminars from industry experts, and a fantastic networking event the night before, the team immersed themselves in the industry. Here, Dawn Craft, Business Development Manager at Crestchic, shares her key takeaways from the event.

AI and the need to scale up  

Unsurprisingly, one of the big conversation points at this year’s event was the growth of AI. Global data centre power demand is projected to increase by 165% by 2030, with AI data centres accounting for 75% of that demand. And while the number of data centres continues to rise, rack density was the topic of the moment.

Hyperscale data centre operators are already talking of capacities of 600 kW per rack – yet there is a gap between what is currently being manufactured and the densities that hyperscalers and AI developers are yearning for. Today’s typical infrastructure tops out between 50 and 120 kW per rack. While the likes of NVIDIA are projecting 600 kW to 1 MW racks by 2027-2030, these will require next-generation electrical and cooling infrastructure that is not yet mass-produced.

Hyperscale Datacentres: Rethinking redundancy  

As the AI revolution pushes rack densities to 600 kW and beyond, the challenge isn’t just generating enough power – it’s also delivering it resiliently and efficiently. Traditional redundancy models (2N, N+1) that once defined Tier IV excellence are being re-evaluated. At this year’s event, one hot topic was whether shared redundancy could be the sector’s answer to balance uptime with smarter power use.  

In practical terms, 2N means fully mirrored systems. If you need 50 MW, you build 100 MW. While this ensures uptime, it also means only 50% utilisation of installed capacity. This can be expensive and inefficient, especially if we imagine an era of 1 MW+ racks.  

Instead, the industry is considering the concept of shared or distributed redundancy. In practice, this could take a variety of configurations, such as a 2 MW requirement fed by 3 x 1 MW feeds (66% utilisation), or a 3 MW requirement fed by 4 x 1 MW feeds (75% utilisation). This would allow sites to preserve uptime while carrying fewer unused assets.
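
To make the arithmetic concrete, here is a minimal sketch (illustrative only, not a Crestchic tool; it assumes identical feeds and a requirement to survive the loss of any single feed) showing how the utilisation figures above fall out:

    def utilisation(required_mw, feed_mw, feeds):
        """Fraction of installed capacity doing useful work."""
        # The scheme only counts as redundant if the remaining feeds
        # can still carry the full requirement after one feed fails.
        if (feeds - 1) * feed_mw < required_mw:
            raise ValueError("cannot survive the loss of a single feed")
        return required_mw / (feed_mw * feeds)

    print(utilisation(50, 50, 2))  # 2N: build 100 MW for 50 MW -> 0.50
    print(utilisation(2, 1, 3))    # 3 x 1 MW feeds for 2 MW -> ~0.67
    print(utilisation(3, 1, 4))    # 4 x 1 MW feeds for 3 MW -> 0.75

The pattern generalises: for an N+1 scheme built from identical feeds sized to the requirement, utilisation is N/(N+1), which is why larger shared pools become increasingly attractive as rack densities climb.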

In summary, shared redundancy challenges the assumption that resilience must mean duplication. The industry isn’t abandoning resilience; it is refining it. As AI workloads soar and sustainability pressures mount, shared redundancy offers a compromise between uptime and efficiency. Loadbanks will play a crucial role here, enabling operators to test and validate these more complex redundancy schemes under real-world conditions.  

Hyperscale Datacentres: Liquid cooling – evolution, not replacement

Liquid cooling was, as expected, a significant discussion point at the event. The consensus wasn’t that liquid cooling would completely replace air cooling in data centres, but that it would become an essential part of the overall cooling mix. 

AI workloads undoubtedly drive the need for high-density racks requiring more efficient thermal management. However, not every rack in a data centre will run AI or high-performance computing workloads. Many enterprise and cloud workloads will continue to rely on more traditional, lower-density servers.

Discussions focused on a hybrid approach – including how emerging liquid cooling technologies, such as direct-to-chip, immersion, and rear-door heat exchangers, will likely be used for high-density, AI-focused racks. However, that doesn’t mean air cooling will become redundant. In fact, for standard IT loads, air cooling is here to stay. Even within AI environments, air-cooled CPUs/GPUs and air-cooled CDUs (coolant distribution units) are expected to remain in use for many applications where thermal loads are manageable without liquid.

Loadbanks are increasingly being used to test these hybrid cooling setups before deployment, simulating high-density heat loads to ensure both liquid and air systems can operate effectively in tandem. This kind of thermal commissioning is becoming critical as cooling architectures grow more complex. 

Hyperscale Datacentres: Resilient by design – the future of backup power

DataCloud’s energy-focused seminars highlighted the urgent need for power resilience, grid integration, and sustainability. Across the sector, operators are exploring a more diverse mix of alternative power setups designed to balance uptime with environmental performance. While backup power remains vital to data centre operations, there is growing momentum to move beyond traditional diesel generators towards cleaner options, including gas generators (often hydrogen-compatible), long-duration energy storage, high-density batteries, microgrids, and onsite renewables such as solar and wind. 

As these systems become more complex and interdependent, it becomes even more critical to test power infrastructure thoroughly – not only at the commissioning stage, but also as part of ongoing validation of backup systems. Whether switching between battery storage, gas generators, or solar-fed systems, the ability to ensure seamless power continuity under real-world conditions is essential for operational confidence and regulatory compliance, especially as the world becomes increasingly reliant on the sector.  

To get in touch with one of our team, click here to fill in a request form.