
Nvidia's GPUs are the main foundation for AI workloads and consume large amounts of electricity, requiring more efficient cooling. Amazon said it has designed a new solution called the In-Row Heat Exchanger (IRHX), which can be integrated into both current and future data centers without a system overhaul.
"Current liquid cooling solutions take up too much space and use too much water, which is not suitable for our scale," said Dave Brown, VP of compute and machine learning services at AWS.
Previously, air cooling systems were sufficient for older Nvidia GPUs. But with a new generation like the Nvidia GB200 NVL72, which integrates up to 72 GPUs in a single rack, Amazon has been forced to upgrade its infrastructure.
AWS has launched its new P6e server line, based on Nvidia's Blackwell design, providing massive computing power to train and run large AI models. Before Amazon, GB200 NVL72-based systems were already available through Microsoft and CoreWeave.
As the world's largest cloud provider, Amazon has long developed its own hardware, including AI chips, servers and network routers, to reduce dependence on third parties and increase profits. AWS is now a major contributor to Amazon's net profit, posting its highest profit margin since 2014 in the first quarter of 2025.
According to CNBC, Microsoft (AWS's biggest rival) is also developing its own AI hardware. In 2024, Microsoft introduced a cooling system called Sidekicks for the Maia chip designed by the company.