by: Janet Morss
Source: www.delltechnologies.com
Market for effective, cost-efficient liquid cooling predicted to more than double by 2024
As artificial intelligence (AI) and high performance computing (HPC) workloads go mainstream, they're driving adoption of denser computing stacks that pack more power into less space. Denser infrastructure consumes more power and gives off more heat, threatening to outstrip the capabilities of traditional data center cooling methods.
This trend toward hotter equipment is driving exploration of different cooling methods that are more efficient than today’s typical air-cooling techniques. That may be why Research and Markets is predicting that the data center liquid cooling market will grow from USD 1.2 billion in 2019 to USD 3.2 billion by 2024, at a CAGR of 22.6%.¹
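As a back-of-the-envelope check, those figures hang together: USD 1.2 billion compounded at 22.6% annually for five years comes out to roughly USD 3.3 billion, in line with the forecast. A minimal sketch of that arithmetic (illustrative only, using just the numbers quoted above):

```python
# Sanity check of the cited forecast: USD 1.2B (2019) grown at a 22.6% CAGR through 2024.
def compound(start: float, cagr: float, years: int) -> float:
    """Grow a starting value at a constant annual rate for the given number of years."""
    return start * (1 + cagr) ** years

projected_2024 = compound(1.2, 0.226, 2024 - 2019)
print(f"Implied 2024 market size: USD {projected_2024:.2f} billion")  # ~3.32, close to the 3.2 cited
```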
Many of Dell Technologies' supercomputing customers and partners are on the leading edge of this trend. Here's an overview of how some of them are using liquid cooling and benefiting from it.
Ohio Supercomputer Center (OSC) saves 5% on power
Back in 1987, OSC was cooling its supercomputers with a liquid submersion technique, but it eventually moved to more mainstream air cooling as data center architectures changed and racks became less dense. With rack density rising again, the center is going back to the future: its newest Dell EMC supercomputing cluster, Pitzer, uses liquid cooling to handle the denser equipment. According to the center, running the server fans at a lower speed saves it about five percent of its power budget.
Durham University doubles the COSMA supercomputer’s performance
Durham University doubled the performance of its COSMA supercomputing cluster within the same footprint by moving to processors with 64 cores instead of 28. The denser compute environment is made possible by AMD processors, Dell EMC PowerEdge C-series chassis in a 2U form factor, and custom liquid cooling from CoolIT.
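The arithmetic behind that upgrade helps explain the result: at the same node count, 64-core parts provide roughly 2.3 times as many cores as 28-core parts, so doubling delivered performance in the same footprint is plausible. A minimal sketch (the node count and dual-socket layout below are assumptions for illustration, not published COSMA figures):

```python
# Illustrative core-count comparison for a fixed data center footprint.
# The node count and dual-socket layout are hypothetical, not COSMA specifications.
nodes = 100                   # assumed number of nodes that fit in the footprint
cores_28 = nodes * 2 * 28     # dual-socket nodes with 28-core processors
cores_64 = nodes * 2 * 64     # dual-socket nodes with 64-core processors
print(f"Core-count ratio: {cores_64 / cores_28:.2f}x")  # ~2.29x more cores in the same space
```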
Texas Advanced Computing Center (TACC) enables demanding HPC and AI workloads
TACC uses liquid cooling to enable its demanding HPC and AI workloads. The center's Frontera supercomputer is fitted with CoolIT Systems' high-density Direct Contact Liquid Cooling. In fact, Dell Technologies has collaborated with CoolIT Systems to offer Dell EMC PowerEdge C6420 Servers with Direct Liquid Cooling specifically for high performance and hyperscale workloads.
Dell Technologies HPC & AI Innovation Lab increases densities and lowers costs
Our own innovation lab uses the CoolIT racks as well. CoolIT and Dell EMC have worked together on innovations such as Dell EMC PowerEdge C6420 Servers with Direct Liquid Cooling for HPC and AI, allowing customers to deploy 23% more equipment within existing space constraints. The two companies have also developed a configuration that uses rack-scale Coolant Distribution Units to service multiple racks. These offerings can help reduce energy costs for cooling by up to 56%.
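To make those percentages concrete, here's a hedged illustration of the potential savings (the rack count, per-rack power, cooling overhead, and electricity price below are assumptions chosen only to make the arithmetic concrete, not Dell or CoolIT figures):

```python
# Hypothetical illustration of the 23% density and up-to-56% cooling-energy figures above.
# Every baseline number here is an assumption, used only to make the math concrete.
racks = 20               # assumed racks in the existing space
it_power_kw = 30.0       # assumed IT load per rack, in kW
cooling_fraction = 0.40  # assumed air-cooling energy as a fraction of IT load
price_per_kwh = 0.10     # assumed electricity price, USD/kWh
hours_per_year = 8760

air_cooling_cost = racks * it_power_kw * cooling_fraction * price_per_kwh * hours_per_year
liquid_cooling_cost = air_cooling_cost * (1 - 0.56)  # up to 56% lower cooling energy cost

print(f"Racks in the same footprint: {racks} -> {round(racks * 1.23)}")  # 23% more equipment
print(f"Annual cooling energy cost: USD {air_cooling_cost:,.0f} -> USD {liquid_cooling_cost:,.0f}")
```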
Co-locate in Iceland with Verne Global
For customers who want to consume HPC and AI as a service with a lower carbon footprint, there's Verne Global. This Dell Technologies partner powers its data center entirely with renewable hydropower and geothermal energy and uses naturally cool outside air for free air cooling. It also offers the option of direct liquid cooling for higher-density configurations.
Lowering power consumption for HPC and AI is no small consideration. Cooling is becoming a major limiting factor on data center capacity, and a poorly planned cooling system can require more power than the systems it cools. Dell Technologies has the expertise and partnerships to help you plan and execute an optimized cooling strategy that delivers lower costs and better density for HPC and AI workloads.