Development of Liquid Cooled Standards
Liquid cooling is valuable in reducing the energy consumption of data center cooling systems because the volumetric heat capacity of liquids is orders of magnitude larger than that of air. Once heat has been transferred to a liquid, it can be removed from the data center efficiently.
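As a rough illustration of that volumetric advantage, the sketch below compares the coolant flow needed to carry a fixed heat load with air versus water. The 10 kW load and 10 K temperature rise are assumed example figures, and the fluid properties are textbook values, not data from any specific facility:

```python
# Volumetric flow (m^3/s) required to carry a heat load at a given
# coolant temperature rise: Q = rho * V * cp * dT  =>  V = Q / (rho * cp * dT)

def flow_m3_per_s(load_w, rho_kg_m3, cp_j_kgk, delta_t_k):
    return load_w / (rho_kg_m3 * cp_j_kgk * delta_t_k)

LOAD_W = 10_000.0   # example: one 10 kW rack
DT_K = 10.0         # example: 10 K coolant temperature rise

air = flow_m3_per_s(LOAD_W, rho_kg_m3=1.2, cp_j_kgk=1005.0, delta_t_k=DT_K)
water = flow_m3_per_s(LOAD_W, rho_kg_m3=998.0, cp_j_kgk=4186.0, delta_t_k=DT_K)

print(f"air:   {air:.3f} m^3/s")     # ~0.83 m^3/s
print(f"water: {water:.6f} m^3/s")   # ~0.00024 m^3/s
print(f"ratio: {air / water:.0f}x less volume with water")
```

The roughly three-orders-of-magnitude difference in required volume flow is what makes liquid transport of heat out of the room so much cheaper than moving air.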
LBNL is one of several industry experts currently participating in an initiative to develop a liquid cooled rack specification. The goal of the project is to develop a specification that can accommodate multiple vendors and provide a reusable infrastructure across multiple refresh cycles with a variety of liquid cooled servers and suppliers. The specification could include, but is not limited to, fluid selection and quality; supply pressure, temperature, and flow; delta pressure and temperature; header size and material; and connection spacing, size, and details.
Liquid Cooled Technologies
Liquid cooling in data centers can be implemented with a broad range of technologies, ranging from transferring heat to a liquid far from the source (e.g., in computer room air handlers (CRAHs)) to immersion cooling, where the heat transfer takes place on the surface of the hot electronic components. In general, when the heat is transferred close to the source, the cooling liquid supply can be warmer and still provide the needed cooling. The resulting efficiency gain is driven by improved chiller performance and a greatly improved opportunity for free cooling.
Most liquid-cooled solutions are hybrid technologies in which only part of the heat load is removed by the liquid; the remainder is removed by traditional air cooling. Liquid cooling solutions that transfer heat near the source therefore generally incur additional cost compared to air-cooled IT equipment in a standard rack. However, these additional costs may be substantially offset by the improved energy efficiency and potential capital savings of a final solution that includes liquid cooling near the heat source.
When to consider liquid cooling depends greatly on local conditions, but here are some generalized guidelines:
- Want to reduce data center energy (reduced IT power + infrastructure)
- Need to cool high-density electronic equipment
- Existing air-cooled data center will not support new loads
- Extra capacity is available from the Chilled Water (CHW) plant and/or cooling tower
- Space is available for a new CHW plant and/or heat rejection equipment
- Willing to buy purpose-made servers
Finally, liquid cooling is not for everyone, at least not yet.
The adjacent figure lists a number of liquid cooling technologies and a sample of companies that have provided or are providing solutions. LBNL researchers have reported energy savings estimates for a number of the technologies listed. A brief description, as well as related LBNL research and resources, is provided for each technology.
CRAH: A high volume of hot (or merely warm) air has to travel a considerable distance through the computer room before the heat is transferred to the building cooling water system in the CRAH unit. In the report Demonstration of Intelligent Control and Fan Improvements in Computer Room Air Handlers, improved CRAH fans and controls were retrofitted into a colocation data center owned and operated by Digital Realty Trust. The estimated overall yearly data center energy savings were a significant 8% compared to before the retrofit.
OVERHEAD: Emerson (Liebert) offers overhead cooling units (hanging above the equipment racks or above the cold aisles) that pull hot air from the hot aisles, cool it using a pumped refrigerant heat exchanger, and push the cold air down into the cold aisle. The primary advantage of the refrigerant is that it is non-conductive and will evaporate if there is a leak. It also has a higher heat removal capacity per unit mass than water because it uses phase change. Another major advantage of this system is that no floor space is required.
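The per-mass advantage of phase change can be shown with rough numbers. The latent heat below is an order-of-magnitude figure for an R134a-class refrigerant, and the 10 K water rise is an assumed example; neither is a manufacturer specification:

```python
# Heat absorbed per kilogram of coolant: water by sensible heating over a
# modest temperature rise vs. a refrigerant by latent heat of vaporization.
# Property values are rough textbook figures, for illustration only.

CP_WATER_KJ_KGK = 4.186   # specific heat of water, kJ/(kg K)
DT_K = 10.0               # assumed 10 K water temperature rise
H_FG_KJ_KG = 200.0        # order-of-magnitude latent heat (R134a-class)

water_kj_kg = CP_WATER_KJ_KGK * DT_K   # ~42 kJ per kg of water
refrigerant_kj_kg = H_FG_KJ_KG         # per kg of refrigerant evaporated

print(f"water (10 K rise):         {water_kj_kg:.0f} kJ/kg")
print(f"refrigerant (evaporation): {refrigerant_kj_kg:.0f} kJ/kg")
```

On these assumptions each kilogram of evaporating refrigerant carries several times the heat of a kilogram of water undergoing a modest temperature rise.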
IN-ROW: InRow is a Schneider Electric (APC) trademark. Hot air is pulled from the hot equipment aisle, cooled with chilled water, and returned to the cold equipment aisle. Another manufacturer provides an in-row type of unit that uses pumped refrigerant. Either hot aisle or cold aisle containment will work with this technology. These in-row units are often controlled with variable speed fans and integrated controls.
The cooling units are placed directly in the equipment rack rows. Unlike traditional CRAHs, which draw in hot air from above and blow it down under the floor, these cooling units are “turned sideways,” drawing air directly from the hot aisle, cooling it, and blowing it directly into the cold aisle at a relatively neutral temperature.
In the report Demonstrating a Dual Heat Exchanger Rack Cooler “Tower” Water for IT Cooling a prototype InRow cooling device with two heat exchangers from Schneider Electric (APC) was demonstrated to investigate potential energy efficiency advantages compared to a single heat exchanger design. The results show a significant energy efficiency improvement when the cooling tower and chilled water supplies are available near the rack. The dual heat exchanger approach acts as a localized water-side economizer process that matches local conditions inside the data center.
ENCLOSED CABINET: In this technology, the racks and servers are completely enclosed along with additional fans and an air-to-water heat exchanger. Big advantages are that nearly no heat escapes to the room (room neutral) and that the cooling water is controlled so that racks with less power use less cooling water. Because very high heat loads can be cooled, this approach requires extremely reliable cooling.
REAR-DOOR HEAT EXCHANGER (RDHx): The RDHx device, which resembles an automobile radiator, is placed in the airflow outlet of the server rack. During operation, hot server-rack exhaust air is forced through the device by the server fans. Heat is exchanged from the hot air to circulating cold water from a chiller or cooling tower. Thus, server-rack outlet air temperature is reduced before it is discharged into the data center, potentially making it a room-neutral solution. In the LBNL report Data Center Rack Cooling with Rear-door Heat Exchanger, a demonstration project is described that provides lessons learned that may be relevant to other RDHx projects.
CONDUCTION: Thermal blocks (heat risers) are attached to all heat-generating components. The purpose of these blocks is to conduct the heat to a top plate of the server. This top plate is a micro channel heat exchanger that can be cooled by either water or refrigerant. In the report Demonstration of Alternative Cooling for Rack-Mounted Computer Equipment, eleven different models of IT rack-level liquid cooling devices were compared in a demonstration named “Chill-Off 2”. The partial and overall data center efficiencies were evaluated using two different chilled water plant models and a number of different environmental conditions. The results show significant differences in energy efficiency. The Conduction technologies had by far the best results followed by the Rear Door Heat Exchangers.
CPU COLDPLATE: “Cold plates” with internal circulating cooling water through micro channels are attached to the CPUs, replacing the standard heat sinks (fins). In other words, the cold plate is the component of the liquid cooling system that interfaces with the heat source. Cold plates vary widely in complexity and construction depending on the application needs.
In the LBNL report Direct Liquid Cooling For Electronic Equipment, Cisco servers were modified with the Asetek cold plate technology and their energy efficiency was compared to that of unmodified servers. The percentage of the IT power captured by the liquid was determined for a variety of IT loads and environmental conditions. The heat capture percentage varied considerably with environmental conditions and IT load; a typical value was around 50-60%.
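A liquid heat-capture fraction of this kind can be estimated from water-side measurements alone, using an energy balance on the cooling loop. The sketch below shows the calculation; the 5 kW load, flow rate, and temperatures are illustrative assumptions, not data from the report:

```python
# Fraction of IT power carried away by the cooling water, from an energy
# balance on the water loop: Q_water = m_dot * cp * (T_out - T_in).

CP_WATER = 4186.0  # specific heat of water, J/(kg K)

def capture_fraction(it_power_w, flow_kg_s, t_in_c, t_out_c):
    q_water_w = flow_kg_s * CP_WATER * (t_out_c - t_in_c)
    return q_water_w / it_power_w

# Illustrative example: 5 kW of IT load, 0.05 kg/s water flow, 13 K rise
frac = capture_fraction(5000.0, 0.05, 30.0, 43.0)
print(f"liquid heat capture: {frac:.0%}")  # ~54%
```

The remainder of the IT load (here roughly 46%) is what the room's air-cooling system must still remove in a hybrid solution.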
IMMERSION: In immersion cooling, the electronics are submerged in a dielectric (non-conducting) fluid. This technology can efficiently cool high-density electronics in data centers without the need for compressor-based cooling. Since this system operates well using high temperature coolant, dry coolers can be used for heat rejection to the atmosphere, thereby eliminating evaporative water use almost anywhere in the world. Liquid immersion cooling, especially with phase change “two-phase immersion cooling”, is a paradigm shift in the way electronics are cooled.
Two-phase immersion cooling using 3M Novec 649 Engineered Fluid was demonstrated at the Naval Research Laboratory in Washington, D.C. The heat from high-power electronic components such as CPUs causes the engineered liquid to boil on the component surfaces, resulting in exceptional heat removal potential. The technology had excellent energy efficiency performance, but the engineered fluid had significant drawbacks. The results can be found in the LBNL report Immersion Cooling of Electronics in DoD Installations.
NREL's Thermosyphon Cooler Hybrid System Project
In August 2016, the National Renewable Energy Laboratory (NREL) installed a thermosyphon hybrid cooling system to reduce water usage in its already extremely energy-efficient High-Performance Computing (HPC) Data Center. In its first year of use, the system saved 4,400 m3 (1.16 million gal) of water, and 7,950 m3 (2.10 million gal) over a 2-year period, cutting the data center's water use by about one-half.
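The gallon figures follow directly from the metric volumes (1 m^3 is about 264.17 US gallons); a quick conversion check:

```python
# Convert cubic meters of water to millions of US gallons.
M3_TO_US_GAL = 264.172  # US gallons per cubic meter

def million_gal(m3):
    return m3 * M3_TO_US_GAL / 1e6

print(round(million_gal(4400), 2))   # first year: 1.16 million gal
print(round(million_gal(7950), 2))   # two years:  2.1 million gal
```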
Fact sheets, presentations, and slide decks covering the project's installation, outcomes, and lessons learned are available for review.