Questions?

Contact Us!

Reach us with questions or suggestions, or subscribe to our quarterly Data Center Energy Practitioner (DCEP) newsletter, at coe@lbl.gov.

Ask the Experts!

Our answers to incoming questions from the CoE community!

Q: I have been reviewing the PUE calculator on your website. I’m currently working with a client and we are in the very early stages of a project.  The client is looking for a PUE, and at this stage the best we can do is provide a PUE estimate using a tool like the one on your website.  I have a question regarding the inputs.  We are looking to go with 70 degree entering chilled water into the cooling coils.  The chilled water entering temperature in the PUE estimator appears to only go up to 55 deg F.  Is there any way to increase this input to 70 deg F?  Please advise.

A: Good to hear you are considering warm water cooling for your data center design.  The PUE estimator was designed to estimate the PUE of very conventional data centers that don't have adequate metering to measure PUE.  It is based on tens of thousands of EnergyPlus simulations, one for each combination of a limited set of variables.  Any additional option or variable, e.g. a warmer water temperature, would require significantly more runs and a larger look-up table.  Therefore it is limited to the most common configurations found in practice, rather than best practices.  Unfortunately, we are not aware of good energy simulation tools dedicated to data centers; many engineers use custom spreadsheets and bin data for energy analysis.  We are currently conducting a scoping study to assess the need for a warm water liquid cooling tool, so stay tuned!

Q: Why can't DC Pro model energy efficiency options that I have in place such as variable speed ECM CRAC fans, transformerless UPSs, and magnetically levitated chillers?

A: It is important to recognize the limitations of DC Pro.  It is based on a look-up table built from tens of thousands of simulation runs.  There are a limited number of variables that impact the PUE calculation (see the PUE estimator), and every additional variable adds the need for thousands of additional simulations.  Therefore it was designed to accommodate the most typical conditions.  It is primarily meant for small to mid-sized data centers that do not have the metering in place to estimate their PUE more accurately, and was designed as a pre-assessment tool to get a sense of overall efficiency and where the opportunities are.  Therefore it does not accommodate advanced systems such as liquid cooling, DC powering, etc., nor does it allow for granular evaluation of system options (e.g. chiller plant component efficiencies).  We do have a number of more robust assessment tools available on our website at https://datacenters.lbl.gov/tools  (e.g. the Air Management Tool and the Electric Power Chain Tool).

Q: Can you point us to any documentation on what sorts of requirements we should be asking for in terms of selecting the exact transformer to use? This application will be single-sourced HPC systems, so we'll expect fairly high loading on the transformer most of the time. We need to understand the characteristics/attributes that we want and find them in a USA supplier or justify why we need them and that no USA supplier offers them. Thanks for any pointers you may have.

A: The DOE 2016 requirement for low-voltage distribution transformer efficiency is specified at 35% load, which is probably a good assumption for a commercial building, but a bad one for a data center striving to make the best use of limited power distribution resources. That standard requires no less than 98.83% efficiency (again, at 35% load). Overall transformer losses are the sum of a fixed no-load loss and a load loss that scales with the square of the load. The DOE standard numbers are therefore dominated by the no-load loss, whereas at the high loads you are striving for the load loss dominates, and there are trade-offs in transformer design between the two.
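For a rough comparison of candidate units, the two-part loss model above can be put into a few lines of code. This is only a sketch; the kVA rating and loss figures below are illustrative placeholders, not vendor data, so substitute the values from each manufacturer's loss data sheet.

```python
# Sketch of the two-part transformer loss model described above.
# The rating and loss values are illustrative placeholders, not vendor data.

def transformer_efficiency(load_fraction, kva_rating, no_load_loss_w, load_loss_rated_w):
    """Efficiency = output / (output + no-load loss + load loss * load_fraction^2)."""
    output_w = load_fraction * kva_rating * 1000.0  # unity power factor assumed for simplicity
    losses_w = no_load_loss_w + load_loss_rated_w * load_fraction ** 2
    return output_w / (output_w + losses_w)

# Hypothetical 150 kVA unit with 300 W no-load loss and 2,500 W load loss at rated load
for lf in (0.35, 0.50, 0.75, 1.00):
    print(f"{lf:.0%} load: {transformer_efficiency(lf, 150, 300, 2500):.2%} efficient")
```

Comparing candidates this way at your expected operating point, rather than only at the 35% point in the standard, makes the trade-off between no-load and load loss visible.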

I'd suggest getting the efficiency curve from Powersmiths for their ESAVER-50H (optimized for high loading) and using that to specify the efficiency requirement at, e.g., 75% load, which is a standard rating point. Other parameters, especially impedance, need to be managed to optimize inrush current, fault level, and arc-flash, to make sure the transformer will integrate with the existing installation. We learned this the hard way at FLEXLAB. Hope this helps!

Q: We are looking into liquid to chip cooling for data centers. I am looking into estimating the energy and cost savings associated with switching from an air-cooled data center to a liquid-cooled data center. To avoid starting the estimate from scratch, I was wondering whether you have any spreadsheets or basic calculations that I can use as a starting point. If not, do you have any resources you can share that will help with this estimate? Right now, I am just looking for simple calculations that can be refined as we further define the approach to liquid cooling. 

A: Alas, there's no such calculation that we can offer. We hope to put together a calculator in the next few years. The savings come from two places:

1. Increasing the cooling water temperature, which makes the chiller plant more efficient (including both chiller efficiency and an increase in water-side economizer utilization). Typically the energy savings of the chiller itself range from 1% to 3% per degree F that the chilled water supply temperature is raised. As we discussed before, if there's a shared chiller plant, the ability to do this might be limited, though seasonal resets in chilled water temperature should be considered. If the water-cooled IT load is large enough, a dedicated, high(er) temperature plant might make sense. If not, there are still some pump savings available from dropping the flow and getting a higher temperature rise. Another option is a water-side economizer for the water-cooled IT equipment, with a booster heat exchanger from the chilled water system to be used as needed.
2. Reducing the amount of cooling provided by air, which trades a relatively large amount of fan energy for a relatively small amount of pump energy (see the rough sketch below). Energy is saved both in the CRAH fans and in the IT equipment fans. Remember that most liquid cooling options don't completely replace air cooling.
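To put rough numbers on the second point, here is a minimal sketch of the fan-versus-pump trade. The baseline fan power, loop temperature rise, and pump kW-per-gpm figure are assumptions for illustration only, not measured values.

```python
# Rough sketch of the fan-vs-pump trade-off in item 2 above.
# All inputs are illustrative assumptions, not measured values.

BASELINE_CRAH_FAN_KW = 60.0     # assumed total CRAH fan power when fully air cooled

def crah_fan_kw(airflow_fraction):
    """Fan affinity law: fan power scales roughly with the cube of airflow."""
    return BASELINE_CRAH_FAN_KW * airflow_fraction ** 3

def added_pump_kw(liquid_load_kw, delta_t_f=15.0, kw_per_gpm=0.02):
    """Very rough pump power for the new liquid loop (assumed kW per gpm of flow)."""
    gpm = liquid_load_kw * 3412.0 / (500.0 * delta_t_f)  # water flow needed to carry the load
    return gpm * kw_per_gpm

# Example: 60% of the heat moves to liquid, leaving roughly 40% of the original airflow
fan_savings_kw = BASELINE_CRAH_FAN_KW - crah_fan_kw(0.4)
pump_penalty_kw = added_pump_kw(liquid_load_kw=600.0)
print(f"CRAH fan savings ~{fan_savings_kw:.0f} kW vs. added pump power ~{pump_penalty_kw:.0f} kW")
```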

My approach would be a spreadsheet using bins for outside-air wet-bulb temperature, or a full 8760-hour (full year of hourly data) sheet using TMY data. You'll need to gather data on the existing and proposed equipment and controls to populate the sheet. In addition to the ~2% per degree F rule of thumb for chiller energy savings noted above, other useful approximations include an aggressive but achievable 5 degree F approach between the cooling tower water supply temperature and the outside-air wet-bulb temperature, and a 2 degree F approach across a plate-and-frame heat exchanger. Hope this helps!
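The skeleton of such a bin calculation might look like the following. The bin hours, cooling load, chiller efficiency, and setpoints are illustrative assumptions; substitute site TMY data, measured loads, and the actual plant design before drawing any conclusions.

```python
# Skeleton of the wet-bulb bin approach described above, using the rules of
# thumb from this answer. All numeric inputs are illustrative assumptions.

CHILLER_KW_PER_TON = 0.55      # assumed baseline chiller efficiency
SAVINGS_PER_DEG_F = 0.02       # ~2% chiller energy per degree F of CHWS reset
TOWER_APPROACH_F = 5.0         # cooling tower supply approach to outside-air wet bulb
HX_APPROACH_F = 2.0            # plate-and-frame heat exchanger approach

def chiller_kw(load_tons, chws_reset_f):
    """Chiller power with a chilled-water supply temperature reset applied."""
    factor = max(0.0, 1.0 - SAVINGS_PER_DEG_F * chws_reset_f)
    return load_tons * CHILLER_KW_PER_TON * factor

def plant_kw(load_tons, wet_bulb_f, supply_setpoint_f, chws_reset_f):
    """If the water-side economizer can meet the setpoint, assume compressor-free operation."""
    economizer_supply_f = wet_bulb_f + TOWER_APPROACH_F + HX_APPROACH_F
    if economizer_supply_f <= supply_setpoint_f:
        return 0.0  # economizer hour (tower fan and pump energy ignored in this sketch)
    return chiller_kw(load_tons, chws_reset_f)

# Illustrative wet-bulb bins: (bin midpoint deg F, hours per year), totaling 8760 hours
bins = [(35, 1200), (45, 1800), (55, 2400), (65, 2200), (75, 1160)]
load_tons = 200.0  # assumed constant cooling load

baseline_kwh = sum(plant_kw(load_tons, wb, 45.0, 0.0) * hrs for wb, hrs in bins)
warm_water_kwh = sum(plant_kw(load_tons, wb, 70.0, 25.0) * hrs for wb, hrs in bins)
print(f"Baseline (45 F CHWS):   {baseline_kwh:,.0f} kWh/yr")
print(f"Warm water (70 F CHWS): {warm_water_kwh:,.0f} kWh/yr")
```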

Q: A quick question about CRAH VFD fan control. At some WHS data centers, there are some CRAH units with VFDs controlling to an underfloor static pressure setpoint of 0.1" wc. The static pressure in the room varies from 0.02" to 0.04" wc, so the VFDs are always running at 100% trying to reach the setpoint. I am planning to propose controlling the VFDs to cold aisle temperature setpoints instead of the UF static pressure setpoints. The goal would be to reduce CRAH VFD fan speed. In your experience, do you think this is the right approach?

A: Your approach to CRAH fan control should work well in your application, given that with your cold-aisle isolation and blanking panels you should have good air management. So the chilled water valves in the CRAHs would be controlled on supply air temperature (correct?), and the fans would be controlled on cold-aisle temperature; we recommend using a sample of the top-of-rack temperatures for control, and monitoring the rest of the inlet temperature sensors to alarm any hot spots. This scheme is a relatively direct way to ensure that the IT equipment is getting the recommended inlet air temperature. Even better than rack inlet temperatures is to get the inlet temperatures directly from the IT equipment itself, but that access is often made difficult by security protocols.
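As a concrete illustration of that scheme, here is a minimal control-loop sketch: fan speed is trimmed on a sample of top-of-rack inlet temperatures, while the remaining sensors are only monitored for hot-spot alarms. The setpoints, gain, and sensor names are assumptions for illustration, not recommended values; a real building management system would add integral action, ramp limits, and CRAH staging.

```python
# Minimal sketch of fan-speed control on sampled top-of-rack inlet temperatures.
# Setpoints, gain, and sensor names are illustrative assumptions.

COLD_AISLE_SETPOINT_F = 75.0   # assumed target for top-of-rack inlet temperature
ALARM_LIMIT_F = 80.5           # assumed hot-spot alarm threshold
KP = 0.03                      # proportional gain, fan-speed fraction per degree F of error
MIN_SPEED, MAX_SPEED = 0.30, 1.00

def update_fan_speed(current_speed, control_sensor_temps_f):
    """Raise fan speed when the controlling racks run warm, lower it when they run cool."""
    error_f = max(control_sensor_temps_f) - COLD_AISLE_SETPOINT_F
    return min(MAX_SPEED, max(MIN_SPEED, current_speed + KP * error_f))

def hot_spot_alarms(all_inlet_temps_f):
    """Alarm any rack inlet above the limit without letting it drive the control loop."""
    return [rack for rack, temp in all_inlet_temps_f.items() if temp > ALARM_LIMIT_F]

# Example polling step with hypothetical sensor readings
speed = update_fan_speed(0.60, [74.2, 76.8, 75.5])
alarms = hot_spot_alarms({"rack_A1": 77.0, "rack_B4": 81.2})
print(f"Commanded fan speed: {speed:.0%}; hot-spot alarms: {alarms}")
```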

 
Do check for inadvertent recirculation paths for hot air, including above and below the IT equipment in each rack, around the sides of the IT equipment, and through networking equipment that often has airflow from back to front (i.e. the inlet is on the cable-connection side of the equipment, such that if it is installed with the cables on the back it will act as an air recirculator). Sometimes other equipment is installed backwards from an airflow point of view. Below and between the racks should also be sealed off to prevent recirculation.
 
Other fan control schemes that have been used successfully, although they are less direct than the above, include differential pressure between the underfloor plenum and the cold aisle, and differential pressure between the cold aisle and the hot aisle (since a slight positive pressure in the cold aisle helps ensure minimal recirculation of hot air).  Your setpoint for underfloor air pressure is at the high end of where most perforated tiles are rated, so a lower setpoint might work better, depending on whether your center is actually designed for the higher pressure and whether the design flows are needed for your loads.  But go with your plan to use temperatures per the above.  And please let us know how it goes!
 

Q: How do systems that combine outside-air economizers with evaporative cooling work?

A: Indirect economizer/evaporative cooling schemes are increasingly available. These are air-handling units outside the data center that use air-to-air heat exchangers and indirect evaporative cooling, usually with a compressor-based backup. Ducts connect these AHUs to the hot and cold air plenums in the data center. They provide most of the energy savings of an air-side economizer without the (in most cases overblown) concern about bringing outside air into the data center, though they are likely to carry a higher capital cost than a conventional air-side economizer.
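As a rough illustration of when such a unit would lean on its compressor backup, here is a minimal sketch based on an assumed wet-bulb effectiveness. The effectiveness, return-air temperature, and supply setpoint are illustrative assumptions, not data for any particular product.

```python
# Sketch of indirect evaporative supply temperature vs. outdoor wet bulb.
# Effectiveness and temperatures are illustrative assumptions.

WET_BULB_EFFECTIVENESS = 0.75   # assumed indirect evaporative (wet-bulb) effectiveness
SUPPLY_SETPOINT_F = 75.0        # assumed target supply temperature to the cold aisle

def supply_temp_f(return_air_f, outdoor_wet_bulb_f):
    """Indirect evaporative cooling approaches the outdoor wet bulb without
    adding moisture to the data center air stream."""
    return return_air_f - WET_BULB_EFFECTIVENESS * (return_air_f - outdoor_wet_bulb_f)

for wb_f in (55.0, 65.0, 75.0):
    t_f = supply_temp_f(return_air_f=95.0, outdoor_wet_bulb_f=wb_f)
    mode = "compressor trim needed" if t_f > SUPPLY_SETPOINT_F else "evaporative only"
    print(f"Wet bulb {wb_f:.0f} F -> supply {t_f:.1f} F ({mode})")
```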