Way back in the day, data centers used to be cold. Cold air came in at the front and, usually in hot-aisle/cold-aisle configurations, left hot through the back.
Power per rack was measured in a few thousand watts.
Cooling per rack could be maybe one ton of AC (roughly 3.5 kW of heat removal), up to two in the worst case.
Then stuff got denser. Somewhere along the line, someone decided they could run their gear at higher temperatures. This works fine for machines that are actually mostly open space (blades, sparsely populated server systems, …). It doesn’t work so well for densely populated server systems.
Inlet temps above 72°F can be a problem for dense electronics. Poor airflow in a data center (e.g. no real positive pressure at the inlet, no real negative (relative) pressure at the outlet) is a serious problem.
Yet we’ve seen more than our share of such data centers in the last six months, to the point that I am starting to question some of the designs I see. We might have to start actively asking customers whether their data center meets the conditions for the optimal use case, and list those conditions explicitly. If not, we’ll have to ask some defensive questions: do you have inlet temperatures below 72°F? Do you have positive pressure at the front and negative pressure at the back?
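Those defensive questions could even be encoded as a simple pre-deployment checklist. A minimal sketch, assuming the thresholds discussed here (72°F inlet, positive front pressure, negative relative back pressure); the function name and pressure units are hypothetical, not any official spec:

```python
# Hypothetical site checklist using the thresholds from the post,
# not an official standard. Pressures are relative to room ambient.

MAX_INLET_F = 72.0

def check_environment(inlet_temp_f, front_pressure_pa, back_pressure_pa):
    """Return a list of problems found; an empty list means the site looks OK."""
    problems = []
    if inlet_temp_f > MAX_INLET_F:
        problems.append(f"inlet temp {inlet_temp_f} F exceeds {MAX_INLET_F} F")
    if front_pressure_pa <= 0.0:
        problems.append("no positive pressure at the front (cold aisle)")
    if back_pressure_pa >= 0.0:
        problems.append("no negative (relative) pressure at the back (hot aisle)")
    return problems

# A site with hot inlets and no rear exhaust pull fails two checks:
print(check_environment(75.0, 5.0, 2.0))
```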
We’ve seen crashes we have been able to attribute to over-temperature offload processors in units. We just have to convince customers that this is a bad thing, and that they want to make sure their data center includes the ability to keep sensitive electronics cool.
Because boxes are only going to get denser, cooling and airflow are only going to get harder. We can change the working fluid, but most data centers aren’t set up yet for liquid handling. A shame, as it is far more power-efficient to use a working fluid with higher heat capacity and move less of it per unit time.
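To make that concrete, here’s a back-of-the-envelope sketch comparing the volumetric flow of air versus water needed to carry away the same heat load. The rack power and temperature rise are made-up round numbers, and the fluid properties are approximate textbook values:

```python
# Q = rho * V_dot * c_p * dT  =>  V_dot = Q / (rho * c_p * dT)
# Fluid properties are approximate values near room temperature.

def flow_needed(q_watts, rho, c_p, delta_t):
    """Volumetric flow (m^3/s) needed to carry q_watts with a given coolant."""
    return q_watts / (rho * c_p * delta_t)

Q = 20_000.0   # hypothetical 20 kW rack
DT = 10.0      # 10 K coolant temperature rise

air = flow_needed(Q, rho=1.2, c_p=1005.0, delta_t=DT)      # air
water = flow_needed(Q, rho=998.0, c_p=4186.0, delta_t=DT)  # liquid water

print(f"air:   {air:.3f} m^3/s")
print(f"water: {water:.6f} m^3/s")
print(f"ratio: {air / water:.0f}x")
```

With these numbers, water carries the same heat in roughly three and a half thousand times less volume per second, which is the whole argument for a denser working fluid.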
We could always dunk everything in oil, or in Fluorinert, or something like that. But liquid handling, good liquid handling, is not cheap, and if you get it wrong, it fries lots of stuff.