It may fall upon you, as it has fallen upon me several times (especially if, like me, you work as an occasional network or system administrator), to figure out how much cooling is needed to protect a new or existing server room, closet, or whatever. Typically, you'll be asked by your office manager for the new space, by your boss when she fights for facilities budgets, or by your colocation provider when they're drawing you up a quote.
The first time this was asked of me (unexpectedly, in a meeting about the move that I was trying not to sleep through), the exchange went something like this:
HVAC Contractor: "Uh, sir, how much cooling do you want in this server room?"
Me: "Uuhhhhhh..."(stalling for time...ears to brain, respond please...)"...in what units?" Mistake. Now he has excuse to ask me and assume I know what he's talking about.
HC: "Oh, you can give it to me in tons if that's OK."
Me: "No problem. Have it for you tomorrow morning. Gotta check with vendors." The vendors bit is always a good stalling tactic.
Needless to say, some research was done that afternoon.
So, as Arlo Guthrie would say, one day, you may find yourself in a similar situation. Here, then, is what I found out.
When you size a residential air conditioner, they'll typically ask how many square feet of area it will serve. That's because a residential A/C isn't so much dealing with heat produced inside as with heat that has crept into the internal zone through convection, radiation, and so on; sun hitting the building structure is the usual culprit. In that case there's a pretty good rule of thumb: we know roughly how much solar energy hits the earth per square foot, so we can make a decent guess at what's needed.
No, I don't know that rule-of-thumb number offhand. Something like 2 horsepower per square meter, as it happens (the solar constant works out to roughly 1.4 kW per square meter above the atmosphere), but that's not too useful here.
In any case, for a server room, you have a different problem! Typically the area to be cooled is quite small and usually isn't exposed to outside heat/sunshine (i.e. through windows or exterior walls thin enough to transmit it). However, there is a stack of merrily heat-producing devices in there. You need to know how much cooling capacity (in tons, please, Mr. Network admin) is required to offset that constant generation of heat.
Here are the basic steps involved:
- Figure out/find out how much power each machine uses.
- Figure out/find out what efficiency the machine runs at.
- Multiply out to find the amount of waste power. This is heat.
- Find out how many machines are in the room/zone. If they have different characteristics, work out the first two steps for each type.
- Multiply out to find the total wasted electrical power (heat) generated.
- Add these, and jigger them into your favorite units (tons if talking to HVAC people, BTUs if talking to Fedders salesmen at the local appliance shop).
- Figure in static load (square footage of floor space, difference between desired and ambient temperature, etc.).
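The steps above can be sketched in a few lines of Python. The machine counts, wattages, and efficiencies here are made-up placeholders, not figures from the text:

```python
machines = [
    # (count, rated watts, assumed supply efficiency) -- example values only
    (10, 400, 0.70),   # e.g. rackmount servers
    (4, 14.4, 0.50),   # e.g. hubs on wall-wart supplies
]

# Steps one through five: per-type waste (heat) power, summed over
# the room and expressed in kilowatts.
total_waste_kw = sum(
    count * watts * (1 - eff) for count, watts, eff in machines
) / 1000.0

print(total_waste_kw)  # about 1.23 kW of heat for this made-up room
```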
So let's go in order!
One - how much power does it use?
The easiest way to figure this out is to look at the equipment specifications. If it's commercial/industrial gear (Dell servers, Macs, Sun servers, 3Com hubs, Xerox printers, Cisco switches, etc.) it will have a power load figure associated with it, expressed in watts or in volt-amps. For example, a PC you build might have a '250 watt power supply.' A 3Com hub might have a wall-wart power block stating that it puts out 12 V at 1.2 A; that's 12 × 1.2 = 14.4 watts.
Two - what's its efficiency?
This is harder. I have found, however, that most power supplies rated for computing equipment claim to run at a minimum of around 60-65% efficiency at full load. Good ones (PC Power and Cooling, etc.) may run at 75-80%. Smaller devices tend to run at 40-50%, since they're stepping down the voltage in the external transformer, and their small drain means the inefficiency isn't as much of a problem. So I use 70% as a rule of thumb for computers and 50% for anything with an external power supply.
Now, there is a school of thought which says that you should just use the total dissipated power rating of the machines. I don't agree for the following reasons: one, machines very very infrequently run at the max sustained rated wattage (if they are, you're doing something else wrong!). Two, *over*estimating the cooling by doing this can get very, very, very expensive both in HVAC equipment and power budgeting. If you're a colocation data center, sure, err on the high side; for an air-conditioned closet for the dev servers, well...
Three - Figure out the wasted power.
Let's say we have a 400 W machine running at 75% efficiency. That means that it's putting out 100W of straight heat energy whenever it's running. Work this out for each box.
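That per-box arithmetic, as a throwaway Python helper (the 400 W and 75% figures are just the example above):

```python
def waste_watts(rated_watts, efficiency):
    # Under the article's model, power not converted usefully
    # is dissipated as heat.
    return rated_watts * (1.0 - efficiency)

print(waste_watts(400, 0.75))  # 100.0 watts of heat
```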
Four and five - Count machines and multiply.
Six - Figure out your units.
Make sure your figures to date are in kilowatts. Conversion factors from that point are as follows.
BTUs/hr = kW * 3412.14
Cooling tons = BTUs/hr / 12,000
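Those two conversions, as Python helpers (the 1.23 kW input is an arbitrary example):

```python
def kw_to_btu_per_hr(kw):
    return kw * 3412.14          # 1 kW = 3412.14 BTU/hr

def btu_per_hr_to_tons(btu_hr):
    return btu_hr / 12000.0      # 1 cooling ton = 12,000 BTU/hr

# e.g. 1.23 kW of waste heat comes out to about a third of a ton:
tons = btu_per_hr_to_tons(kw_to_btu_per_hr(1.23))
print(tons)
```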
Seven - Figure in static load.
A good bit of Kentucky windage here is to use the 'residential' calculation based on the floor space of the server area. If the ambient temperature around the server room averages more than 20 degrees (Fahrenheit) hotter than the desired level, double that residential load figure.
At this point, you should have a rough number for your cooling requirements. There are a few other things to keep in mind. First, if you're purchasing a residential-type A/C (through-window, maybe), make sure it has a duty cycle adequate for the job; most residential units are specced on the assumption that they'll be in use around 30% of the time. Also, allow a fudge factor! I'll typically double the number if I can get away with it, and won't let anything less than a 25% 'safety allowance' get by. Finally, think carefully about what sort of gear you have. Not all the waste heat comes from the power supply: if you have big RAID arrays, or just lots of disks, remember that drives generate lots of heat. You can find out how much power each drive draws and use that as a guide.
Some other useful bits. Most large colocation providers use a standard estimate for cooling requirements. Finding out what this is can help figure out how on the ball they are. I have found that several use a figure of 50 W/sq. ft. when estimating what the machine load will be. Given that a standard relay rack (41U) takes up (along with its associated surrounding clearances) just over 20 sq. ft., this means they are allocating 1 kW per rack. That's not too good, especially if you're planning on using 1U servers in your colo rack to maximize space efficiency, where you might end up with say 35 or 40 servers running 150-200 watt power supplies each in that rack.
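A quick sanity check of that arithmetic in Python, using the text's hypothetical rack of 1U servers (175 W is just the midpoint of the 150-200 watt range):

```python
# The provider's planning figure: 50 W/sq ft over a ~20 sq ft rack footprint.
planned_w_per_sqft = 50
rack_footprint_sqft = 20
allocated_w = planned_w_per_sqft * rack_footprint_sqft   # 1000 W per rack

# The text's hypothetical dense rack of 1U servers.
servers = 40
watts_each = 175
actual_w = servers * watts_each                          # 7000 W per rack

print(actual_w / allocated_w)  # the full rack draws 7x its allocation
```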
All is not lost; if their planning is designed to allow for localized hot spots while covering the average needs of the whole facility, you're OK. Still, that means that if more than a few of their clients start looking to fill up their racks, there are going to be infrastructure cooling problems. Some of the higher-end providers use higher numbers when planning (a recent Exodus facility I was jawing about with some of their guys uses a figure of 120 W per square foot and allows for hot spots).