365main Cooling
Jan 22, 2007
The air coming into my rack in colo3 right now is in the mid 80s.
I seem to have gone over this a hundred times, but I wanted to get feedback from the community. Has anyone out there been able to really make liquid cooling work in their data center, considering the additional space utilization and the expensive price point?
We've found that liquid cooling is vastly more expensive than what we are doing, and I'm curious what others out there might be doing.
For us, we've found that high-density air cooling (top-down delivery, a capped hot/cold row configuration, and overhead air extraction) can get us to a maximum density of about 10 kW per rack. We use a chilled water system for the additional HVAC needed to get above the standard 5 kW max, but that chilled water serves the air handlers themselves, not in-row rack cooling. The problem is that in-row chilled water cooling makes us lose a cabinet of space for every cabinet of higher density. For this to work (in the simple case) we would need to attain rack densities of at least 20 kW, given the loss of floor space.
We use APC cabinets and infrastructure, and even they have said that if you try to get above 20 kW with liquid cooling it becomes problematic. Given these constraints, in addition to the overwhelming cost, is anyone out there actually pursuing this in a colocation environment? To me it just doesn't seem to make sense.
Perhaps the cost needs to come down, but even then, what are you gaining if it's a one-for-one trade-off?
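A rough back-of-the-envelope sketch of that trade-off, using only the figures quoted above (10 kW per air-cooled rack, and one floor position lost to an in-row cooler for each cooled cabinet); the numbers and function names are illustrative assumptions, not vendor specifications:

```python
# Effective density per floor position: air-cooled racks vs. in-row
# chilled water that consumes one cabinet position per cooled cabinet.
# All figures are assumptions from the post above, for illustration only.

def effective_density_air(kw_per_rack: float) -> float:
    """Air-cooled rack: every floor position holds IT load."""
    return kw_per_rack

def effective_density_inrow(kw_per_rack: float, positions_per_cooler: int = 1) -> float:
    """In-row cooling: each cooled rack gives up floor positions to the cooler itself."""
    return kw_per_rack / (1 + positions_per_cooler)

if __name__ == "__main__":
    air = effective_density_air(10.0)       # 10 kW per floor position, air-cooled
    liquid = effective_density_inrow(20.0)  # 20 kW rack, but half the floor goes to coolers
    print(f"Air-cooled:      {air:.1f} kW per floor position")
    print(f"In-row at 20 kW: {liquid:.1f} kW per floor position")
    # Both work out to 10 kW per floor position -- the one-for-one trade-off
    # described above; in-row only wins if rack densities exceed ~20 kW.
```

In other words, at 20 kW per liquid-cooled rack you are only breaking even on floor space, which is why the break-even density matters so much here.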
Is having cool air delivered from the top of the rack better than from the bottom of the rack (using a raised floor), or vice versa?
The company I work for is starting to have some cooling issues in our server area. We have basically doubled our server equipment in the room without upgrading the air conditioning unit, and temperatures are getting really high (sometimes reaching 85F, which is just wrong!), so we are trying to get the unit replaced.
Since my boss is the one making decisions I thought I would try to help him out a little bit with some of the information/calculations he needs.
My questions are:
1. When calculating watts for a redundant power supply, do you double what the power supply is rated for? One Dell PowerEdge 2950 shows a 750W redundant supply. Would that be 1500 watts?
2. Do you calculate your cooling needs for max wattage or do you calculate a percentage of what each power supply is rated for?
3. What percentage would you multiply your final total Watts by to accommodate possible future additions? 10%? 25%?
We currently have:
2x PowerEdge 2950 - 750W Redundant
1x PowerEdge 1950 - 670W Redundant
2x PowerVault NX 1950 - 670W Redundant
1x PowerVault MD3000
3x APC Smart UPS 3000
1x APC Smart UPS 1400
2x PowerConnect 6224
2x PowerConnect RPS-600
3x PowerConnect 5324
1x Cisco 2801
1x Cisco 2821
3x Servers of unknown wattage, redundant power (all with the same specs)
1x Server of unknown wattage, redundant power
1x Server of unknown wattage, single power supply
Then a few other small devices like external modems, KVM and a small-mid sized phone system.
I am not an electrical expert, but I am calculating that it is going to be about 10,000-12,000 watts. Does that sound right?
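Not an authoritative answer, but a quick sizing sketch for the three questions above. The conversions used (1 W = 3.412 BTU/hr, 12,000 BTU/hr per ton of cooling) are standard; the load figure and growth factor are placeholders to swap for your own measured or nameplate numbers:

```python
# Rough cooling-capacity estimate from an electrical load estimate.
# Conversions: 1 W = 3.412 BTU/hr; 1 ton of cooling = 12,000 BTU/hr.

WATT_TO_BTU_HR = 3.412
BTU_HR_PER_TON = 12_000

def cooling_needed(total_watts: float, growth_factor: float = 1.25):
    """Return (BTU/hr, tons) for a given IT load plus growth headroom."""
    btu_hr = total_watts * growth_factor * WATT_TO_BTU_HR
    return btu_hr, btu_hr / BTU_HR_PER_TON

if __name__ == "__main__":
    # Re question 1: a redundant 750 W supply pair still feeds one ~750 W
    # load; the second supply is for failover, so it is not doubled here.
    estimated_load_w = 11_000  # placeholder: midpoint of the 10-12 kW estimate above
    btu, tons = cooling_needed(estimated_load_w, growth_factor=1.25)  # 25% headroom (question 3)
    print(f"{btu:,.0f} BTU/hr  (~{tons:.1f} tons of cooling)")
```

On question 2, actual draw is usually well below the power supply rating, so sizing from nameplate wattage plus a headroom factor like the 25% above tends to be conservative.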
Equinix Secaucus NY2 (275 Hartz) seems to have had a cooling problem for the last hour. The whole DC has become hot. I went out an hour ago, came back, and it's so hot now. I don't have a temperature reading, but I think it's around 85F (up from around 65F typical).
If you are in the DC, have fun working... I wonder how much longer I can work here. Hope they fix this soon, or else...
I'm speccing out a box for small dedicated servers and am looking at the SC512 with a desktop board (the Intel D964GZISSL, since I don't have to cut anything). Should I be using active cooling since it's a desktop board, or will a large passive heatsink be sufficient? I'll be using Celeron Ds, probably a 356. Its power usage is quite low.
It looks like the heatsink on the motherboard might restrict airflow a bit.
We are not rack dense in a standard 42U rack.
We currently put 2U solid spacers (blanking panels) on the front. As well, I run the switches with the ports facing forward and use a cable manager on the top and bottom of the switches.
I can see from some of your pics in the other thread that you place the networking equipment in reverse. It depends on which way the fans face on the networking equipment, as I don't like to have fans facing the inlet portion of the rack.
Any theories or practices y'all follow?