We currently put 2U solid spacers on the front. I also run the switches with the ports facing forward and use a cable manager above and below the switches.
I can see from some of your pics in the other thread that you mount the networking equipment in reverse. It depends on which way the fans face on the networking equipment, as I don't like to have fans facing the inlet side of the rack.
I seem to have gone over this a hundred times, but I wanted to get feedback from the community. Has anyone out there been able to really make liquid cooling work in their data center, considering the additional space it consumes and its expensive price point?
We've found liquid cooling to be vastly more expensive than what we are doing, and I'm curious what others out there might be doing.
For us, we've found that high-density air cooling (top-down airflow, a capped hot/cold row configuration, and overhead air extraction) can get us to a maximum density of about 10 kW per rack. We use a chilled water system for the additional HVAC needed to get above the standard 5 kW max, but this is for the air handlers themselves, not chilled water for in-row rack cooling. The problem is that in-row chilled water cooling makes us lose a cabinet of floor space for every cabinet of higher density. For this to work (in the simple case) we would need to attain rack densities of at least 20 kW, given the loss of floor space.
We use APC cabinets and infrastructure, and even they have said that if you try to get above 20 kW with liquid cooling it becomes problematic. Given these governing characteristics, in addition to the overwhelming cost, is anyone out there actually pursuing this in a colocation environment? To me it just doesn't seem to make sense.
Perhaps the cost needs to come down, but even then, what are you gaining if it is a 1-for-1 trade-off?
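To put rough numbers on that trade-off (these are my own back-of-the-envelope figures, not vendor data):

# back-of-the-envelope: effective kW per floor position (my own assumed figures)
air_cooled_kw = 10                 # what we already get per rack with contained air cooling
liquid_rack_kw = 20                # density a liquid-cooled rack would need to hit
floor_positions = 2                # the IT rack plus the in-row cooler that displaces a cabinet

effective_liquid_kw = liquid_rack_kw / floor_positions
print(effective_liquid_kw, air_cooled_kw)   # 10.0 vs 10 - no gain until well above 20 kW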
I'm planning on putting together a small, efficient 1U server to run some Windows applications. This is what I have planned for it; I'm attempting to keep it under 1 amp @ 120V. I've been having trouble finding benchmarks for similar setups.
ASUS RS100-E5/PI2 1U Barebone Server (220W PSU)
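My own sanity check on the power budget (the efficiency figure is just an assumption, not the rating of this particular PSU):

# 1 A at 120 V is the wall budget; the PSU wastes some of it as heat
wall_budget_w = 1.0 * 120           # 120 W at the outlet
psu_efficiency = 0.75               # assumption for a generic 220 W 1U supply, not a measured figure
dc_budget_w = wall_budget_w * psu_efficiency
print(dc_budget_w)                  # ~90 W left for the board, CPU, RAM and disks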
I've been investigating 1U-3U cases, and I've been wondering about the power supply units that power the boxes.
I know that Supermicro has very high-efficiency power supply units in their chassis, but I am also wondering about the others, like the Asus and Tyan chassis. The reason is that I am looking into purchasing servers, and I would rather have ones with efficient power supply units than those stodgy Dell units, which are known for not being particularly efficient.
If anyone knows of other 1U chassis that come with efficient power supplies, I would like to know.
With all the high-power servers and blade servers out there, a 40A (@ 110V) power limit is way too small. I am wondering if there is any colo space targeted at high-density applications, e.g. with a 10 kW/cab limit and 60A @ 208V power drops. Does anybody know of such high-density colocation space? The East Coast is preferred.
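For comparison, the arithmetic I'm working from (assuming the usual 80% continuous-load derating on the breaker):

# usable power per drop, assuming the usual 80% continuous-load derating
def usable_kw(amps, volts, derate=0.8):
    return amps * volts * derate / 1000

print(usable_kw(40, 110))   # ~3.5 kW - the standard drop, nowhere near 10 kW/cab
print(usable_kw(60, 208))   # ~10.0 kW - what a 60 A @ 208 V drop would support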
The company I work for is starting to have some cooling issues in our server area. We have basically doubled our server equipment without upgrading the air conditioning unit in the room. Temperatures are getting really high (sometimes reaching 85°F, which is just WRONG!) and we are trying to get the unit replaced.
Since my boss is the one making decisions I thought I would try to help him out a little bit with some of the information/calculations he needs.
My questions are:
1. When calculating watts for a redundant power supply do you double what the power supply is rated for? So on one Dell PowerEdge 2950 it shows that I have a 750W redundant supply. Would that be 1500 Watts?
2. Do you calculate your cooling needs for max wattage or do you calculate a percentage of what each power supply is rated for?
3. What percentage would you multiply your final total Watts by to accommodate possible future additions? 10%? 25%?
We currently have:
2x PowerEdge 2950 - 750W redundant
1x PowerEdge 1950 - 670W redundant
2x PowerVault NX1950 - 670W redundant
1x PowerVault MD3000
3x APC Smart-UPS 3000
1x APC Smart-UPS 1400
2x PowerConnect 6224
2x PowerConnect RPS-600
3x PowerConnect 5324
1x Cisco 2801
1x Cisco 2821
3x unknown-wattage servers with redundant power (all the same specs)
1x unknown-wattage server with redundant power
1x unknown-wattage server with a single power supply
Then a few other small devices like external modems, a KVM and a small/mid-sized phone system.
I am not an electrical expert but I am calculating that it is going to be about 10,000 - 12,000 Watts?
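In case it helps anyone checking my numbers, this is roughly how I'm approaching it. On question 1, my understanding is that a redundant pair of 750W supplies still only feeds one server's worth of load, so I'm not doubling them. The load factor and headroom below are assumptions, not measurements from our gear:

# rough cooling estimate from my load guess (assumed values, not measured draw)
estimated_load_w = 11000        # middle of my 10,000-12,000 W nameplate estimate
load_factor = 0.6               # gear rarely pulls full nameplate; this is an assumption
growth_headroom = 1.25          # question 3: allow 25% for future additions

watts_of_heat = estimated_load_w * load_factor * growth_headroom
btu_per_hour = watts_of_heat * 3.412       # 1 W of IT load ~= 3.412 BTU/hr of heat
tons_of_cooling = btu_per_hour / 12000     # 1 ton of cooling = 12,000 BTU/hr
print(round(btu_per_hour), round(tons_of_cooling, 1))   # ~28150 BTU/hr, ~2.3 tons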
Equinix Secaucus NY2 (275 Hartz) seems to have had a cooling problem for the last hour. The whole DC has become hot. I went out an hour ago, came back, and it's so hot now. I don't have a temperature reading, but I think it's around 85°F (up from the typical ~65°F).
If you are in the DC, have fun working... I wonder how much longer I can work here. Hope they fix this soon, or else...
I'm speccing out a box for small dedicated servers and am looking at the SC512 with a desktop board (the Intel D964GZISSL, since I don't have to cut anything). Should I be using active cooling since it's a desktop board, or will a large passive heatsink be sufficient? I'll be using Celeron Ds, probably a 356. Its power usage is quite low.
It looks like the heatsink on the motherboard might restrict airflow a bit.
A company is offering me a 40U rack with 16A for €600 per month. Will that be enough power to fill the rack with Dell R200/R300 servers with dual- and quad-core processors?
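A quick sanity check, assuming 230V single-phase and a guessed per-server draw (I don't have measured numbers for the R200/R300):

# will 16 A feed a rack full of R200/R300s? (per-server draw is a guess, not a Dell spec)
usable_w = 16 * 230 * 0.8       # ~2.9 kW if the feed is only loaded to 80% continuously
per_server_w = 110              # rough guess for an R200/R300 under typical load
print(int(usable_w // per_server_w))   # ~26 servers - well short of a 40U rack of 1U boxes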
I woke up to a complaint from the ISP about spamming from one of our servers: more than 10,000 spam emails in a shared hosting environment. Found Steven online, thank God! I PMed him and he went to work looking for the culprit. He spent time monitoring and putting in scripts to catch it, and in no time found the faulty script causing the spam.
He really saved me a big headache in explaining things to the ISP. Thanks for the consistently excellent work done by Steven of Rack911.
Those unpatched forums of clients can really be a hassle and a big source of problems.
Our server count with The Planet only seems to be increasing as of late and I'm now starting to drive myself nuts with bandwidth counts, costs, etc.
My main concern at the moment is our total bandwidth. While we might have a server with a 2500GB limit using only 50% of it, we might have one with a 1500GB limit using 200%. I understand that any overages are our own fault, etc., but there must be a way for us to combine all bandwidth across all servers!
Is it possible for The Planet or any of the other big boys to provide private racks with pooled bandwidth without going colo?
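Just to illustrate with the numbers from my example above (not real billing data), pooling would turn a big overage into a small one:

# per-server caps vs. one pooled cap, using the numbers from my example (GB)
limits = [2500, 1500]
used   = [2500 * 0.5, 1500 * 2.0]    # 50% of the first, 200% of the second

per_server_overage = sum(max(u - l, 0) for u, l in zip(used, limits))
pooled_overage = max(sum(used) - sum(limits), 0)
print(per_server_overage)   # 1500.0 GB billed as overage today
print(pooled_overage)       # 250.0 GB if the allowances were pooled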
I honestly don't understand why Dell, HP and the others price their 1U TFT monitors at three times the cost of their cheapest laptop.
I mean, don't get me wrong, I am all for spending good money to get quality products, but I feel very awkward spending three times as much for a screen and keyboard when I can get their laptops, WITH OS, MEMORY AND HDD, for a third of the price and use one in place of the 1U TFT monitor.
I can get a powerful server from Dell and HP at that price for crying out loud.
But then again, I might be seriously overlooking something here, because what justifies such a high price?
We have about 50 cPanel servers in our own AS with two upstream providers. On the cPanel servers we use the following IPs in /etc/resolv.conf:
1. IP of the cpanel server
2. DNS IP of the 1st upstream provider
3. DNS IP of the 2nd upstream provider
I have realized that the upstream providers' nameservers do not answer that fast, so I was thinking of building my own DNS server, which I could list additionally after the IP of the cPanel server.
Is this a good idea, or is it not necessary? If it is a good idea, which DNS daemon would be recommended? If we build this server, it might also be nice if we could offer DNS as a standalone service. Is there any solution where we could create user accounts so users could manage their own DNS zones?
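For what it's worth, the /etc/resolv.conf layout I have in mind would look something like this (all addresses are placeholders from documentation ranges, not our real IPs):

# /etc/resolv.conf - sketch only; addresses are placeholders
# 1) the cPanel server's own IP (its local resolver)
nameserver 192.0.2.10
# 2) the new caching DNS server I'm thinking of building
nameserver 192.0.2.53
# 3) 1st upstream provider's resolver, kept as a last resort
nameserver 198.51.100.10
# glibc only reads the first three nameserver lines, so the 2nd upstream would drop off;
# shorter timeouts make failover to the next entry quicker
options timeout:2 attempts:2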
I have 10 servers and it costs me $1,713 monthly. I have decided to get a rack and buy 10 servers from Dell, but the problem is: I don't know anything about racks.
What do you guys think about putting 40 amps into one rack? Our colocation provider wants to whine about it and not allow it. When we're paying them $1000+ a month, I think this is just shoddy. They say it's for heat concerns, but really this just makes me mad. We have fifteen 1U servers in there, and can't get much more on our existing 20 amps.
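For the record, the arithmetic behind my frustration (assuming 120V circuits and the usual 80% continuous-load rule):

# existing 20 A circuit vs. the 40 A we want, assuming 120 V and the 80% rule
servers = 15
usable_20a = 20 * 120 * 0.8     # 1920 W usable today
usable_40a = 40 * 120 * 0.8     # 3840 W usable if they allowed 40 A
print(usable_20a / servers)     # 128 W per server - basically maxed out
print(usable_40a / servers)     # 256 W per server - room to actually grow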
Does anyone have any recommendations for where I can get a few cheap (full-length) rack shelves in the UK? They don't have to be adjustable, but I would prefer full-depth ones rather than the half ones, as I don't think those would take the weight properly.
I was having problems with a host and Ed (one of the owners) helped me out creating a custom package to fit my needs and moving my sites across.
I've posted more than my fair share of tickets and they all get responded to quickly. All the tickets have either been my own problems, for example installing scripts, or sales questions.
I have had bad experiences with zero-U PDUs (I have only tried the APC ones). They keep getting in the way of equipment when you put them in the back of the cabinet... I usually end up just standing them up in the back of the cabinet and zip-tying them to something so they don't fall down.
Am I just stupid and using these wrong, or do other people have this issue? If you have an extra-long server that sticks out the back, it bumps into the PDU, so you have to nestle the PDUs into a corner of the cabinet.
What PDUs is everyone using in various colos? I might go with the 2U rack PDUs, but needing two of them means 4U wasted (and since they are not very long, you need some long cable runs for all of the equipment to reach them).
My nearest major city is Manchester, so naturally I'm looking for rackspace in the region.
Unless anyone has better suggestions, I'm thinking of going with NorthernColo. They start at £50/month but jump to £70/month if you draw more than 1A of current.
If my basic physics is anything to go by, 0.5A at our 240V means a maximum server power rating of 120 watts.
...are there any dual-core / 2GB RAM box configurations which consume less than 300W these days? My own USB mouse for my laptop consumes 50mA.
Otherwise I'm beginning to think of their 0.5A pricing as being a bit of a scam, since the 1A price also pays for 2U worth of space.
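Doing the maths on the two tiers (my own figures, ignoring the space allowance; NorthernColo may work it out differently):

# price per watt of the two tiers at 240 V
half_amp_watts = 0.5 * 240          # 120 W budget on the £50 tier
one_amp_watts = 1.0 * 240           # 240 W budget on the £70 tier
print(50 / half_amp_watts)          # ~£0.42 per watt per month
print(70 / one_amp_watts)           # ~£0.29 per watt per month, and it includes 2U of space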