I need to rackmount four servers, but they're going to be very close to an employee's desk. I'm trying to decide whether to go with an open or enclosed rack. An open rack would be best for cooling, which would mean I could use slower fans. However, an enclosed rack seems like it might muffle some of the sound. Or would the additional metal just cause rattling?
My nearest major city is Manchester, so naturally I'm looking for rackspace in the region.
Unless anyone has better suggestions, I'm thinking of going with NorthernColo. They start at £50/month but jump to £70/month if you draw more than 1A of current.
If my basic physics is anything to go by, 0.5A at our 240V means a maximum server power rating of 120 watts.
...are there any dual-core / 2GB RAM box configurations which consume less than 300W these days? My own USB mouse for my laptop draws 50mA.
Otherwise I'm beginning to think of their 0.5A pricing as a bit of a scam, since the 1A price also pays for 2U worth of space.
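To sanity-check those numbers, here's a quick sketch of the power budget at each tier (assuming a power factor of 1, i.e. a purely resistive load; a real server PSU draws somewhat more apparent power than this):

```python
# Rough power-budget check for the 0.5A / 1A tiers (assumes power factor = 1).
VOLTAGE = 240  # UK mains, volts

def max_watts(amps, volts=VOLTAGE):
    """Maximum continuous draw allowed at a given current limit."""
    return amps * volts

print(max_watts(0.5))  # 120 W budget on the 0.5A tier
print(max_watts(1.0))  # 240 W budget on the 1A tier

# For scale, a 50 mA USB mouse at 5 V is a tiny fraction of that:
mouse_watts = 0.050 * 5
print(mouse_watts)  # 0.25 W
```

So the 0.5A tier really does cap you at roughly a 120 W box, under these assumptions.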
We are allowed up to 8 amps (240V) in our rack in a DC in London, and the rack has a power meter. The power meter reads 6.9 amps, but I had an e-mail from the DC saying that, using their calibrated power meter, we were drawing 9.6 amps!
That is quite a difference. I've measured the kit I put in there before using a cheap power meter, and 6.9 sounds about right. I reckon the data centre is over its power and cooling capacity and is trying to get everyone to reduce their draw. In fact, I've been told that at renewal I'll have to go down to 5 amps for a whole 47U rack!
Which figure should I believe? They're a major ISP, but it all sounds a bit dodgy to me.
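One possible explanation (my speculation, not anything the DC has confirmed): cheap plug-in meters often report real power in watts, while the DC's meter may be reading true RMS current, i.e. apparent power. With a poor power factor the two readings diverge by exactly this sort of margin:

```python
# Sketch: how a power-factor mismatch could explain 6.9A vs 9.6A.
# Assumes the cheap meter effectively reports real power (shown as "amps"
# at unity PF) while the DC meter reads true RMS current. The 0.72 power
# factor below is a fitted guess, not a measured value.
VOLTS = 240

def apparent_amps(real_amps_at_unity_pf, power_factor):
    """RMS current actually drawn for a given real-power reading."""
    real_watts = real_amps_at_unity_pf * VOLTS
    apparent_va = real_watts / power_factor
    return apparent_va / VOLTS

print(round(apparent_amps(6.9, 0.72), 1))  # ~9.6 A
```

A power factor around 0.7 is plausible for older non-PFC supplies, so both meters could be "right" about different quantities.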
Looking for a PDU with an Ethernet remote power reboot option that has a power connector other than NEMA L6-30P:
[url]
As I only have half a rack, my colo provider won't give me that kind of outlet.
Do you guys know of a PDU (Ethernet remote power reboot is a must; at least 15-20 outlets) with a standard NEMA connector like we have at home?
I also checked Home Depot for a NEMA x to NEMA y adapter, but they don't have one.
I came to this page to see if one exists, but there's no info there: [url]
- Found a great price for a full cab in California - they only provide power/cabinet and a network drop
Now I need Windows and Plesk licenses. I know Plesk doesn't allow resellers/external licenses. I am still waiting for a response to the message I left on their voicemail TWO WEEKS ago.
Now, most importantly, are there any options for Windows licenses? Another option besides paying ~$800 for them at Newegg?
How often does a colo provider's datacenter go down? I'm not talking about resellers or their racks, but the primary provider itself.
This has been the 2nd time (this year, I believe) that my datacenter at NAC has suffered a complete power outage [url]: their backups failed, and my entire rack of servers was power-cycled.
Luckily I am not a web host but I am running some critical public web services/sites. I have all of the equipment to manage my own colocated machines from afar (monitoring, remote reboot hardware, and KVM/IP hardware for all of my machines) but I'm dead in the water if my datacenter's power is out.
I always ease my pain during a network or power outage by visiting DSLReports. Their HUGE website is hosted in the same datacenter (probably the same room) as mine, and while it's a terrible thing to say, being able to share the downtime with a bigger fish makes it easier to handle.
With all the high-power servers/blade servers, the 40A (@ 110V) power limit is way too small. I am wondering if there is any colo space targeted at high-density applications, e.g. with a 10 kW/cab limit from 60A @ 208V power drops. Does anybody know of such high-density colocation space? East coast preferred.
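For comparison, the raw numbers behind those two figures, assuming the usual 80% continuous-load derating on the breaker:

```python
# Usable per-cabinet power at different drops (80% continuous-load derating assumed).
def usable_kw(amps, volts, derate=0.8):
    return amps * volts * derate / 1000

print(usable_kw(40, 110))  # 3.52 kW - a typical 40A @ 110V cab
print(usable_kw(60, 208))  # ~9.98 kW - roughly the 10 kW/cab figure
```

So a 60A @ 208V drop is close to triple the usable power of the common 40A @ 110V feed.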
Do dual power supplies use more power than a single supply?
E.g. say I have a server that uses two amps, powered by a single power supply. Now if I switch to a dual supply (and say each supply has the same efficiency rating as the single), does my server use more power? How much more?
My simple view is that it probably does, but maybe not by much. The second power supply consumes some power itself, but since it's not under load, it doesn't consume much. So my server with redundant supplies might use 2.1A or 2.2A.
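A back-of-envelope version of that intuition (all the efficiency and overhead figures below are illustrative assumptions, not vendor specs): in an active/active pair each supply carries about half the load, which often sits lower on its efficiency curve, plus each supply burns a little idle overhead.

```python
# Back-of-envelope dual-PSU overhead estimate. Efficiency and overhead
# numbers are illustrative guesses, not measurements or vendor data.
VOLTS = 120
DC_LOAD_WATTS = 200          # what the server's components actually need

def wall_amps(dc_watts, efficiency, idle_overhead_watts=0):
    ac_watts = dc_watts / efficiency + idle_overhead_watts
    return ac_watts / VOLTS

single = wall_amps(DC_LOAD_WATTS, efficiency=0.80)
# Dual: each PSU delivers half the load at slightly worse part-load
# efficiency, plus ~5 W of idle/housekeeping overhead each (assumed).
dual = 2 * wall_amps(DC_LOAD_WATTS / 2, efficiency=0.77, idle_overhead_watts=5)

print(round(single, 2), round(dual, 2))  # 2.08 vs 2.25 amps
```

Under these made-up but plausible numbers, the redundant pair draws roughly 8% more, which lines up with the 2.1-2.2A guess.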
A company is offering me a 40U rack with 16A for €600 per month. Will that be enough power to fill the rack with Dell R200/R300 servers with dual- and quad-core processors?
I woke up to a complaint by the ISP of spamming for one of our servers. More than 10 000 spams in a shared hosting environment. Found Steven online, thank God! PM him and he went to work looking for the culprit. He spend time monitoring and putting in scripts to catch the culprit and in no time found the faulty script causing the spams.
Really saved me a big headache on how to explain to the ISP. Thanks to the consistent excellent work done by Steven of Rack911.
Those unpatched forums of clients can really be a hassle and a big source of problems.
Our server count with The Planet only seems to be increasing as of late and I'm now starting to drive myself nuts with bandwidth counts, costs, etc.
My main concern at the moment is our total bandwidth. We might have a server with a 2500GB limit using only 50% while a server with a 1500GB limit uses 200%. I understand that any overages are our own fault, etc, but there must be a way for us to combine all bandwidth across all servers!
Is it possible for The Planet or any of the other big boys to provide private racks with pooled bandwidth without going colo?
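Pooling matters because overage on one box can't be offset by headroom on another. Using the example figures above (the billing split is illustrative; this isn't The Planet's actual pricing model):

```python
# Why pooled bandwidth helps, using the 2500GB@50% / 1500GB@200% example.
servers = [
    {"limit_gb": 2500, "used_gb": 2500 * 0.5},   # 50% of a 2500GB cap
    {"limit_gb": 1500, "used_gb": 1500 * 2.0},   # 200% of a 1500GB cap
]

total_limit = sum(s["limit_gb"] for s in servers)   # 4000 GB
total_used = sum(s["used_gb"] for s in servers)     # 4250 GB

# Billed per server: only the second box is over, by 1500 GB.
per_server_overage = sum(max(0, s["used_gb"] - s["limit_gb"]) for s in servers)
# Billed against a pooled cap: only 250 GB over in total.
pooled_overage = max(0, total_used - total_limit)

print(per_server_overage, pooled_overage)  # 1500.0 250.0
```

Same total transfer, but a sixth of the billable overage when the caps are pooled.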
I honestly don't understand why Dell, HP, and the others price their 1U TFT monitors at three times the cost of their cheapest laptop.
I mean, don't get me wrong, I'm all for spending good money on quality products, but I feel very awkward spending three times as much for a screen and keyboard when I can get their laptops WITH OS, MEMORY AND HDD for a third of the price and use one in place of the 1U TFT monitor.
I can get a powerful server from Dell and HP at that price for crying out loud.
But then again, I might be seriously overlooking something here, because what justifies such a high price?
We have about 50 cPanel servers in our own AS with two upstream providers. On the cPanel servers we use the following IPs in /etc/resolv.conf:
1. IP of the cPanel server itself
2. DNS IP of the 1st upstream provider
3. DNS IP of the 2nd upstream provider
I've realized that the upstream providers' nameservers don't answer that fast, so I'm thinking of building my own DNS server, which I could use in addition, after the cPanel server's IP.
Is this a good idea, or is it unnecessary? If it is a good idea, which DNS daemon would you recommend? If we build this server, it might also be nice to offer DNS as a standalone service. Is there any solution where we could create user accounts so users could manage their own DNS zones?
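Before building anything, it may be worth measuring how slow the upstream resolvers actually are. A quick sketch (nothing cPanel-specific; it just times lookups through whatever order /etc/resolv.conf currently lists):

```python
# Time hostname lookups through the system resolver (/etc/resolv.conf order).
# Run it a few times against your own domains: with a local caching DNS
# server in place, repeat lookups should drop close to zero.
import socket
import time

def lookup_ms(hostname):
    """Return how long one resolution takes, in milliseconds."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print("localhost", round(lookup_ms("localhost"), 1), "ms")
```

Comparing those timings before and after adding a local caching daemon would tell you whether the extra server is worth running.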
I have 10 servers and it costs me $1,713 monthly. I've decided to get a rack and buy 10 servers from Dell, but the problem is: I don't know anything about racks.
What do you guys think about putting 40 amps into one rack? Our colocation provider wants to whine about it and won't allow it. When we're paying them $1000+ a month, I think this is just shoddy. They say it's due to heat concerns, but really this just makes me mad. We have fifteen 1U servers in there and can't get much more out of our existing 20 amps.
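For what it's worth, the heat load they're worried about is easy to quantify, since essentially every watt a rack draws ends up as heat the cooling has to remove (120V circuits assumed here):

```python
# Heat load of a rack: practically all electrical power becomes heat.
VOLTS = 120  # assumed circuit voltage

def btu_per_hour(amps, volts=VOLTS):
    watts = amps * volts
    return watts * 3.412  # 1 W is about 3.412 BTU/hr

print(round(btu_per_hour(20)))  # the existing 20A drop: ~8189 BTU/hr
print(round(btu_per_hour(40)))  # the requested 40A drop: ~16378 BTU/hr
```

Doubling the amperage doubles the BTU load on their cooling for that spot, which is presumably what they're pushing back on, fair or not.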
Does anyone have any recommendations for where I can get a few cheap (full-length) rack shelves in the UK? They don't have to be adjustable, but I'd prefer full shelves rather than half ones, as I don't think those would take the weight properly.
I was having problems with a host and Ed (one of the owners) helped me out creating a custom package to fit my needs and moving my sites across.
I've posted more than my fair share of tickets and they all get responded to quickly. All the tickets have either been my own problems for example installing scripts or sales questions.
I have had bad experiences with zero-U PDUs (I've only tried the APC ones). They keep getting in the way of equipment when you mount them in the back of the cabinet.... I usually end up just standing them up in the back of the cabinet and zip-tying them to something so they don't fall down.
Am I just stupid and using these wrong, or do other people have this issue? If you have an extra-long server that sticks out the back, it bumps into the PDU, so you have to nestle the PDUs into a corner of the cabinet.
What PDUs is everyone using in their various colos? I might go with the 2U rack PDUs, but needing two of them means 4U wasted (and since they are not very long, you need some long cable runs for all of the equipment to reach them)..