Equinix Secaucus NY2 (275 Hartz) seems to have had a cooling problem for the last hour. The whole DC has become hot. I went out an hour ago, came back, and it's so hot now. I don't have a temperature reading, but I think it's around 85F (up from around 65F typical).
If you are in the DC, have fun working... I wonder how much longer I can work here. Hope they fix this soon, or else...
Does anyone have experience with voxel.net as a bandwidth provider? I am considering arranging for IP Transit from them in one of the datacenters they operate in.
I've seen some old concerns raised on these forums about their handling of a server move some time ago (affecting their hosting customers). It could have been misrepresented, and it could be a thing of the past. Just wondering how professional/reliable they've been lately.
Also, any other suggestions for quality reliable bandwidth at Equinix Secaucus?
I seem to have gone over this a hundred times, but wanted to get feedback from the community. Has anyone out there been able to really make liquid cooling work in their data center, considering the additional space it consumes and the high price point?
We've found liquid cooling to be vastly more expensive than what we are doing, and I'm curious what others out there might be doing.
For us, we've found that high density air cooling (top-down delivery, a capped hot/cold aisle configuration, and overhead air extraction) can get us to a max density of about 10 kW per rack. We use a chilled water system for the additional HVAC needed to get us above the standard 5 kW max, but that chilled water feeds the air handlers themselves, not in-row rack cooling. The problem is that in-row chilled water cooling costs us a cabinet of floor space for every cabinet of higher density. For this to work (in the simple case), we would need to attain rack densities of at least 20 kW given the loss of floor space.
We use APC cabinets and infrastructure, and even they have said that if you try to go above 20 kW with liquid cooling it becomes problematic. Given these governing characteristics, in addition to the overwhelming cost, is anyone out there actually pursuing this in a colocation environment? To me it just doesn't seem to make sense.
Perhaps the cost needs to come down, but even then, what are you gaining if it is a 1-for-1 trade-off?
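To put numbers on the break-even point I'm describing, here's a minimal sketch of the floor-space math, assuming the simple 1-for-1 case above where every liquid-cooled cabinet gives up one cabinet position to an in-row cooler. The 10 kW air-cooled figure is ours; the row size is just illustrative.

[code]
# A sketch of the break-even math for in-row liquid cooling vs. high-density
# air cooling, assuming each liquid-cooled cabinet loses one cabinet position
# to an in-row cooler (the "1-for-1" situation described above).

AIR_KW_PER_RACK = 10.0   # our max with top-down air + capped aisles
ROW_SLOTS = 20           # hypothetical row of 20 cabinet positions

def air_row_kw(slots):
    # Every slot holds an IT cabinet.
    return slots * AIR_KW_PER_RACK

def liquid_row_kw(slots, kw_per_rack):
    # Half the slots are lost to in-row coolers, so only slots // 2 hold IT.
    return (slots // 2) * kw_per_rack

air_kw = air_row_kw(ROW_SLOTS)
for density in (15, 20, 25):
    liquid_kw = liquid_row_kw(ROW_SLOTS, density)
    verdict = "gain" if liquid_kw > air_kw else "no gain"
    print(f"{density} kW/rack liquid row: {liquid_kw:.0f} kW vs {air_kw:.0f} kW air ({verdict})")
[/code]

At exactly 20 kW per rack the liquid-cooled row only matches the total load of the 10 kW air-cooled row, which is why anything below that density is a net loss for us.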
The company I work for is starting to have some cooling issues in our server area. We have basically doubled our server equipment in the room without upgrading its air conditioning unit. Temperatures are getting really high (sometimes reaching 85F, which is just WRONG!) and we are trying to get it replaced.
Since my boss is the one making decisions I thought I would try to help him out a little bit with some of the information/calculations he needs.
My questions are:
1. When calculating watts for a redundant power supply do you double what the power supply is rated for? So on one Dell PowerEdge 2950 it shows that I have a 750W redundant supply. Would that be 1500 Watts?
2. Do you calculate your cooling needs for max wattage or do you calculate a percentage of what each power supply is rated for?
3. What percentage would you multiply your final total Watts by to accommodate possible future additions? 10%? 25%?
We currently have:
2x PowerEdge 2950 - 750W Redundant
1x PowerEdge 1950 - 670W Redundant
2x PowerVault NX 1950 - 670W Redundant
1x PowerVault MD3000
3x APC Smart-UPS 3000
1x APC Smart-UPS 1400
2x PowerConnect 6224
2x PowerConnect RPS-600
3x PowerConnect 5324
1x Cisco 2801
1x Cisco 2821
3x servers of unknown wattage, redundant power (all the same specs)
1x server of unknown wattage, redundant power
1x server of unknown wattage, single power supply
Plus a few other small devices like external modems, a KVM, and a small-to-mid-sized phone system.
I am not an electrical expert but I am calculating that it is going to be about 10,000 - 12,000 Watts?
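Not an electrician either, but here's a minimal sketch of how I'd run those numbers, under a few assumptions: a redundant pair is counted once at nameplate (both supplies share or fail over to the same load, so a "750W redundant" box still draws at most ~750W, not 1500W; so no, you don't double it); actual draw is derated to roughly 60-70% of nameplate; 1 W of IT load is about 3.412 BTU/hr of heat; and the total gets padded for growth. The wattages for your unknown boxes are placeholders you'd need to measure or look up.

[code]
# Rough cooling-load estimate from nameplate wattages -- a sketch, not an
# engineer's calculation. Redundant supplies are counted once, since the pair
# shares/fails over to the same load rather than doubling the draw.

WATTS_TO_BTU_PER_HR = 3.412   # 1 W of heat = 3.412 BTU/hr
BTU_PER_TON = 12000           # 1 ton of cooling = 12,000 BTU/hr

# (count, nameplate watts per unit); unknowns are placeholder guesses
equipment = {
    "PowerEdge 2950 (750W redundant)":    (2, 750),
    "PowerEdge 1950 (670W redundant)":    (1, 670),
    "PowerVault NX1950 (670W redundant)": (2, 670),
    "PowerVault MD3000":                  (1, 500),   # placeholder
    "Switches/routers/phones/misc":       (1, 1200),  # placeholder lump sum
    "Unknown-wattage servers":            (5, 500),   # placeholder
}

nameplate_w = sum(count * watts for count, watts in equipment.values())
actual_w = nameplate_w * 0.65       # typical draw is well below nameplate
design_w = actual_w * 1.25          # +25% headroom for future additions

btu_hr = design_w * WATTS_TO_BTU_PER_HR
tons = btu_hr / BTU_PER_TON

print(f"Nameplate total: {nameplate_w:,.0f} W")
print(f"Estimated draw:  {actual_w:,.0f} W")
print(f"Design load:     {design_w:,.0f} W  ~= {btu_hr:,.0f} BTU/hr  ~= {tons:.1f} tons")
[/code]

With placeholder figures like these, the estimate comes out well under your 10,000-12,000 W number, which is typical once you derate nameplates; sizing the new unit off measured draw (e.g. the load readings on your Smart-UPS units) rather than summed nameplates keeps your boss from buying twice the AC he needs. The UPSes themselves mostly pass their load through, so they add a few percent of inefficiency as heat rather than their full rating.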
I'm speccing out a box for small dedicated servers and am looking at the SC512 with a desktop board (the Intel D964GZISSL, since I don't have to cut anything). Should I be using active cooling since it's a desktop board, or will a large passive heatsink be sufficient? I'll be using Celeron Ds, probably a 356. Its power usage is quite low.
It looks like the heatsink on the motherboard might restrict airflow a bit.
We currently put 2U solid spacers on the front. As well, I run the switches with the ports facing forward and use a cable manager above and below the switches.
I can see from some of your pics in the other thread that you place the networking equipment in reverse. It depends on how the fans are oriented on the networking equipment, as I don't like to have fans facing the inlet side of the rack.
I am looking for small colocation space in Equinix NY2.
(There is another topic here [url] where the original poster was looking for dedi/colo in Equinix NY4; I joined that topic with the same request. But after doing my homework, I've adjusted the requirements: firstly, I need colo only, not a dedicated server; and secondly, NY2 is almost as good for my purposes as NY4. And apparently there are more providers offering colo there.)
If you know someone who has (or might have) colo space in Equinix NY2, please, could you recommend them here?
I've heard of an internet bandwidth service from Equinix called Equinix Direct. How is the quality of this service? Does anyone here have experience with it?
I've got a cabinet @ Equinix Ashburn & need to get another one. My understanding is they're full. Does anyone know if Equinix has a new facility there yet?
I need the VPS provider to have enough support resources to provide 24/7 contact and repairs when necessary, and good connectivity to the UK. So, a fully managed solution.
The Ashburn datacentres seem to have good connectivity to the UK.
Is anyone aware of the current availability of colocation space at Equinix in Ashburn, VA? We are currently trying to obtain some space with them but getting my dedicated sales contact to respond is like pulling teeth! Are they so full (and thus full of themselves) that we will have to operate on THEIR terms (if at all)? At this point we are soliciting quotes from other datacenters of known lesser quality just so we can get this deployment live...
I'm looking for 4-5 contiguous racks on the 5th or 6th floor of the Equinix Chicago facility.
I've already met with Server Central. Can anyone suggest anybody else who is in that facility? I know it might be a long shot to get that many together, but it doesn't hurt to ask.
I'm looking at colocating servers in the N. VA market. I've read numerous recommendations on this forum for the Ashburn Equinix facility. Since it's filling up quickly, I've also been looking at the SwitchandData facility in Reston. It appears to have more available space and lower prices. Does anyone have any experience with S&D, and what tradeoffs should I be concerned about in choosing an S&D facility instead of an Equinix one (in general, or specifically those two centers)?
I visited the colo space (1 cabinet) we obtained through an Equinix reseller. There was some confusion as initially we were going into DC2, but they put us into DC3 as that's where they had the space (they have a lot of cages in both DCs).
In the past, I had visited DC2 and it's clear the facility was purpose-built for Equinix. You can tell just by looking at it from the outside, but also inside.
Driving up to DC3 (on Chillum Place), I was first surprised to notice glass windows on the outside of the building (I was told they have reinforced walls inside of that).
Apparently the building was some other company's datacenter or offices, and Equinix then fitted their standard-build datacenter inside it. They also have different man-traps (like a rotating door) compared to DC2, raised flooring (which I was told is not used), and lower ceilings.
I drove around the DC3 building, and the other half of it appears to be some other company's datacenter (based on the generators on the roof). Any idea who that is?
Is DC3 the same quality as DC2? It didn't quite "feel" like the quality of DC2, but that's just an impression and not based on any empirical evidence. It's also a bit further out, while DC2 and its new "siblings" (DC4/5) are all adjacent to each other (on Filigree Court).
With the reseller we are using, most of their bandwidth in DC3 cross-connects to their network equipment in DC2, and that's where they peer. That's another thing that makes me feel like DC3 is quite secondary.
Are my feelings unfounded, or should I push our reseller to find a cabinet for us in DC2?
Besides contacting them directly and getting an overinflated price on 1Gb of bandwidth and 1U of space, is there anyone I can contact who can set me up with better pricing at the Ashburn location? I'm in Baltimore; Washington and Ashburn are about 30 or 40 minutes away from me. Something to take into consideration.
Does anyone have a good contact or contractor that they can recommend in the vicinity of Equinix Ashburn? Preferably someone who's done work in the facility before. We're in DC4 - which is building E I believe.
While Equinix has great staff, sometimes I just need simple things like servers unpacked and racked - things that I'd prefer not to pay $200/hr. for.
We just signed today to move our colo to Equinix Ashburn. I'll send out an email to our customers next week to explain the upcoming downtime for the move. We do SaaS for our own specialized software product, and all our customers are in universities, usually professors or administrative assistants.
These people are not technical, so saying something like "really good peering" means nothing to them. However, saying "the facility is so good that Google has servers there" is something they would understand. So, who are the tenants there that an average person would know? I realize there are multiple buildings all next to each other, but I don't think it will be necessary to denote by building. Last time I toured the facility (a while ago), I saw Google and Amazon there.
Does anyone know of any providers in 1950 Stemmons inside of Equinix with space less than a full cab? I only need 1/4 cab or less for a router and some switching equipment.
I am about to sign up for Equinix's colocation service in LA. I am just curious if anyone else is paying similarly outrageous cross-connect fees. They are charging $300 for Ethernet and $200 for DS3.