After running 6 or so dedicated servers purchased through several different resellers, my company decided to get a rack at the Chicago InterNap DC.
The quote we got was $3,400 per month, inclusive of the cabinet, usage-based 10/100 bandwidth, and cross connects.
Have a couple questions.
A) Is that price in the ballpark of where it should be?
B) Our quote states usage-based 100 Mb Ethernet (10 Mbps minimum): Tier 2 at $150/month, and $1,400 for the 10.00 Mbps base. Being new to this, I have no idea exactly how much bandwidth we can use before the "usage" fees kick in.
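If the tier pricing works the way usage-based commits usually do, the bill is the base fee plus a per-Mbps charge on billed usage above the commit. Here is a minimal sketch, assuming the $150 figure is per Mbps of overage and that usage is billed on a measured rate above the 10 Mbps commit (both are assumptions; the quote as written is ambiguous):

```python
# Hypothetical model of the quoted pricing: $1,400/month for a 10 Mbps
# base commit, plus an assumed $150 per Mbps of billed usage above it.
# The overage rate and billing method are guesses, not from the quote.

def monthly_bandwidth_bill(usage_mbps, commit_mbps=10.0,
                           base_fee=1400.0, overage_per_mbps=150.0):
    """Return the estimated monthly charge for a given billed usage."""
    overage = max(0.0, usage_mbps - commit_mbps)
    return base_fee + overage * overage_per_mbps

# Staying at or under the commit costs only the base fee:
print(monthly_bandwidth_bill(8.0))    # 1400.0
# A billed 15 Mbps would add 5 Mbps x $150:
print(monthly_bandwidth_bill(15.0))   # 2150.0
```

Under this reading, "usage" fees kick in only once your billed rate exceeds the 10 Mbps base; it would be worth confirming with the salesperson how that rate is measured.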
With all the high-power servers/blade servers, the 40A (@ 110V) power limit is way too small. I am wondering if there is any colo space targeted at high-density applications, e.g. with a 10 kW/cab limit from 60A @ 208V power drops. Does anybody know of such high-density colocation space? East coast is preferred.
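For reference, the gap between those two drops is straightforward arithmetic: circuit capacity is volts times amps, and the usual practice (per the NEC continuous-load rule) is to load a circuit to at most 80% of its rating. A quick sketch of the figures mentioned above, with the 80% derating as an assumption about how the facility rates its drops:

```python
# Usable continuous power for a power drop, assuming the common 80%
# continuous-load derating (an assumption; facilities may rate differently).

def usable_kw(volts, amps, derate=0.8):
    """Usable continuous power in kW for a given drop."""
    return volts * amps * derate / 1000.0

print(usable_kw(110, 40))   # 3.52 kW -- the limit complained about
print(usable_kw(208, 60))   # 9.984 kW -- roughly the 10 kW/cab target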
I see that FDC is offering $12 per Mbit/sec for InterNAP bandwidth. I'm really interested in this offer. Is anyone colocating servers at FDC and using their InterNAP bandwidth? If so, could you please share some reviews and your experience?
My colo is charging about $80/Mbit, and they use InterNAP. Is that reasonable? This is on the west coast, in California.
That brings me to my other question: how do you know what's a good network? How does Hurricane Electric compare to InterNAP?
fixedorbit.com shows HE in the top 10 list with a lot of peering. I don't see InterNAP on that list at all! Does that mean it's not as good?
The more peering, the better? (I guess we assume the network provider isn't overselling and isn't cramming a lot of customers into a single port, etc...)
I am getting my quote back Tuesday but need a little bargaining power with these guys...
Oakland, CA datacenter
40 Mbps, 20A, 42U rack.
What price range should I be looking at here? How much per Mbps?
Only info I've seen is from 2003, when people were saying $200/Mbps. Obviously prices have come WAY down. I've seen people on here reselling Internap bandwidth for $12/Mbps, but they might have bought a huge commit.
We are in the process of starting a new project for a client and we are trying to decide which network to place it on.
We have a choice of a Level3/Time Warner mix or pure Internap. Obviously the Internap bandwidth is a bit more expensive, but since this customer's website serves an international community, we are thinking that Internap bandwidth would be well worth the cost.
What are the advantages of using Internap? How is the network performance? We've set up a machine on the Internap network and have begun running tests, but I would like to hear from people who have direct experience with Internap bandwidth.
As we finish our migration plans to Cisco OER, I would like to get everyone's thoughts on the low-latency "brand name" Internap bandwidth.
Do you think the high-priced brand name is going to hold up now that Cisco has finally released OER on what a large number of datacenters use as their primary core switch? In my eyes, the FCP and the Avaya/RouteScience platform just lost a lot of value. The OER product looks very complete and works excellently in testing; the final verdict will be in what the platform actually does.
If you are wondering, Cisco OER information can be found here: [url]
Who are the people/companies with a good reputation in the WHT community who provide Internap or Peer1 colo? It seems it's mostly in LA or NY; I know both Peer1 and Internap have a presence in other places, so is there anyone reselling out of anywhere else?
Pure Internap or Pure Peer1 hosts only please. The only one I know of right now is H4Y.us
I have been reading quite a bit lately about the Internap FCP. I am wondering how much it actually improves network performance and how it compares to BGP4. We currently use BGP4, but are considering using a data center with the Internap FCP for a project for a client.
I am looking for reviews from others who have experience with the Internap FCP and its performance. How does it compare to a network using BGP4? I know that the FCP uses more intelligent routing than BGP, but how big of an improvement does it make?
I wonder how it affects network performance. Will the network be faster? By how much? Normal routers can choose the best routes too, correct?
My current hosting company - hostmysite.com - offers two Windows (IIS) hosting plans that are almost identical, except that one supports ASP.NET and the other supports only old-fashioned ASP. The former is $20/month and the latter is $12/month.
Why would adding support for .NET increase the price by 67%? I run IIS on my network here, and it's not obvious to me why .NET per se would increase cost, server load, etc. by anything like that.
I have developed a forum. I don't want it to be dependent on any commercial interest, so I want to at least look into how much it will cost to set up and become my own host.
One of my clients just asked me if $4.50 per GB of transfer is a lot (as they just found out that's what their web host is charging them). I told them yes, because that seems ridiculously high to me, but I'd like to give them a ballpark figure for what that should cost. I can't find any hosts that charge per GB of transfer though. Any ideas what that should cost?
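One rough way to sanity-check a per-GB rate is to convert it to an equivalent per-Mbps cost, since colo bandwidth is usually quoted per sustained Mbps. A sketch of the conversion, assuming a 30-day month and decimal GB (both assumptions; the exact figures vary slightly with the month length):

```python
# Convert a per-GB transfer price to an equivalent cost per sustained
# Mbps over a 30-day month. 1 Mbps sustained moves 10^6 bits/s for
# 2,592,000 seconds, i.e. 324 GB (decimal).

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def gb_per_mbps_month():
    """GB transferred by 1 Mbps sustained over a 30-day month."""
    return 1_000_000 * SECONDS_PER_MONTH / 8 / 1e9

def per_gb_to_per_mbps(price_per_gb):
    """Equivalent monthly cost of 1 sustained Mbps at a per-GB rate."""
    return price_per_gb * gb_per_mbps_month()

print(gb_per_mbps_month())         # 324.0 GB per sustained Mbps-month
print(per_gb_to_per_mbps(4.50))    # 1458.0 dollars per sustained Mbps
```

By that measure, $4.50/GB works out to roughly $1,458 per sustained Mbps per month, which is far above the per-Mbps rates being discussed elsewhere in this thread, though shared hosts charging per GB rarely expect sustained usage.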
I own a few servers and am looking to buy a cPanel license. The only place I could find is $43/month, but I see many providers offering it at a much lower price. What is the cheapest price I can get one for, and where can I get it?
What's the going monthly rate on a 10 Gbit commit from the various providers (OC-192)? I realize there are regional differences; I'm just ballparking.
Thinking of putting together an iSCSI box with 14 SATA II 750s, a 3ware SATA controller (RAID 6), and an Intel quad-port gigabit card ganged together for 4 Gb transfer, tying it all together with Open-E iSCSI or the DSS module.
Anyone done something similar with good (or bad) results? Thinking of using this primarily for hosting web sites, as well as some storage for a mail server and some databases. The servers run RAID 1 and use the MS iSCSI initiator. I have a VLAN set up just for iSCSI traffic on my 48-port gigabit switch.
Are TOE cards better to have, or is the MS initiator good enough? I plan on using the second NIC on the servers solely for iSCSI transfer.
I run a small cluster (5+) of servers and would like to move them behind a dedicated switch with my own dedicated bandwidth. I expect my bandwidth usage to be around 20 Mbps, measured at the 95th percentile (greater of incoming or outgoing bandwidth). I have been quoted a price by my supplier, but finding it rather high, I wanted to ask users here what an average/reasonable cost for 1 Mbps should be, assuming the servers are managed, the bandwidth is multi-tiered, and the service is good.
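For anyone unfamiliar with how the 95th-percentile figure above is produced, the common scheme is: sample the greater of in/out rate every 5 minutes for the month, sort the samples, discard the top 5%, and bill on the highest remaining sample. A minimal sketch of that calculation (the 5-minute interval and discard convention are the usual ones, but providers vary):

```python
# 95th-percentile billing sketch: sort the rate samples, throw away the
# top 5%, and bill on the highest sample left. Short bursts that occupy
# less than ~5% of the month therefore do not raise the bill.

def billable_95th(samples_mbps):
    """Return the 95th-percentile value of a list of rate samples."""
    ordered = sorted(samples_mbps)
    # Index of the highest sample after discarding the top 5%.
    idx = int(len(ordered) * 0.95) - 1
    return ordered[max(idx, 0)]

# 100 samples: mostly ~18 Mbps with a few short bursts to 60 Mbps.
samples = [18.0] * 95 + [60.0] * 5
print(billable_95th(samples))   # 18.0 -- the bursts fall in the free 5%
```

This is why a "20 Mbps at 95th percentile" cluster can peak well above 20 Mbps for short periods without changing the bill.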