I want to replace my rather basic Netgear switch with a QoS-aware one, such as Dell's PowerConnect 2708. The switch itself is not a bottleneck; however, upstream is a switch that is, and I don't want to replace that one.
Would the backlog of packets trickle down to the QoS-aware switch and enable it to prioritise the traffic passing through it effectively, even though the bottleneck is further upstream?
Response time from my web server (running LAMP) was really slow this evening. mpstat showed nothing unusual: CPU hovered under 5%, memory usage around 25%, iowait minimal. The MySQL processlist was mostly idle, and no slow queries were logged. The only other thing I can think of is the connection from the host itself? (It's not my local connection, as multiple users complained about the slow performance.)
Concurrent users were higher than average, but shouldn't that manifest itself as higher CPU utilisation?
My question is: is there something besides network latency I could be looking at? And if it is network latency, what's the easiest way for me to confirm it? A simple ping?
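For what it's worth, here's roughly the kind of check I was picturing (a small sketch; the hostname is a placeholder for my server, and it times the TCP handshake and a full HTTP request separately, since a plain ping can look fine even when the web stack is slow):

```python
# Rough sketch of checking latency from a client's point of view: time the
# TCP handshake and a full HTTP request separately. A plain ping only measures
# ICMP round-trip and can look fine even when the LAMP stack is slow.
# The hostname is a placeholder.

import socket
import time
import urllib.request

HOST = "example.com"   # placeholder for the web server
PORT = 80

# TCP connect time ~ network round-trip + accept latency
start = time.time()
sock = socket.create_connection((HOST, PORT), timeout=10)
connect_ms = (time.time() - start) * 1000
sock.close()

# Full HTTP request time ~ network + Apache/PHP/MySQL time
start = time.time()
urllib.request.urlopen(f"http://{HOST}/", timeout=30).read()
request_ms = (time.time() - start) * 1000

print(f"TCP connect: {connect_ms:.1f} ms, full request: {request_ms:.1f} ms")
# If the connect time is high too, the problem is likely network/host
# connectivity; if only the full request is slow, it's more likely the stack.
```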
I currently have a dedicated server, which is hosting several websites. I'm happy with the service I'm getting, but I'm trying to save money. I'm paying $120/month for the dedicated server. Spending half of that each month would be great.
Right now, the websites are either static or simple database-driven sites without much traffic. My server load averages are pretty close to 0.01, so I would think a VPS would be fine for my needs. However, I may host a site in the future that is database-driven and uses Ruby on Rails. It would probably have 10-20 users online at any given time, and maybe several hundred subscribers total.
Would a VPS still work in this instance, or should I stick with a dedicated machine?
We have a 1 Gb/s channel. We want to connect it to a switch and then to two routers: the first as the main router, and the second as an emergency router that will take over if the first one dies.
I need a basic L3 switch for maybe 25 Mbps that will hopefully handle up to 50 VLANs and that will not require me to hire someone to configure it.
As much as I like Cisco, that rules them out.
The reason I'd like a Layer 3 switch is so that I can run my backups and inter-server transfers without adding to my bandwidth bill. Also, VLANs are a critical requirement, as I have a lot of customers with root on their managed servers.
So I am looking at HP [gasp] switches. How "easy" is the web-based configuration widget? [I'm an advanced Unix admin, but networking is a mystery to me.]
This is a starter switch, and once I have a full cab of servers I'll be able to spend $7K on a pair of 3560s and hire someone to configure them for me ... but until then, what can I get to meet my requirements?
This week connectswitch's service has not been that good. First they restarted the node without prior notice and our VPS was down for 7 hours. And now, even though we buy our cPanel license through them and have already paid them for it, they haven't paid for it themselves, so the license has expired.
I am looking at picking up a switch to mess around with at home. I found the following within driving distance but have no idea which one will give me more up-to-date, hands-on experience. Any feedback is greatly appreciated.
Used Cisco WS-C5509 chassis with power supply (34-0870-01) and fan (WSC5509FAN)
Cisco WS-X5530-E2 Supervisor Engine III module
Cisco WS-U5537-FETX 4-port 100BASE-TX uplink module
Cisco WS-X5234-RJ45 switch modules x 8
$160 each.
Cisco WS-C5500 chassis with power supply (34-0773-03)
Cisco WS-X5550 Supervisor Engine III G-series
WS-X5234-RJ45 switch modules x 11
For $200
Cisco WS-C5505 chassis
Cisco WS-X5530-E2 Supervisor Engine III module
Cisco WS-U5533-FEFX-MMF Supervisor Engine III uplink module
Cisco WS-X5225R switch modules x 2
What is the purpose of making the switch? If I were to get "unlimited/unmetered" shared hosting with cPanel, how is that different from getting a VPS with cPanel?
Other than handling large amounts of traffic, what is the purpose?
We have a small hosting company (currently 24 racks) that we are expanding to hold 100 racks. We have several 3640 series routers behind a 7200 series router (our edge router) that feed into numerous 2950 switches and 515 & 525 PIX firewalls, then into the racks, with customer-supplied switches within each rack. I want to replace all the 3640 routers and 2950 switches with a 6500 series switch. The only routing we do within the 3640s is subnet routing to the switches, which makes up an individual network for each customer. My goal is to use the 6500 switch to limit bandwidth on each port feeding a customer and to eliminate all but the 7200 router and the 2950 switches. Does anyone know of a reason or reasons this would not work, or whether it's just a bad idea? I'm looking for pros and cons.
Does anyone know of a fairly low-cost dual power supply Ethernet switch? Nothing fancy is needed, just a simple 12-24 port switch that has redundant power.
Our router and four little servers all have dual power supplies. Two big UPS units in a redundant setup would work great for us. The only weak link in the setup is the switch.
I just bought 2 Gbit of dedicated bandwidth for me and my customers. This is the switch the DC gave me. I know it is a 24-port switch that can handle up to 4 Gbit of bandwidth, and that you can give each port its own dedicated bandwidth.
But this is my question: off this switch, can I offer metered bandwidth, like a 2000 GB allowance?
Also, how would I offer unmetered bandwidth? Hook a cheap Linksys up to it and limit the bandwidth on the port the Linksys is plugged into?
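For rough context, here's the back-of-the-envelope arithmetic behind those numbers (nothing switch-specific; it just converts a monthly transfer cap into a sustained rate and vice versa, assuming a 30-day month and decimal gigabytes):

```python
# Rough arithmetic: what a 2000 GB monthly transfer cap looks like as a
# sustained rate, and how that compares to a flat rate-limited port.
# Only the numbers from the post above; "month" assumed to be 30 days.

GB = 10**9                       # decimal gigabytes, as most DCs bill
SECONDS_PER_MONTH = 30 * 24 * 3600

monthly_cap_bytes = 2000 * GB
avg_bits_per_sec = monthly_cap_bytes * 8 / SECONDS_PER_MONTH
print(f"2000 GB/month ≈ {avg_bits_per_sec / 1e6:.1f} Mbit/s sustained")   # ~6.2 Mbit/s

# The other direction: a port capped at 10 Mbit/s running flat-out all month
port_rate_mbps = 10
max_transfer_gb = port_rate_mbps * 1e6 / 8 * SECONDS_PER_MONTH / GB
print(f"A {port_rate_mbps} Mbit/s port maxed for a month ≈ {max_transfer_gb:.0f} GB")  # ~3240 GB
```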
I was just wondering what switch everyone would recommend for running a back-end network. We plan to push mainly backup and management traffic over this network. The idea is to have a NAS box connected at 1 Gbit/s and all of the servers at 100 Mbit/s backing up to it.
We currently use Cisco Catalyst 2960s to connect the servers to the front end, so it would make sense to use 2960Gs for the back end to keep the overall management of things simple. There is, of course, quite a big price difference between a standard 2960 and a 2960G.
I'm helping one of my clients build out their network, but we're still green when it comes to the switch market, so I decided to get some input :-)
Pretty much, we need the following:
- VLAN support (standard thing)
- Per-IP accounting (sFlow/NetFlow)
- Multiple uplinks and the ability to segment?
Pretty much, we want to be able to separate our network so that we can use cheaper providers for high-bandwidth usage and keep the other side for game servers and things like that.
Now, I'm thinking that maybe it would be better to BGP the two and simply separate clients by their IP space. My next question: that sounds pretty straightforward, but can we control BGP on a small number of IPs? Say we have a user with 1-2 IPs for a single game server; can we control it so that IP only gets Provider #1 in its transit?
I've been checking switch models and have found both the HP 2848 and the Foundry FES4802. Both are within the same price range, which is nice, but the Foundry seems to offer IPv6 and Layer 3.
I want to do a bit of colocation, and even though I can pay my way out of this, I might be able to save a lot each month by doing it myself.
I want to be able to get a graph of the monthly bandwidth up/down for each IP, and it would be nice to be able to see how much of my 100 Mbit line the servers use.
It should be reliable, so it won't be a weak point in the network, and it should be easy to use.
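To be clear about what I mean by the graphing, here's a minimal sketch of the counter arithmetic I have in mind (it samples per-interface counters from /proc/net/dev on a Linux box just for illustration; in practice the per-IP numbers would come off the switch via SNMP or sFlow, but the math is the same):

```python
# Minimal sketch of the counter math behind "graph monthly up/down and see
# how much of the 100 Mbit line is used". It samples byte counters from
# /proc/net/dev (per interface, Linux); per-IP counters would come from the
# switch via SNMP/sFlow instead, but the delta arithmetic is identical.

import time

LINE_RATE_BPS = 100_000_000      # the 100 Mbit line mentioned above
INTERVAL = 300                   # sample every 5 minutes

def read_counters():
    """Return {interface: (rx_bytes, tx_bytes)} cumulative counters."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:            # skip the two header lines
            iface, data = line.split(":", 1)
            fields = data.split()
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

monthly = {}                     # {interface: [rx_bytes, tx_bytes]} accumulated
prev = read_counters()

while True:
    time.sleep(INTERVAL)
    cur = read_counters()
    for iface, (rx, tx) in cur.items():
        prev_rx, prev_tx = prev.get(iface, (rx, tx))
        d_rx, d_tx = rx - prev_rx, tx - prev_tx   # bytes in this interval
        totals = monthly.setdefault(iface, [0, 0])
        totals[0] += d_rx
        totals[1] += d_tx
        util = (d_rx + d_tx) * 8 / INTERVAL / LINE_RATE_BPS * 100
        print(f"{iface}: {totals[0]/1e9:.2f} GB in / {totals[1]/1e9:.2f} GB out "
              f"this month, {util:.1f}% of line right now")
    prev = cur
```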
What are the smaller shops doing for switch redundancy? We have all our machines on dual Com Ed feeds, but most switches in the $1k-$3k range only have one power supply. We recently had a power strip go flaky, and of course the switch was plugged into it.
Is the best solution getting two switches and hooking each machine up to both? How hard is that to set up in Linux? I've used keepalived for whole-machine failover but not for network failover.
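For the Linux side, the usual answer is the kernel bonding driver in active-backup mode (one cable to each switch) rather than anything hand-rolled. Purely to illustrate the failover logic, though, here's a crude sketch that watches carrier state and swaps the default route; the interface names and gateway are made up:

```python
# Crude illustration of NIC failover on Linux: watch carrier state on the
# primary interface and move the default route to the backup if it drops.
# In practice you'd use the kernel bonding driver (active-backup mode)
# instead; eth0/eth1 and the gateway below are hypothetical.

import subprocess
import time

PRIMARY, BACKUP = "eth0", "eth1"      # one cable to each switch
GATEWAY = "192.0.2.1"                 # example gateway address

def link_up(iface):
    try:
        with open(f"/sys/class/net/{iface}/carrier") as f:
            return f.read().strip() == "1"
    except OSError:
        return False

active = PRIMARY
while True:
    if active == PRIMARY and not link_up(PRIMARY) and link_up(BACKUP):
        subprocess.run(["ip", "route", "replace", "default",
                        "via", GATEWAY, "dev", BACKUP], check=False)
        active = BACKUP
    elif active == BACKUP and link_up(PRIMARY):
        subprocess.run(["ip", "route", "replace", "default",
                        "via", GATEWAY, "dev", PRIMARY], check=False)
        active = PRIMARY
    time.sleep(1)
```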
I'm going to be setting up 2 machines in my cabinet shortly, neither one having very much traffic at all (maybe 4,000 visitors and 100 GB of traffic per month between them).
Once they are set up, I plan to move in 2 more machines, which are currently at another data center where I'm leasing space. These two are higher volume (I'm unsure about the number of visitors), but I would guess about 20 GB of daily transfer (occasionally spiking at about 6.0 Mbps) between the two of them.
The next piece of the puzzle I need to figure out is what switch to purchase (and ultimately why). My plan was to use just a 16-port D-Link to begin with, but I'm unsure whether this will be sufficient.
Switch And Data, a provider of network neutral data centers and colocation services, just had an IPO yesterday. They filed to sell about 1/3 of their shares priced at the high end ($17). I was wondering if anyone here had personal experience with the company, and could offer some insight on how they stack up versus Equinix and other competitors.
Switch And Data trading symbol: SDXC Equinix trading symbol: EQIX
Equinix has been up a lot this year and has a huge market cap of $2.3 billion ($80/share), with revenues of $269 million, gross profit of $63.7 million, and net income of -$49 million.
I calculated that Switch And Data has around 35.9 million total shares, which puts its current market cap at around $720 million ($20/share). I couldn't find any 2006 numbers, but in 2005 they had $105 million in revenue and -$11.3 million in net income.
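Just to show my math (a quick back-of-the-envelope sketch using only the figures quoted above; nothing here is new data):

```python
# Back-of-the-envelope check of the figures quoted above.
sdxc_shares = 35.9e6                     # estimated total shares
sdxc_price = 20.0                        # current price per share
sdxc_market_cap = sdxc_shares * sdxc_price
print(f"SDXC market cap ≈ ${sdxc_market_cap/1e6:.0f}M")        # ≈ $718M, i.e. ~720M

# Rough price/revenue multiples using the most recent revenue mentioned
# for each company ($269M for EQIX, $105M for SDXC from 2005):
eqix_market_cap = 2.3e9
print(f"EQIX price/revenue ≈ {eqix_market_cap / 269e6:.1f}x")  # ≈ 8.6x
print(f"SDXC price/revenue ≈ {sdxc_market_cap / 105e6:.1f}x")  # ≈ 6.8x
```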
How comparable are these two companies? Does SDXC deserve to trade at a comparable level to EQIX? What do the experts at WHT think?
Ok, I have a ProCurve 2900 switch. At the moment, all my servers are just connected to it; no VLANs are configured on the switch since I am not selling any dedicated servers.
So I just connect my devices to the switch and configure them with one of the free IPs available.
But when I run a traceroute, I don't see the switch. Not really an issue that I am concerned about; I'm more curious than anything.
I've searched on this site, and there are quite a few posts about selecting appropriate switches for different applications. I'm looking for similar advice from the networking gurus out there.
I need a reliable switch (probably only need L2?) to connect 6 Sun Fire X2100 servers in a colo rack. Each server has two network ports. I use one for the public addresses and the second for a private management network.
In an ideal world, I'd love to run the servers diskless, and consolidate all the drives into a separate, dedicated storage server running Solaris and ZFS. I'm guessing that I would need GigE ports on the switch to get maximum performance out of an iSCSI SAN?
That might push the price much higher though.
I'm currently running 3 of these servers with a Catalyst 2912XL, and I've been very pleased with the reliability of this switch (and it was super cheap on eBay). It's set up with two VLANs (one for the public net, one for the private net). The thing has been running for about a year without a single reboot.
While Cisco seems to be the favored brand, I'm considering a few other choices as well:
Foundry EdgeIron 2402CF
Extreme Networks Summit 200-24
Cisco Catalyst 2950-24
Cisco Catalyst 2960-24
Just wondering what the pros and cons of these various units might be. And do I need a separate (GigE?) network for iSCSI, or could it function on the same interfaces used for the management (and MySQL) traffic?