Basically I registered with a new host. They sent me the details, which obviously include the IP address. I tested the IP address on just-ping.com and it came back with all of the probes showing between 80% and 100% packet loss. Surely this is not normal, is it? I haven't moved my domain yet, but it doesn't look good, does it? Should I cancel?
root@server [~]# tail -f /var/log/messages
Jun 10 14:14:49 server kernel: printk: 56 messages suppressed.
Jun 10 14:14:49 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:14:54 server kernel: printk: 59 messages suppressed.
Jun 10 14:14:54 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:14:59 server kernel: printk: 85 messages suppressed.
Jun 10 14:14:59 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:15:04 server kernel: printk: 90 messages suppressed.
Jun 10 14:15:04 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:15:09 server kernel: printk: 58 messages suppressed.
Jun 10 14:15:09 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:15:14 server kernel: printk: 70 messages suppressed.
Jun 10 14:15:14 server kernel: ip_conntrack: table full, dropping packet.
Jun 10 14:15:19 server kernel: printk: 193 messages suppressed.
Jun 10 14:15:19 server kernel: ip_conntrack: table full, dropping packet.
Anyone know what this is about?
Using CentOS / cPanel
Linux server.domain.com 2.6.9-67.0.15.ELsmp #1 SMP Thu May 8 10:52:19 EDT 2008 i686 i686 i386 GNU/Linux
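Those messages mean the kernel's connection-tracking table (used by iptables stateful rules) is full, so new connections are being dropped; on a busy or SYN-flooded cPanel box the default limit is easy to hit. A minimal sketch of the usual remedy on a 2.6.9-era kernel; the 131072 value is an assumption, size it to your RAM (each entry costs a few hundred bytes):

```
# check how close you are to the ceiling:
#   cat /proc/sys/net/ipv4/ip_conntrack_max
#   wc -l /proc/net/ip_conntrack
#
# /etc/sysctl.conf -- raise the limit persistently, then run `sysctl -p`
net.ipv4.ip_conntrack_max = 131072
```

If the entry count keeps climbing to whatever limit you set, that points at a flood or attack rather than legitimate load.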
I have a dedicated Windows 2003 server at a colocation facility that I use for game server hosting. Over the past 7 months, packet loss has become horrible, with random periods of massive lag. My host says it's something on my end. I use a firewall with SPI enabled. Could that be causing it?
The strange thing is, for the first few months my server was at that colo they only had around 40 other servers on a single OC-192 pipe, and I never had packet loss despite having the same SPI firewall. Now they have over 300 servers on the same OC-192 pipe. Could the packet loss be caused by my SPI firewall, or by them overloading the network with servers?
Computer A (GigE) → Switch 1 (GigE) → Media Converter → (fiber run) → Media Converter → (GigE) Switch 2 → Computer B
We have a cross connect in our data center that uses media converters (fiber) to regular 1000FD on each end.
Each end of the 1000FD handoff is plugged into port 1 of the 3870s (switch 1 and switch 2).
Pinging from Computer A to Computer B we receive a 50% packet loss. Pinging from Computer B to Switch 1, no packet loss. Pinging from Computer A to Switch 2, 50% packet loss.
Looking at the interfaces, port 1 on each switch auto-negotiates to 1000FD; however, flow control shows as off.
We asked our data center to run tests on the media converters and fiber runs, and everything comes back 100% fine. Has anyone seen a weird issue like this before, with 3Com switches not playing nicely with media converters?
I have no clue what's going on, and our data center says the fiber run/media converter is fine... [url]
I have SmokePing monitoring my game servers, and in the little time it has been running, all my game servers have been showing an average of 4 to 10% packet loss. Are there any tweaks I can run on the server to reduce packet loss? (Registry modifications, etc.)
I downloaded a TCP tweak program called "TCP Optimizer". Is it safe to run on a Windows 2003 Server OS?
The colo connection is an OC-192 and I have a 100Mbit Ethernet card.
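For what it's worth, TCP Optimizer mostly just writes values under the Tcpip service key in the registry, so it does run on Windows Server 2003; but window-size tweaks address throughput, not packet loss. If SmokePing shows 4-10% loss, the problem is on the wire or in the colo's network, and no registry setting will fix that. For reference, this is the kind of thing such tools set (a sketch; the values are illustrative, not recommendations):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"Tcp1323Opts"=dword:00000001
"TcpWindowSize"=dword:0003ebc0
```

Take a registry backup before applying anything a tweak tool generates.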
Recently I have been having this problem with two high-traffic servers on two different networks.
Both servers are quad-core Xeons with CentOS 4.5 x86_64, and they are on 100Mbps full-duplex networks. The software stack is Nginx + Apache + MySQL; the control panel is DirectAdmin.
The servers serve lots of static files and some PHP scripts.
When the servers start pushing near or over 30Mbps, there is packet loss when I ping them: around 5% loss, and the more bandwidth they push, the more packet loss. I have checked all the log files and I don't see any unusual errors.
Server load is fine. The NICs are in 100Mbps full-duplex mode.
The datacenters claim the networks are fine and that all the other servers on the same switches show no packet loss.
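When loss scales with throughput on a 100Mbps full-duplex link, the usual suspects are a duplex mismatch or NIC-level errors, and the NIC's own counters (`ethtool -S eth0`, or the errors/dropped fields of `ifconfig`) will show it before the datacenter admits anything. A small sketch that pulls those counters out of `ifconfig`-style text (the sample line is made up; on the real box you would feed in the output of `ifconfig eth0`):

```python
import re

def nic_error_counters(ifconfig_text: str) -> dict:
    """Extract error/drop counters from `ifconfig`-style output."""
    return {name: int(val) for name, val in
            re.findall(r"(errors|dropped|overruns|frame|carrier):(\d+)", ifconfig_text)}

# made-up sample line for illustration
sample = "eth0  RX packets:184233 errors:312 dropped:57 overruns:0 frame:312"
print(nic_error_counters(sample))
# -> {'errors': 312, 'dropped': 57, 'overruns': 0, 'frame': 312}
```

Non-zero and climbing error counters point at the NIC, cable, or negotiated duplex, i.e. your side of the handoff; all-zero counters under load push the blame back toward the switch.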
I'm trying to find out why a single interface is causing packet loss on my entire network.
The network consists of four 2924s trunked to a 3550. I have about 20 VLANs and a single default route for all traffic to my uplink.
The network is perfect until I enable a single server. After I issue a 'no shut' on the interface, packet loss is anywhere from 5% to 20% for anything going through the 3550, or even for pings from the 3550 to other switches or the uplink.
Here's the statistics/settings of the interface after 1 minute of activity:
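One classic cause of exactly this pattern is that server's NIC mis-negotiating with its switch port, which then bleeds errors and retransmissions into everything crossing the trunk; it is worth ruling out by pinning speed and duplex on both the switch port and the server instead of trusting auto-negotiation. A sketch in IOS syntax (the interface number is an assumption):

```
! on the 2924 port facing the server
interface FastEthernet0/12
 speed 100
 duplex full
```

If the port counters show late collisions or runts, a duplex mismatch is all but confirmed; if instead you see huge broadcast/multicast counts after the 'no shut', look for a bridging loop or a misbehaving NIC driver.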
I'm looking for a tool that can measure how much packet loss we are having on a given server by looking at the packets being sent from it. I.e., something that looks at all TCP/80 connections and measures how many packets and bytes are being retransmitted vs. actual packets and bytes sent.
This document explains it:
[url]
We need this to measure network performance of different hosts where we have dedicated servers. This would be a good way of measuring performance with the actual data of our users.
Does anyone know of such a tool? I.e., something that can say:
2532 packets/second - 132 retransmits/second (4.8%)
25.43 Mbps total traffic - 24.84 Mbps actual data sent - 0.59 Mbps retransmits
Even better if it can then break it out by IP prefix, like
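I don't know of an off-the-shelf tool that prints exactly that line, but the raw numbers are in `netstat -s` (segments sent vs. segments retransmitted), so a report in roughly that format can be derived by diffing two snapshots. A minimal sketch, with the caveats that this is box-wide TCP rather than per-port-80, and per-prefix breakdown would need packet capture (tcpdump/tshark) instead; the sample snapshots below are made up:

```python
import re

def tcp_counters(netstat_s: str) -> tuple:
    """(segments sent, segments retransmitted) from `netstat -s` text.

    Older net-tools builds print "send out" and "retransmited", so the
    patterns tolerate both spellings.
    """
    sent = int(re.search(r"(\d+) segments sen[dt] out", netstat_s).group(1))
    retrans = int(re.search(r"(\d+) segments retransmit?ted", netstat_s).group(1))
    return sent, retrans

def retransmit_report(before: str, after: str, interval_s: float) -> str:
    """Format the per-second delta between two `netstat -s` snapshots."""
    s0, r0 = tcp_counters(before)
    s1, r1 = tcp_counters(after)
    sent = (s1 - s0) / interval_s
    retrans = (r1 - r0) / interval_s
    pct = 100.0 * retrans / sent if sent else 0.0
    return f"{sent:.0f} packets/second - {retrans:.0f} retransmits/second ({pct:.1f}%)"

# made-up counter snapshots taken one second apart
t0 = "1000 segments send out\n10 segments retransmited"
t1 = "3532 segments send out\n142 segments retransmited"
print(retransmit_report(t0, t1, 1.0))
# -> 2532 packets/second - 132 retransmits/second (5.2%)
```

On a real box you would capture the two snapshots with `netstat -s` a fixed interval apart and feed them in.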
I recently switched over to SoftLayer for dedicated hosting and the servers are great. However, we've been getting hit on and off with massive (50-80%) packet loss, which has been crippling our performance and causing all sorts of problems.
I put in a support ticket and they linked me to the Internet Health Report website, said it was due to one of their bandwidth providers (I think Global Crossing) and not their internal network, and told me to be patient as it could take time to resolve.
Are any other SoftLayer customers going through this? Is this an unusual occurrence? I feel like if it were really one of their partners, it would be affecting a lot of their customers and would be a high-priority issue, right?
I'm kind of stuck on what to do; I just invested a lot of energy into moving content onto these new servers and am unsure whether to wait it out or start finding a new company. This kind of packet loss is really unacceptable...
I saw that some providers offer high-end Xen VPSes with 2-4 GB of memory, and I am wondering whether such a high-end VPS has any advantage over a regular dedicated server, supposing both have the same price.
What kind of web site is suitable for such a high-end Xen VPS? Say, database-intensive, or disk-I/O (read/write) intensive?
I have a site that uses extreme amounts of bandwidth. I checked some of the popular companies like ServerBeach and SoftLayer, and right now I'm leaning towards ServerBeach because they are cheaper. Are they a good company to go with? Let me know if you have any other recommendations.
I'm working on launching this online store for a poster designer, and we're becoming more and more aware that we need a really robust and fast server. This site is looking at extremely high levels of activity whenever this designer posts a new poster. We're talking 1700 people surfing the store (downloading med-high resolution poster images) and 300 posters sold in 16 seconds kind of thing.
So we need really robust hosting that works with PHP5 and MySQL.
My previous go-to hosting provider was Lunarpages, but their customer service has gone down the crapper, and I've just about had it with them. My main questions are: Should I be looking into getting a dedicated server, or are there hosting companies that can handle this kind of traffic on a shared server? I don't have experience administering a server, so if we got a dedicated one we would have to pay the host to do at least some of the setup/administration, I would assume? And dedicated server or not, what's a hosting company with really good customer service, where we can be assured of getting somebody knowledgeable without having to wait on hold for 20 (or even 10) minutes?
I have a site that is eating up my server resources and I need to know the best solution. I'm thinking of getting another server just for MySQL, but I don't know what specs that server would need to handle the current traffic/database load and keep the site running smoothly without slowing to a snail's pace.
An alternative is to get another server just for serving the videos and leave the database and HTML on the current server. This is where I'm stuck; I don't know which route to take.
I've attached screenshots of top and bandwidth usage per day. Hopefully with this information you could tell me if I need another server or if there are any things I can do to the current server to help things move faster.
As a non-tech but looong-term sufferer in the hosting biz, both as a consumer and (VERY briefly) a supplier (strictly for masochists, IMHO), over the years I found it was only specific individuals, not the hosting company, that were the key point.
Good support people are needed more today than ever before. It is sooo complex, as follows:
We are looking at leaving our unsupported, VERY small dedicated server after two years of frustration trying, without success, to get a secure, reliable system going. A mixed bag of problems: us being non-geeks, OS problems, server problems.
We are looking at going back to a VPS in the light of amazing claims being made for them today.
A fraction of the cost of Dedicated and yet *claims* of astounding capability. CLAIMS...
That's why I'm here today with you. I need help sorting the facts out. Can a VPS that is "burstable" for RAM, with amazing pipe access and volume allowance, and the new concept of "sorta like load balancing" (sharing workload over possibly hundreds of servers in a giant cluster) actually be real?
If this is true it would be Server heaven for me as the Provider has to do all the Geek-stuff!
I have 2 cPanel servers. I wanted to transfer multiple accounts via WHM, but it gives me an error that it cannot SSH into the other server.
I tried to ping the 2nd server from the 1st server and it gives me:
root@s [~]# ping xxx.xxx.xxx.7
PING xxx.xxx.xxx.7 (xxx.xxx.xxx.7) 56(84) bytes of data.
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
ping: sendmsg: Operation not permitted
I added both servers' IPs to the allowed list in the APF firewall on each server as well.
I rebooted both servers earlier, but it still hasn't solved the problem.
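"ping: sendmsg: Operation not permitted" means the local firewall is refusing to send the packet, i.e. the OUTPUT side of iptables on the box you are pinging from, not the network or the remote server. APF's allow list mainly covers inbound trust; outbound blocking can come from its egress filtering (the EGF option in conf.apf) or from a matching deny entry. A sketch of what to check, assuming stock APF paths:

```
# /etc/apf/allow_hosts.rules -- confirm the peer's entry actually took
xxx.xxx.xxx.7

# also check:
#   /etc/apf/conf.apf        -- is EGF="1" (egress filtering) enabled?
#   /etc/apf/deny_hosts.rules -- make sure the peer isn't listed here
# then reload the rules with:  apf -r
```

If ping works after `apf -f` (flush), the firewall rules are confirmed as the culprit and you can narrow it down from there.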
I'm currently running vBulletin & two wordpress powered blogs on my box. Number of simultaneous users is about 200~250 and about 15~20 logged in users.
The memory usage is constantly in the range of 1.8-2.0 GB. Considering that the total physical RAM my box has is 2 GB, the memory usage is, I believe, exceptionally high.
The same site ran very well on a shared hosting environment and also on a VPS [with just 512 MB RAM].
What should I do to bring memory consumption down to the minimum required to run the site well?
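Before tuning anything, it's worth checking whether that 1.8-2.0 GB is really application memory: Linux counts page cache and buffers as "used", so the headline number from `free -m` overstates the pressure, and the figure that matters is used minus buffers minus cached. A tiny sketch of the arithmetic (the figures are made up for a 2 GB box):

```python
def app_used_mb(used: int, buffers: int, cached: int) -> int:
    """Memory actually held by applications, in MB, excluding cache
    the kernel can reclaim at any time."""
    return used - buffers - cached

# hypothetical `free -m` figures on a 2 GB box showing 1950 MB "used"
print(app_used_mb(used=1950, buffers=120, cached=1100))  # -> 730
```

If the real number comes out well under 2 GB like this, the box is healthy and the cache is doing its job; if applications genuinely hold most of the RAM, the usual suspects on a vBulletin + WordPress stack are Apache's MaxClients and MySQL's buffer settings.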
So I'm wondering what, if anything, I can do to get a better ping from my 1and1 server.
It's located in Wayne, PA, and my ping floats around 70ms from here in South Lake Tahoe, CA. I only ask because there are servers with Peer1 in New York, NY that give me a ping of around 20ms.