We use Cogent at our colo, and since this Monday (8/13/07) we've been getting dozens of complaints from AOL users stating that connections to our sites are very, very slow and often time out.
We tested from an AOL session and were able to confirm this. We found this odd, because it came out of the blue and nothing had changed on our end. We also get excellent all-around performance from other (non-AOL) connections, be it dialup or broadband.
We went ahead and ran a bunch of traceroutes from the AOL session. We found things get bogged down as soon as traffic hits Cogent's routers in DC (timeouts / no response). These AOL sessions have no problems whatsoever with other sites; non-Cogent-hosted sites loaded fast and without issue. We then tested loading Cogent's own website from the AOL session and saw the same sluggishness, and the same happened when we tested a couple of other Cogent client sites from AOL. They all pulled very slowly (10 to 35+ seconds per page) on an AOL/DSL session. This did not happen on the same broadband connection without AOL running on top of it; the same pages loaded very fast.
Just wondering if other Cogent clients are experiencing this? :? We have not heard from them about this, but wanted to know whether it is something isolated or not.
I am having a strange DNS issue on a Cogent circuit using Cogent DNS servers at 66.28.0.45 and 66.28.0.61. What is happening is that some domain requests will time out on the first try. Then subsequent tries will be quick, with no timeouts.
I am having a very hard time getting through to Cogent that there might be an issue somewhere, and I was wondering if anyone on a Cogent line using the same Cogent DNS servers could run a test for me and see if you can reproduce any timeouts.
How I am testing:
- Open nslookup (on Linux use: nslookup -timeout=2; Windows defaults to 2 seconds)
- Pick a random domain name (favorite cereal.com, movie title.com, brand name.com, random word.com)
- Repeat the test for the same domain if a timeout occurs, to see the next query resolve instantly
Here is an example of what is happening for me:

Code:
[eger@womp ~]# nslookup -timeout=2
> superman.com
;; connection timed out; no servers could be reached
> superman.com
Server:   66.28.0.45
Address:  66.28.0.45#53

Non-authoritative answer:
Name:     superman.com
Address:  64.12.47.7
> napaautoparts.com
;; connection timed out; no servers could be reached
> napaautoparts.com
Server:   66.28.0.45
Address:  66.28.0.45#53
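For anyone who wants to reproduce this without typing names by hand, a small probe like the one below can fire repeated queries at a resolver and log which ones time out. This is only a sketch: the resolver IP, the domain list, and the 2-second timeout are assumptions copied from the test above, and it builds a minimal DNS A-record query by hand so no third-party library is needed.

```python
import socket
import struct
import time

def build_query(name, qid=0x1234):
    """Build a minimal DNS A-record query packet for `name`."""
    # Header: id, flags (recursion desired), 1 question, 0 other records
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def probe(server, name, timeout=2.0):
    """Send one query; return round-trip time in seconds, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        start = time.time()
        sock.sendto(build_query(name), (server, 53))
        sock.recvfrom(512)
        return time.time() - start
    except OSError:          # covers both timeouts and unreachable networks
        return None
    finally:
        sock.close()

if __name__ == "__main__":
    SERVER = "66.28.0.45"            # Cogent resolver from the post above
    for domain in ["superman.com", "napaautoparts.com"]:
        for attempt in (1, 2):       # second try should be served from cache
            rtt = probe(SERVER, domain)
            status = "TIMEOUT" if rtt is None else "%.0f ms" % (rtt * 1000)
            print("%-20s try %d: %s" % (domain, attempt, status))
```

A pattern of first-try timeouts followed by instant second tries would match what the nslookup transcript shows.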
I work for a Dutch company, and server speed used to be merely slow, but at the moment it is so slow that I have to wait six and a half minutes to get a report for the job I have to do, and a bit less to browse the internal database. I sent them emails, and they told me the speed was fine and that they had even bought new servers.
What can/should I do? They told me to describe 'the pattern' of the issue, but I can't tell them much more than this. Other co-workers who also live in Buenos Aires have crappy server response times as well.
I can't keep on working this way, as I am spending way too much time and not making a good profit. What tests should I run in order for them to make things better? I sent them the time I spent waiting for the report to be done (as I told you before), but can't think of another approach.
I work as a monkey (data entry) and don't have access to root and so on, so don't expect me to be able to try techie stuff (I might ask them to do so).
We have a big problem with Giga-international: they have sluggish customer/technical support.
From the moment we signed up, they showed a sluggish assembly team. Yeah, they say 'from 3 to 7 days to setup'... they took 12 days to set up because 'half of their staff is ill'. OK, we are human... humans get sick, after all...
Two weeks later, the server needed a reboot. We tried to enter the 'instant reboot area' (yeah, we contracted for it), and the 'instant reboot area' IS OFFLINE. WTF? We sent an email. They took 2 hours to inform us that around 3 months ago they moved the instant reboot system to a new IP/site, and that the notification email also went out around 3 months ago. WE ARE NEW CUSTOMERS, ONLY 2 WEEKS WITH YOU... OK, OK... no problem... instant reboot... take it easy, man... the problem was resolved, no problem...
Two weeks after that, we needed to change the OS from Debian 4 to Windows 2003. We sent an email to Giga, and they informed us that the change wouldn't take longer than 24h if it was only an OS install. The upgrade to Windows costs a bit, of course (60€, remember); we paid through PayPal, warned our customers that the server would have 2-3 days of downtime, took the server down, and issued the ticket to Giga... 3 days later, the server is still down, the customers are enraged, and Giga HASN'T TOUCHED OUR SERVER AT ALL...
As I said, sluggish support... every time a problem shows up, Giga shows its sluggish response time and ignores the customer (yeah, that is ME)...
If any member of the Giga staff reads this and wants to resolve this issue, I will be happy to cooperate. I have emails proving all of the above.
We are nearing the end of our contract with Cogent and are deciding whether to continue with them. Bandcon has recently (within the last year or so) established its presence in the NYC metro area.
Which of the two would you choose? Please give your input and evaluation of the two networks.
So apparently my sales rep is telling me Cogent will not give me a second circuit for a redundant line. In other words, no VRRP or HSRP, nothing, unless I purchase another 200Mb contract with them.
Anyone else ever have this problem?
They aren't even willing to just charge a port fee for the second GBIC I'd take up.
I have been using Cogent for many years and have always been pretty pleased with the service and bandwidth (I know many consider it bottom-rate/budget bandwidth). I would usually be able to call in and speak with someone who could check routing, log in to switches to verify port settings, and make reverse DNS changes right then and there.
Within the last 6 months, though, I have been getting pretty poor support from them. It seems they are hiring more and more people just to be able to answer phones. The techs seem to have a hard time comprehending even simple reverse DNS requests and always ask me to hold for extended periods of time.
Today I called in and was even asked to hold right as they picked up the phone!! I mean, if you're just going to pick up to ask me to hold, why pick up in the first place until you are ready?
I am considering getting a server from take2hosting.com. Their offer is great, and sales has been very helpful and fast. They definitely left a good impression.
The downside is that they are on Cogent-only bandwidth. In the past I really had something against Cogent, mainly because one of my FDC servers was routed over Cogent and only pulled 10K/sec to Europe.
How is the Cogent network nowadays? Has it improved since a year ago? Worth considering?
From my tests, the speeds are actually really great. They are located in San Jose, and I am testing speeds to Europe. For example, to Surfnet Amsterdam (Cogent hands off the traffic to Surfnet in Amsterdam, so it's on the Cogent network all the way) I am able to pull 2.53MB/sec. That is an incredible speed for a West Coast <-> Europe transfer. It almost makes me believe Cogent has started getting its act together.
For people who would like to test speeds, please use this test file. European tests especially would be interesting, and it would be great if you could post where Cogent hands off traffic from Los Angeles to either your network or a transit network (in the US or Europe).
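If anyone wants to script the test so everyone's numbers are comparable, something like this would do it. A sketch only: `TEST_URL` is a placeholder for whatever test file gets posted, and MB here means megabytes, matching the 2.53MB/sec figure above.

```python
import time
import urllib.request

def throughput_mb_per_s(nbytes, seconds):
    """Convert a byte count and duration into MB/sec (megabytes, not megabits)."""
    return nbytes / seconds / (1024 * 1024)

def measure(url, max_bytes=100 * 1024 * 1024, chunk=65536):
    """Download up to `max_bytes` from `url`; return (bytes pulled, MB/sec)."""
    start = time.time()
    total = 0
    with urllib.request.urlopen(url, timeout=30) as resp:
        while total < max_bytes:
            data = resp.read(chunk)
            if not data:
                break
            total += len(data)
    return total, throughput_mb_per_s(total, time.time() - start)

if __name__ == "__main__":
    TEST_URL = "http://example.com/100mb.test"   # placeholder, not a real link
    try:
        total, speed = measure(TEST_URL)
        print("pulled %d bytes at %.2f MB/sec" % (total, speed))
    except OSError as exc:
        print("download failed:", exc)
```

Running it from a few European boxes and posting the MB/sec figure alongside a traceroute would show both the raw speed and where the hand-off happens.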
Right now I am hosting on a Level3/GlobalCrossing network on the East Coast. This works really well, but it shows in the price. If I could combine a couple of budget boxes into one of those quad-cores, it would save quite a lot. The question is: is Cogent trustworthy nowadays?
Thanks for your input
PS. I know that hosting on a single-homed network is not the smartest thing to do. However, they will add more carriers soon, so this will not really be an issue. I'm expecting them not to drop Cogent though, so my question remains.
Is anyone using Cogent colocation at Marina Del Rey, CA who would care to share their experience (good or bad)?
Any gotchas that we should consider?
We are considering getting a single cabinet at that location and are interested in the quality and reliability of the network and facilities. We rarely use remote hands, so that is not much of an issue.
Is there anyone out there who has their infrastructure colocated in a Cogent-owned datacenter? How stable and secure is it? The only reason I'm interested in a Cogent-owned colo is that they provide a solid SLA.
Could you guys look and see if what I am seeing is right? They officially offer Global Crossing and Cogent. So if I use the GLBX looking glass, I get this.
Code:
Trying trace from node 'Miami, FL, US' to '96.31.73.xxx'
 1  64.214.16.65 (64.214.16.65)  0.761 ms  0.608 ms
 2  so0-0-0-2488M.ar2.TPA1.gblx.net (67.17.66.165)  5.690 ms  5.695 ms
 3  WBS-CONNECT-LLC.ae0.409.ar2.TPA1.gblx.net (64.214.147.222)  5.731 ms  5.880 ms
 4  69.46.31.106 (69.46.31.106)  7.442 ms  6.667 ms
 5  node1.sarorahosting.com (96.31.73.2)  15.734 ms  15.993 ms
 6  96.31.73.xxx (96.31.73.xxx)  15.861 ms  15.795 ms
Now if I tracert from the VPS to the GLBX router, I get this.
Code:
traceroute to 64.214.16.65 (64.214.16.65), 30 hops max, 40 byte packets
 1  node1.sarorahosting.com (96.31.73.2)  0.072 ms  0.035 ms  0.008 ms
 2  69.46.31.105 (69.46.31.105)  0.731 ms  0.863 ms  1.003 ms
 3  gi0-6.na21.b001841-0.tpa01.atlas.cogentco.com (38.99.204.33)  1.147 ms  1.142 ms  1.428 ms
 4  gi4-1.core01.tpa01.atlas.cogentco.com (38.20.33.89)  0.818 ms  0.814 ms  0.807 ms
 5  po2-0.core01.mco01.atlas.cogentco.com (154.54.27.90)  148.004 ms  *  *
 6  po5-0.core01.jax01.atlas.cogentco.com (66.28.4.146)  5.847 ms  5.839 ms  5.872 ms
 7  po5-0.core01.atl01.atlas.cogentco.com (154.54.3.197)  11.953 ms  23.819 ms  23.870 ms
 8  te3-3.ccr01.atl01.atlas.cogentco.com (154.54.5.38)  11.721 ms  11.752 ms  11.787 ms
 9  te8-2.mpd01.atl04.atlas.cogentco.com (154.54.3.174)  11.962 ms  11.921 ms  11.987 ms
10  ge4-1-0-390-1000M.ar4.ATL1.gblx.net (64.208.110.97)  12.252 ms  12.359 ms  12.444 ms
11  64.214.16.65 (64.214.16.65)  16.026 ms  16.061 ms  16.594 ms
It would be great if you could share a 100MB download link that I can use to test Cogent's speed to my network, hopefully plugged into a 100Mbps port at the switch, to see if it will max out or not.
How can I control or cap traffic on a per-server basis? In other words, I have 15 servers in one cabinet, fed by a single switch (a Dell 3448). One of the servers is eating up almost all the traffic I have for the cabinet itself. Is there a way I can cap or limit the traffic quota on a per-port basis at the switch level? Or what is the best way to manage this?
I'm setting up Games for Windows VPS servers with VMware ESXi and wondering whether there is some option to control the traffic of each IP. I thought about using a Cisco ASA 5500, but I do not know if it has this option.
Imagine you want a set of servers (VPSes would be a cheaper choice, which is why I am posting here) that do not have much outbound traffic but download a lot from other servers (more or less like spiders, though I am not trying to create a web index). Disk space and memory size are not important, but port speed and monthly transfer should be as high as possible. Since inbound traffic is the less frequently used direction, I wonder if any provider offers cheaper rates for traffic like this.
I have been searching the forums and have not found much about this topic (a quite related post named "I want to download the Internet" or something similar did not reach a conclusion).
I am not sure if my dedicated server is being attacked or if it is legitimate traffic. I need help figuring out the difference and if it is an attack, how to prevent it, and if it is legitimate traffic, how to configure the server to handle the load.
Software: CentOS 5.3 (32-bit), Apache 2, MySQL 5, PHP 5

When I do ps aux|grep httpd|wc -l I get a current connected client count of 259, which is always maxing out my MaxClients of 256. I increased it to 512, and it maxed out; I increased it to 1024, and it maxed out; lastly I set it to 2048, which works but slows the entire server down.
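Before raising MaxClients further, it is worth checking whether those workers are serving many real visitors or a handful of abusive peers. A sketch like this tallies ESTABLISHED connections per remote IP (it assumes the output format of `netstat -tn`; the port and top-10 cutoff are arbitrary choices):

```python
import subprocess
from collections import Counter

def count_peers(netstat_output, port=80):
    """Tally ESTABLISHED connections per remote IP from `netstat -tn` output."""
    counts = Counter()
    for line in netstat_output.splitlines():
        fields = line.split()
        # Expected columns: proto recv-q send-q local-addr remote-addr state
        if len(fields) >= 6 and fields[0] == "tcp" and fields[5] == "ESTABLISHED":
            local, remote = fields[3], fields[4]
            if local.rsplit(":", 1)[1] == str(port):
                counts[remote.rsplit(":", 1)[0]] += 1
    return counts

if __name__ == "__main__":
    try:
        out = subprocess.run(["netstat", "-tn"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # netstat not installed on this box
    for ip, n in count_peers(out).most_common(10):
        print("%4d  %s" % (n, ip))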
Recently I noticed the load on one of my servers was way beyond what I would expect it to be. I run multi-processor servers, and even during a backup the load is only around 1.5.
But lately I noticed peak loads that high under normal web traffic.
I know 1.5 is low on a multi-processor server, but I am hoping to add much more to those machines, and with sustained load that high there is no room for expansion. The servers are not cheap, so adding another server to the cluster can only be done if I make money from the last one I added.
I checked the traffic levels, and they were very high. After further review, I found some bots hitting sites at over 1,200 pages a minute. Multiply that by a few hundred bots, and clearly I could have a load issue. The potential is there to bring any server to its knees when delivering those volumes.
I wrote a program to watch connections and block the abusive bots. While logging, I became aware of over 600 bots crawling my servers. Many bots were from Japan, China, Germany, and so on: useless to my customers even if they are legit search indexes.
Another problem I see is that the bots are running from many IP addresses and hitting the same sites from multiple IPs at the same time. Why would they need to do that?
Among other things, I decided to validate Googlebot, MSN, and Yahoo with DNS lookups so I could determine that they were actually their bots and not impostors. In 24 hours, I found valid bots from the big three hitting one server from 1,100 different IPs.
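The usual way to do that validation is forward-confirmed reverse DNS: reverse-resolve the connecting IP, check that the hostname ends in the engine's domain, then forward-resolve that hostname and confirm it maps back to the same IP. A minimal sketch, with the caveat that the suffix list here is an assumption and real deployments should use the suffixes each engine publishes:

```python
import socket

# Hostname suffixes the major crawlers resolve under (assumed list;
# check each engine's own documentation for the authoritative suffixes).
CRAWLER_SUFFIXES = (".googlebot.com", ".google.com", ".search.msn.com",
                    ".crawl.yahoo.net")

def has_crawler_suffix(hostname):
    """Pure check: does the PTR hostname belong to a known crawler domain?"""
    return hostname.lower().rstrip(".").endswith(CRAWLER_SUFFIXES)

def is_genuine_crawler(ip):
    """Forward-confirmed reverse DNS: PTR -> known suffix -> A record -> same IP."""
    try:
        hostname = socket.gethostbyaddr(ip)[0]              # reverse lookup
        if not has_crawler_suffix(hostname):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]  # forward lookup
        return ip in forward_ips                            # must round-trip
    except OSError:
        return False
```

The forward step matters: anyone controlling reverse DNS for their own IP block can make a PTR record say "googlebot.com", but they cannot make Google's forward zone point back at their IP.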
Now we are looking at thousands of valid bots and thousands more email harvesters and content thieves.
As a host, I find the number of sites I can put on a server is greatly reduced by the bot traffic. My customers do not want to hear that their website was being crawled at 3,000 pages a minute and that is why they could not access it. Of course they will blame it on me.
I was able to filter the bots at the firewall level and drop connections based on reverse DNS lookups and site crawl rates, and my server load sits around 0.05 most of the time, even with hundreds of pages a minute being accessed.
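The crawl-rate half of a filter like that can be as simple as a sliding-window counter per source IP. A sketch of the idea; the threshold and window are made-up numbers, and a real version would feed the flagged IPs to the firewall rather than just return a flag:

```python
import time
from collections import defaultdict, deque

class CrawlRateTracker:
    """Flag IPs exceeding `max_hits` requests within any `window`-second span."""

    def __init__(self, max_hits=300, window=60.0):
        self.max_hits = max_hits
        self.window = window
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def record(self, ip, now=None):
        """Record one request; return True if the IP is now over the limit."""
        now = time.time() if now is None else now
        q = self.hits[ip]
        q.append(now)
        while q and q[0] <= now - self.window:   # expire timestamps outside window
            q.popleft()
        return len(q) > self.max_hits
```

Hook `record()` into whatever sees each request (an access-log tail works) and ban any IP it flags; legitimate crawlers that honor crawl-delay rarely come near a few hundred pages a minute from one IP.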
I am wondering how the rest of you hosts deal with this problem. Do you leave it up to your hosting customers? Or do you have some type of filter to get rid of the bots?
When you have a few sites it is not really a problem, but as you grow it spirals exponentially out of control.
I have a VPS with iptables, but I'm getting too much inbound traffic (RX); there are too many packets received from random ports, over both UDP and TCP. Today, in just 14 hours, I got 2.8 GiB of traffic without any connections for web, email, etc. (I stopped all the services). How can I stop this? It's going to burn through all my monthly traffic.