Imagine you want a set of servers (VPSs would be the cheaper choice, which is why I am posting here) that do not generate much outbound traffic but download heavily from other servers (more or less like spiders, though I am not trying to build a web index). Disk space and memory size are not important, but port speed and monthly transfer should be as high as possible. Since outbound is the less-used direction, I wonder if any provider offers cheaper rates when the traffic is mostly inbound like this.
I have been searching the forums and have not found much on this topic (a closely related thread named "I want to download the Internet" or something similar never reached a conclusion).
I have two IPs bound on a Windows 2003 server. These two IPs have different network routes (one uses network A, one uses network B). Obviously for outbound traffic I can freely choose which IP to use (I simply choose to use [url] or [url]), but I wonder if it's possible to tell the server which IP it should use for inbound traffic when I need to download something from the internet to the server.
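One thing worth noting: a download is still an *outbound connection* (the payload just flows inward), so the IP it arrives on is chosen when the connecting socket is created. Most tools can bind to a specific local address first; a minimal Python sketch, where "192.0.2.10" is a placeholder for one of the server's two IPs:

```python
import socket

def connect_from(local_ip, remote_host, remote_port):
    """Open an outbound TCP connection that uses a chosen local IP.
    The download then arrives on that IP and takes its route."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))   # port 0 = let the OS pick the source port
    s.connect((remote_host, remote_port))
    return s
```

Command-line tools offer the same knob, e.g. wget's `--bind-address=192.0.2.10` (again, a placeholder address).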
I am moving into the world of dedicated servers (from VPS). I just got a server from Serveraday.com /OLM.net.
When I was doing bandwidth tests, I found the server's inbound speed was much slower than outbound. I tried downloading a bunch of different providers' 10 MB test files from the command line of my server using wget. They all came in at around 20-30 Kbps.
When I take those same 10MB files and serve them from my dedicated box, the results are much different. My server can push the files out at over 1Mbps.
Why would my server be set up this way, and is this normal behaviour? I sent a ticket to OLM, but their support seems to take a long time, so I figured I would bounce the question off all of you here on WHT.
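One thing to rule out before blaming the provider: wget reports speed in KB/s (kilobytes), while port speeds are quoted in Mbps (megabits), so the two figures aren't directly comparable. A quick conversion sketch:

```python
def kbytes_per_s_to_mbps(kb_per_s):
    """Convert a wget-style KB/s figure to megabits per second
    (1 KB = 1024 bytes, 1 byte = 8 bits)."""
    return kb_per_s * 1024 * 8 / 1_000_000

# 30 KB/s is roughly 0.25 Mbit/s -- far below even a 10 Mbit port,
# while 1 Mbit/s outbound corresponds to about 122 KB/s in wget terms.
slow_inbound = kbytes_per_s_to_mbps(30)
```

If the inbound figure really is that low even in megabit terms, rate limiting or shaping on the provider's side is a plausible suspect.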
I'm using the free edition of MailEnable and need to configure each post office to copy all incoming and outgoing email to one of the email accounts on the same post office.
Is there a way to configure this?
I know I can configure mail forwarding on incoming mail per account, but I need to do it for all accounts (except the audit account).
e.g. anythinghere@dbnetsolutions.co.uk incoming or outgoing would be copied to audit@dbnetsolutions.co.uk
With the standard DNS layout, every customer has an MX entry like MX 10 mail.customerdomainexample.com.
The problem is that sending mail servers get a TLS warning, because the mail hostname does not match mail.companydomainexample.com, which is the domain with a valid SSL certificate pointing to the same server.
Wouldn't it make sense to change the default template to mail.companydomainexample.com since it is the same machine anyway?
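For what it's worth, an MX record is allowed to point at a host in a different domain, so the suggested template change is legal DNS. A minimal sketch in a BIND-style zone file, using the two example domains from the post:

```
; customerdomainexample.com zone -- point MX at the hosting company's
; certificate-matching mail host instead of a per-customer name
customerdomainexample.com.   IN   MX   10   mail.companydomainexample.com.
```

Senders then connect to mail.companydomainexample.com and see a certificate matching the hostname they were given, which removes the TLS warning.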
I am having problems with the inbound SMTP sockets: we are under constant attack from spammers, and they are taking up all the sockets we have open for our users. We have enabled SPF, greylisting, inbound access control through authentication, and relay access with authentication as well, but after some weeks we are still in the same situation. We also have SpamAssassin installed via the Plesk Power Pack, and we have added the DNSBLs b.barracudacentral.org, bl.mailspike.net and bl.spamcop.net, but we still suffer from this problem.
We also tried increasing the assigned sockets to 200, and after a few minutes they were all used up again, and CPU usage increased to 25% of total capacity.
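One common mitigation is to rate-limit new SMTP connections per source IP at the firewall, so a single spammer can't exhaust the socket pool before the content filters even run. A sketch using iptables (the thresholds are guesses to tune, not recommendations):

```
# Reject sources holding more than 5 simultaneous connections to port 25
iptables -A INPUT -p tcp --syn --dport 25 -m connlimit --connlimit-above 5 -j REJECT

# Drop sources opening more than 10 new SMTP connections per minute
iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
    -m recent --name SMTP --update --seconds 60 --hitcount 10 -j DROP
iptables -A INPUT -p tcp --dport 25 -m state --state NEW \
    -m recent --name SMTP --set
```

The connlimit rule caps concurrency per IP; the recent-match pair caps connection rate per IP. Legitimate mail servers retry, so a dropped connection is deferred rather than lost.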
I ran the script in KB article 123160 [1] to disable SSLv3 and avoid the POODLE vulnerability, but I recently discovered that this has caused all inbound emails to bounce. The bounce message says, "TLS Negotiation failed."
1/ What is the difference between maillog and maillog.processed? I want to keep a permanent record of all mail inbound and outbound even if delivery is deferred by the gray listing. I'm not sure which one is the best to keep.
2/ I would like to change the way the mail logs are rotated. I am struggling to work out exactly what happens at the moment, but I would like to rotate the log out every day regardless of size. I think currently maillog.processed is rotated daily only if it is over a specific size.
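Assuming the goal is plain time-based rotation, a hypothetical logrotate stanza would look like the following. The path is a guess at where Plesk keeps maillog on this system, and Plesk's own built-in rotation for that file may need to be disabled first so the two mechanisms don't fight:

```
# /etc/logrotate.d/maillog (hypothetical) -- rotate daily regardless of
# size, keep 30 days of history
/usr/local/psa/var/log/maillog {
    daily
    rotate 30
    compress
    missingok
    notifempty
}
```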
How can I control or cap the traffic on a per-server basis? In other words, I have 15 servers in one cabinet, all fed by a single switch, a Dell 3448. One of the servers is eating up almost all the traffic for the cabinet itself. Is there a way I can cap or limit the traffic quota on a per-port basis at the switch level? Or what is the best way to manage this?
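If the switch firmware turns out not to support per-port rate limiting, the cap can also be applied on the offending server itself with Linux `tc`. A sketch assuming the interface is eth0 and a 10 Mbit/s cap (both assumptions to adjust):

```
# Cap egress on eth0 to 10 Mbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms

# Remove the cap again
tc qdisc del dev eth0 root
```

Note that tc only shapes traffic the server sends; inbound traffic can only be policed (dropped) from the server side, so a switch- or router-level limit is still the cleaner fix if available.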
I'm setting up Windows VPS game servers with VMware ESXi and wonder whether there is some option to control the traffic of each IP. I thought about using a Cisco ASA 5500, but I do not know if it has this option.
I am not sure if my dedicated server is being attacked or if it is legitimate traffic. I need help figuring out the difference and if it is an attack, how to prevent it, and if it is legitimate traffic, how to configure the server to handle the load.
Software: CentOS 5.3 32-bit, Apache 2, MySQL 5, PHP 5. When I do ps aux|grep httpd|wc -l I get a current connected client count of 259, which is always maxing out my MaxClients of 256. I increased it to 512 and it maxed out; I increased it to 1024 and it maxed out; finally I set it to 2048, which works but slows the entire server down.
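Before raising MaxClients further, it helps to see whether the connections come from many visitors or a few abusive sources. A small pipeline that counts established connections per remote IP (it assumes the net-tools `netstat`; `ss -nt` produces similar columns on newer systems):

```shell
# Count established connections per remote IP -- a handful of IPs holding
# hundreds of slots usually means an attack or runaway crawler, while an
# even spread across many IPs suggests genuine load.
netstat -nt | awk '$6 == "ESTABLISHED" {split($5, a, ":"); print a[1]}' \
    | sort | uniq -c | sort -rn | head -20
```

If one or two IPs dominate the list, blocking or rate-limiting them is a much cheaper fix than growing MaxClients, which just multiplies memory use per idle attacker.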
Recently I noticed the load on one of my servers was way beyond what I would expect it to be. I run multi-processor servers, and even during a backup the load is only around 1.5.
But lately I noticed peak loads that high under normal web traffic.
I know 1.5 is low on a multi-processor server, but I am hoping to add much more to those machines, and with sustained load that high it leaves no room for expansion. The servers are not cheap, so adding another server to the cluster can only be done if I make money from the last one I added.
I checked the traffic levels and they were very high. After further review I had some bots hitting sites at over 1200 pages a minute. Multiply that by a few hundred bots and clearly I could have a load issue. The potential is there to bring any server to its knees when delivering those volumes.
I wrote a program to watch connections and block the abusive bots. While logging I became aware of over 600 bots crawling my servers. Many bots were from Japan, China, Germany and so on, useless to my customers even if they are legitimate search indexes.
Another problem I see is that the bots are running from many IP addresses and hitting the same sites from multiple IPs at the same time. Why would they need to do that?
Among other things, I decided to validate Googlebot, MSN and Yahoo with DNS lookups so I could determine that they were actually those companies' bots and not impostors. In 24 hours I found valid bots from the big three hitting one server from 1,100 different IPs.
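The validation described here is the double lookup the major engines document: reverse-resolve the IP, check the name ends in a known crawler domain, then forward-resolve that name and confirm it maps back to the original IP (which defeats spoofed PTR records). A sketch, with the pure check separated from the live DNS calls; the suffix list is an assumption to extend:

```python
import socket

# Assumed official-crawler suffixes -- extend as needed.
CRAWLER_SUFFIXES = (".googlebot.com", ".google.com",
                    ".search.msn.com", ".crawl.yahoo.net")

def looks_official(reverse_name, ip, forward_ips, suffixes=CRAWLER_SUFFIXES):
    """Pure check: the reverse name must carry a known suffix AND
    forward-resolve back to the original IP."""
    return reverse_name.endswith(suffixes) and ip in forward_ips

def verify_crawler(ip):
    """Live wrapper: reverse lookup, then forward-confirm."""
    try:
        reverse_name = socket.gethostbyaddr(ip)[0]
        forward_ips = socket.gethostbyname_ex(reverse_name)[2]
    except (socket.herror, socket.gaierror):
        return False
    return looks_official(reverse_name, ip, forward_ips)
```

Since each verification costs two DNS round trips, caching verified IPs for a day or so keeps this cheap at the traffic volumes described.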
Now we are looking at thousands of valid bots, and thousands more email harvesters and content thieves.
As a host, the number of sites I can host on a server is greatly reduced by the bot traffic. My customers do not want to hear that their website was being crawled at 3,000 pages a minute and that is why they could not access it. Of course they will blame it on me.
I was able to filter the bots at the firewall level and drop connections based on reverse DNS lookups and site crawl rates, and my server sits around 0.05 load most of the time, even with hundreds of pages a minute being accessed.
I am wondering how the rest of you hosts deal with this problem. Do you leave it up to your hosting customers? Or do you have some type of filter to get rid of the bots?
When you have a few sites it is not really a problem, but as you grow it spirals exponentially out of control.
I have a VPS with iptables, but I am getting too much traffic (RX): there are too many packets received on random ports, over both UDP and TCP. Today, in just 14 hours, I got 2.8 GiB of traffic without any connections for web, email, etc. (I have stopped all the services). How can I stop this? It's going to burn through all my monthly traffic.
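With all services stopped, a default-drop input policy at least stops the kernel from replying to the noise. A minimal sketch (port 22 is assumed for SSH; adjust before applying, or you will lock yourself out):

```
# Drop everything inbound except loopback, established flows, and SSH
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # keep your SSH port open!
```

The caveat: dropped packets still cross the provider's network and may still be counted against the monthly quota, since metering usually happens upstream of your firewall. If the RX volume is the real problem, ask the provider to filter it at their edge.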
I've only ever had a shared hosting account with Hostgator, plus a few freebie hosts. However, I'm now pulling some heavy traffic and I'm concerned that Hostgator is going to suspend me soon.
My traffic on Saturday, for example, was ~2,600 unique visitors and ~5,000 page views, all of it from WordPress blogs and a small SMF forum. I have since converted one of the blogs to a static site to limit my CPU usage, and I've set up caching for my other WordPress blogs. Advice I've heard on the Hostgator forums is that 7,000 page views per day for a database-driven site is around the time you should be upgrading, and based on my traffic from Saturday (which admittedly was a bit of a spike) I could potentially be receiving 150,000 page views/month, which puts my daily traffic not far below the point at which they recommend upgrading.
Anyhow, in a nutshell, I need to upgrade or risk Hostgator throwing a tantrum at me, but I don't have a lot of cash to pay for an upgrade. Due to my lack of cashflow I've been considering moving to a VPS. The company which has interested me the most is HostV.com, who offer a 256 MB plan (with 1000 MB 'burst' RAM) for only US$39.99, which seems quite reasonable to me.
They say that their 256 MB plan should be able to handle over 5,000 page views per day for a WordPress-run site, but I'm a little suspicious. Do any of you know if this is a reasonable expectation for a 256 MB chunk of a virtual server? I have no idea and am always wary of believing the sales pitch of a random company on the other side of the world.
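A quick sanity check is to convert page views per day into requests per second, which shows why volume alone isn't the constraint. A sketch (the 10x peak factor is a guess, not a measurement):

```python
def avg_requests_per_sec(page_views_per_day):
    """Average page request rate over a 24-hour day (86,400 seconds)."""
    return page_views_per_day / 86_400

avg = avg_requests_per_sec(5_000)   # ~0.06 requests/s on average
peak = avg * 10                     # assume a 10x busy-hour peak (a guess)
```

Even at peak this is well under one page view per second, so the real question for a 256 MB VPS is memory per concurrent PHP worker, not raw traffic; with the caching already set up, most hits never touch PHP at all.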
I just want to ask: my ISP told me my server is generating high traffic to the outside, and pasted me their traffic log showing one IP address (xx.xx.xx.xx).
They rebooted my server and the problem disappeared, but I need to check what has been going on. Where do I start? The only information I have is the IP xx.xx.xx.xx.
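A reasonable first pass is to search every log for the IP the ISP quoted and to check whether anything is still talking to it. A sketch, keeping the ISP's placeholder IP as-is (substitute the real address):

```shell
# Search all system and service logs for the IP the ISP reported
grep -r "xx.xx.xx.xx" /var/log/ 2>/dev/null

# See whether any process is currently connected to it
netstat -ntup 2>/dev/null | grep "xx.xx.xx.xx"
```

If nothing turns up, the process that generated the traffic likely died with the reboot; checking for unfamiliar cron entries, recently modified binaries, and unexpected listening ports is the usual next step, since outbound floods often mean a compromise.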
I just recently upgraded my website from WordPress to WordPress Mu.
Everything went smoothly except for one problem. On WordPress, all my posts would appear at [url], but with WordPress Mu it is now [url]. So whenever someone visits [url] or [url] they are given a 404 error because the post no longer exists at that location.
I know there is a way, like a wildcard redirect or something, that makes it so that whenever anyone visits [url]anything it would redirect to [url]whatever else was typed, no? I can't figure out how to search for that exactly, and I've tried reading through the .htaccess docs but can't figure out how to make this work.
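This is usually done with mod_rewrite in .htaccess. Since the actual URLs aren't shown, the sketch below assumes old posts lived at /post-name/ and WordPress Mu now serves them at /blog/post-name/ -- the pattern and target must be adjusted to the real structure:

```
# Hypothetical: 301-redirect old /post-name/ URLs to /blog/post-name/
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/blog/
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)$ /blog/$1 [R=301,L]
```

The two filename conditions keep real files and directories (images, wp-admin, etc.) from being redirected, and R=301 tells search engines the move is permanent.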
I have four subdomains on qisoftware.com, and most show network traffic between 30-34%, with unresolved traffic around 12-14%. Is that network traffic statistic high? What would be considered normal?
A proxy server can mask IP address, right? Does a proxy server show up as network traffic in site statistics reports?
Okay, maybe that's enough questions for right now. I have been researching the internet for terms but I am not finding what would be considered normal.
An ad network requires my website to have a certain amount of traffic for x days to qualify, but they won't provide stats and have asked me to log the stats myself.
For incoming traffic stats I already use AWStats etc., but is there anything available for logging outgoing traffic as well?
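One low-tech option: the web server's access log already records bytes sent per response, so summing that field gives outgoing HTTP volume. A sketch for Apache's combined log format, where bytes sent is the 10th field (the log path is an assumption; tools like vnstat can log total interface traffic if you need more than HTTP):

```shell
# Total outgoing HTTP bytes recorded in an Apache combined-format log
# (field 10 is the response size; "-" entries are skipped by the regex).
awk '$10 ~ /^[0-9]+$/ {sum += $10} END {print sum + 0}' \
    /var/log/apache2/access.log
```

Run daily (e.g. from cron against each rotated log) this gives a simple per-day outbound record to show the ad network.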