Has anyone had problems with high CPU load after upgrading Apache to 2.2.8? We were running 1.3.5 with comfortable CPU loads of 2-10 on dual Xeon 2.8s. Now the loads are 20-70, with most of the CPU being used by many, many httpd processes. I've heard that Apache 2 does consume more resources, but I'm wondering whether there was a problem with the build, or whether it really is that much more demanding.
OTHER INFO:
WHM: 11.15
cPanel: 11.18
OS: CentOS 4.6
Kernel: 2.6.20.4-ts.grh.mh.i386
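One quick sanity check on the new build is to confirm which MPM it was compiled with and how it's configured, since prefork with a high MaxClients behaves very differently from 1.3. Just a sketch; the paths below are the usual cPanel defaults and may differ on your box:
Code:
/usr/local/apache/bin/httpd -V | grep -i mpm     # shows the compiled-in MPM (prefork/worker)
/usr/local/apache/bin/httpd -l                   # lists statically compiled modules
grep -iE 'MaxClients|StartServers|KeepAlive' /usr/local/apache/conf/httpd.conf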
At the time I took this particular snapshot, it wasn't near its peak... it's not uncommon for the CPU load to approach 60%. Reading around, it seems the CPU load should normally be under 1% (such as 0.0139% or what-not). Is this true?
The weird thing though... I have no idea where that number is coming from, because according to "top", the CPU is actually 90% idle.
I actually just raised MaxClients from 512 to 1024 because I was hitting a constant cap of 40 requests/sec... and I was worried it was going to bottleneck. Since raising that value, the requests per second seem to have freed up.
If the actual CPU of the server is 90% idle... am I okay? Anyone know where Apache's getting the CPU Load info from?
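If that "CPU load" figure is coming from Apache's server-status page, it isn't the system load average at all; it's the CPU time the httpd children themselves have accumulated relative to uptime (only shown when ExtendedStatus is on), which would explain why top can show 90% idle at the same time. A quick sketch to see the raw figure, assuming mod_status is reachable on localhost:
Code:
curl -s "http://localhost/server-status?auto" | egrep 'CPULoad|ReqPerSec|BusyWorkers|IdleWorkers'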
If I kill them, they don't reappear for some time (a day or more), but then it happens again and again. One day there were as many as 8 similar processes, which kept all 4 cores at 100% (even the SSH console was extremely slow to work with).
I think somebody is attempting some sort of small attack, but I need to verify that first. I tried looking at the Apache logs, but there were too many POST requests from different IPs and no duplicates within a short period, so I had no success.
Anyway, that script has worked for us for 4 years already, and we never had any problems with it, even on our old single-core P4 2.8 GHz.
Is there a way to make sure whether this is an attack of some sort, or whether the script just doesn't work correctly on our new machine?
Are there any ways to get the IPs of visitors who are hitting posting.php and overloading the CPU?
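One rough way to narrow it down is to count how many requests each IP makes to posting.php in the access log. Just a sketch; the log path is a placeholder, so point it at the vhost's combined log:
Code:
grep 'posting.php' /path/to/access_log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20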
I've been checking out this site for a while and finally decided to register because I have a problem. I also hope this is the correct forum for this topic; sorry if it isn't.
So I have a problem with Apache. One of the sites that I run/host has a moderately large vBulletin board, and Apache just seems to eat up the CPU. Load averages have shot up to between 20 and 30, and I've seen them as high as 80. Apache and MySQL are optimized already, and I'm using suPHP for security because there are other sites on this box.
The funny thing about this is that it only started happening about a week ago. After checking for rootkits and all that garbage, I reinstalled the OS just to be on the safe side. Everything still comes back clean. I also got fed up and hired Platinum Server Management for a month to see if they could find a solution (and I've been interested in reselling their services, but that's not relevant). So far the only thing they can come up with is to disable suPHP, which isn't an option. I do realize that suPHP is ~20-25 times slower than mod_php, but what totally baffles me is that it worked beforehand and then started going crazy like this. I did try running the site using a DSO configuration; the load did drop, but nothing to be proud of.
This site, and the server overall, hasn't seen any increase in traffic, and I've held off putting new accounts on it until I get this fixed.
In the meantime, I have said forums running on lighttpd, which lowered the load. (I'm also writing a tutorial on getting lighty to work with cPanel.)
I deal with a server that gets positively slammed once a week for a few months per year. I'd tell you how many hits we got tonight, but I'm still waiting for AWStats to chew through the 2 GB access_log file...
Tonight, I made some changes that SEEM to work, but I'm not sure what the long-term effects could be. If we have any Apache experts on the forums, I'd really like to bend your ear for a few minutes to see what you know.
Obviously, with PHP, we're limited to prefork MPM.
First of all, I dropped Timeout from 300 to 120. That should be MORE than enough time to know that we've timed out. Then I dropped KeepAliveTimeout to 5 from 15.
Here's the radical one. Watching the process list and the load, it seemed that load spiked when the processes hit the end of their useful life and respawned. Duh. This was happening every four seconds at the load we were under. MaxRequestsPerChild was set to 10,000. I upped this to 80,000 over a period of hours while we were under load. I didn't see any significant memory leakage, but it's the change I'm worried about the most. I've seen Apache do some bad things when people allow this to go unlimited, and had always used the relatively low default as a guide.
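For reference, here's roughly what those directives look like together in httpd.conf; the values are the ones described above, and the KeepAlive On line is an assumption (KeepAliveTimeout only matters when keep-alive is enabled):
Code:
Timeout 120
KeepAlive On
KeepAliveTimeout 5
<IfModule prefork.c>
    MaxRequestsPerChild 80000
</IfModule>
The trade-off with a high MaxRequestsPerChild is exactly the one mentioned: any memory a child leaks sticks around much longer before that child is recycled.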
Besides not loading a bunch of dynamic modules (also done, I usually do this so I'm not worried about it), what else can I do tuning-wise to keep load down? Please note that caching and load-balancing aren't acceptable solutions; I have one server to work with (for now) and the boss says no to caching because of how frequently our information updates. We also have extensive .htaccess files, so there's no LHTTPD in my future.
I have a site that is eating up my server resources and need to know what the best solution for this is. I'm thinking of getting another server just for mysql but do not know what specs the server should be to handle the current traffic/database load and have the site run smoothly without slowing down to a snail's pace.
An alternative is to get another server just for the videos being served and leave the database and html on the current server. This is where I'm stuck and don't know what route to take with this.
I've attached screenshots of top and bandwidth usage per day. Hopefully with this information you could tell me if I need another server or if there are any things I can do to the current server to help things move faster.
I have been running into a high-load problem lately. I have one of those cheap 1and1 servers, which was running fine until 2 weeks ago. After I rebooted it accidentally, it did not come back up due to some unrepairable kernel errors, and I had to re-image it.
I chose to re-image the server with CentOS 5 for better support. The new image worked fine for a few days, or at least I thought so, but now I am getting high loads. The server crashes if not monitored constantly, as the load is unpredictable.
Just restarting Apache brings the server back to normal, but I am not sure whether Apache or some other script is to blame. I have been monitoring through Apache's server-status, but I cannot spot anything unusual during the high-load moments.
12:00:29 AM   CPU   %user   %nice   %system   %iowait   %steal   %idle
12:10:01 AM   all    9.14    0.00      5.52     44.66     0.00   40.68
12:20:14 AM   all    6.83    0.00      3.98     27.88     0.00   61.32
12:30:10 AM   all    6.44    0.00      4.20     81.25     0.00    8.11
12:40:09 AM   all    5.25    0.00      4.09     81.93     0.00    8.73
12:50:15 AM   all    5.11    0.00      3.79     90.74     0.00    0.36
01:00:07 AM   all    7.22    0.00      4.52     57.11     0.00   31.15
01:10:13 AM   all    6.89    0.00      4.01     55.38     0.00   33.71
01:20:14 AM   all    4.37    0.00      3.27     41.88     0.00   50.48
01:30:25 AM   all    4.26    0.00      3.29     63.42     0.00   29.03
01:40:06 AM   all   27.18    0.00      4.75     58.27     0.00    9.80
01:50:03 AM   all   29.64    0.00      6.61     51.50     0.00   12.25
02:00:07 AM   all   27.00    0.00      8.48     55.49     0.00    9.03
02:10:10 AM   all   19.29    0.00      4.97     73.80     0.00    1.94
02:20:04 AM   all   37.85    0.00      6.78     40.70     0.00   14.67
02:30:05 AM   all   15.65    0.00      4.80     68.47     0.00   11.08
02:40:08 AM   all    9.06    0.00      5.60     37.49     0.00   47.86
02:50:07 AM   all    5.36    0.00      3.62     42.29     0.00   48.73
03:00:02 AM   all    6.05    0.00      4.08     47.27     0.00   42.60
03:10:02 AM   all    4.22    0.00      3.68     38.17     0.00   53.93
03:20:02 AM   all    4.06    0.00      3.75     41.37     0.00   50.82
03:30:22 AM   all    4.42    0.00      3.93     45.25     0.00   46.41
03:40:11 AM   all    4.34    0.00      3.95     39.58     0.00   52.13
03:50:02 AM   all    4.67    0.00      4.01     32.53     0.00   58.80
04:00:08 AM   all    3.72    0.00      3.87     28.40     0.00   64.02
04:10:02 AM   all   13.49    0.00      6.58     20.82     0.00   59.10
04:20:01 AM   all    6.70    0.00      4.63      6.06     0.00   82.61
04:30:02 AM   all    1.44    0.00      1.21      4.75     0.00   92.59
04:40:01 AM   all   12.42    0.00      8.12      7.65     0.00   71.81
04:50:02 AM   all    1.43    0.00      1.07      4.02     0.00   93.47
05:00:02 AM   all    1.60    0.00      1.40      8.62     0.00   88.38
05:10:10 AM   all    3.80    0.00      3.02     17.86     0.00   75.32
05:20:06 AM   all    5.10    0.00      4.22     23.34     0.00   67.34
05:30:02 AM   all    1.54    0.00      1.40     11.22     0.00   85.85
05:40:05 AM   all    1.75    0.00      1.89     13.12     0.00   83.23
05:50:12 AM   all    2.15    0.00      2.22     18.92     0.00   76.72
06:00:02 AM   all    1.92    0.00      2.01     12.87     0.00   83.20
06:10:02 AM   all    2.27    0.00      2.16     11.53     0.00   84.04
06:20:03 AM   all    3.56    0.00      3.02     25.26     0.00   68.16
06:30:10 AM   all    2.66    0.00      2.05     18.13     0.00   77.16
06:40:02 AM   all    2.58    0.00      2.25     22.87     0.00   72.30
06:50:02 AM   all    2.68    0.00      1.92     15.77     0.00   79.63
07:00:03 AM   all    3.06    0.00      2.48     26.01     0.00   68.46
07:10:03 AM   all    3.65    0.00      3.20     36.54     0.00   56.61
07:10:03 AM   CPU   %user   %nice   %system   %iowait   %steal   %idle
07:20:03 AM   all    4.40    0.00      3.28     43.86     0.00   48.46
07:30:02 AM   all    4.10    0.00      3.17     31.30     0.00   61.43
07:40:06 AM   all    7.67    0.00      3.95     50.79     0.00   37.59
07:50:02 AM   all    4.72    0.00      3.11     44.30     0.00   47.86
08:00:03 AM   all    5.57    0.00      3.72     47.15     0.00   43.56
08:10:07 AM   all   10.66    0.00      3.59     71.62     0.00   14.13
08:20:17 AM   all    5.67    0.00      3.42     58.81     0.00   32.10
08:30:10 AM   all   11.12    0.00      3.49     76.71     0.00    8.67
08:40:03 AM   all    7.00    0.00      3.36     47.94     0.00   41.71
Average:      all    7.53    0.00      3.76     38.90     0.00   49.81
Some configurations: the re-image partitioning looks like this:
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 15
model           : 3
model name      : Intel(R) Pentium(R) 4 CPU 2.80GHz
stepping        : 4
cpu MHz         : 2793.324
cache size      : 1024 KB
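Given how dominant %iowait is in that sar output, it might be worth watching the disk directly while the load is high. A quick sketch using the same sysstat package that provides sar, plus vmstat:
Code:
iostat -x 5 3     # extended per-device stats: watch %util and await
vmstat 5 3        # watch the b (blocked) and wa columns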
So my server has been "unresponsive" for about 18 hours, BurstNET hasn't answered my tickets, and I don't know what to do. I've been running this setup for almost 5 months with no problems; no changes have been made to the hardware or software.
My VPS holds about 80 domains and low-use accounts.
Every night, from around 1.30am, the load suddenly skyrockets and will usually be around 5 to 10 for a few hours. Occasionally it'll spike to 30+ for a few minutes.
I had some antispam software running, and a couple of other packages (mail queues, mail management, etc.), so I disabled all of that and removed all the crontab entries and so on.
It's not really made any difference.
I can see the load stats going back 8 hours, as part of the ASSP spam package (I've just left the ASSP server load cron running just so I can continue monitoring it!)
Can the apparent load on my VPS be caused by other VPSes on the same node? In other words, is my load actually fine but being affected by other people's VPSes?
I hope that makes sense. I'm 99% sure that my VPS is 'clean' (as far as cron entries go).
I'm asking the question because I took a second VPS on the same node, and that one too has high loads overnight when there's nothing running on it (i.e., no add-on software and no cPanel accounts added).
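One thing that can be checked from inside the guest, at least on Xen/KVM-style virtualization, is the steal column; if %st / %steal climbs during those overnight spikes, the CPU time is going to other guests on the node. A small sketch (on OpenVZ this won't tell the whole story):
Code:
top -b -n 1 | head -5        # the Cpu(s) line includes %st on kernels that report steal time
sar -u 1 5                   # %steal column, if sysstat is installed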
I have a VPS with the LXadmin control panel and 1024 MB of RAM.
The memory usage was always 333-450, but now it is more than 800 and the VPS stopped. I restarted it many times, but after each restart the memory goes crazy again.
So, how can I find out what is causing the high load and eating my memory?
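A quick way to see what's eating the memory is to sort the process list by resident size; and if this is an OpenVZ-based VPS (a guess, since LXadmin is commonly deployed on OpenVZ or Xen), the beancounters show whether a container limit is being hit:
Code:
ps aux --sort=-%mem | head -15      # biggest memory consumers
ps aux --sort=-%cpu | head -15      # biggest CPU consumers
cat /proc/user_beancounters         # OpenVZ only: a non-zero failcnt means a limit was hit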
I was previously using MaxClients 256 in httpd.conf, and the load was normal even when the server was processing 256 requests.
Recently I changed it to 320, because of which my load increased a lot. Now that I have decreased it back to 256, the load is still high even when the number of requests is just 150.
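One common sanity check is whether 320 children simply didn't fit in RAM, which pushes the box into swap and keeps the load high even after dropping the setting back. A rough sketch (the process name httpd is an assumption; it may be apache2 on some builds):
Code:
ps -C httpd -o rss= | awk '{sum+=$1; n++} END {if (n) printf "avg RSS: %.1f MB over %d procs\n", sum/n/1024, n}'
free -m          # check the swap line; heavy swap usage explains load that stays high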
How can you tell if your server was the victim of a DDoS attack? The server load goes so high that you cannot get in via SSH, and the server has to be power cycled to get back online. When it comes back online, the load goes back up, we shut Apache down, let the load come back down, and then restart Apache, and everything is fine. Is there any way to determine what caused the load to skyrocket? The system has APF, BFD, and mod_evasive installed.
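One thing that can be done while the box is still reachable (or from a console) is to count connections per remote IP; a flood usually shows a handful of IPs holding hundreds of connections each. A sketch using the classic net-tools netstat, plus the access log (the log path is an assumption):
Code:
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
tail -5000 /var/log/httpd/access_log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20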
I have one large website. When I run it on a single server (by one server I mean the website working with MySQL and httpd together, no external server), its load stays around 1-5. But when I put MySQL on another server, the load gets high and I get a high number of httpd connections. Also, when I look at the number of connections between the two servers, it's very high on one of them and low on the other, and I am constantly suffering from load.
Here are the details: 192.168.0.1 is the MySQL server, 192.168.0.2 is the httpd server.
Now I see this in the CPU/Memory/MySQL Usage report:
--------------------------------------------
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/server.nogomhost.net.pid --skip-external-locking
--------------------------------------------
This "mysql" process was using 90% and 80% of the CPU.
And every account shows SQL usage of 0.0, so why is MySQL under load?
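To see what the remote MySQL box is actually doing during those spikes, the process list and counters from mysqladmin are a reasonable starting point. Just a sketch, assuming root credentials on the 192.168.0.1 server:
Code:
mysqladmin -h 192.168.0.1 -u root -p processlist
mysqladmin -h 192.168.0.1 -u root -p extended-status | grep -E 'Threads_connected|Slow_queries|Aborted_connects|Max_used_connections'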
2nd problem <<<<
Sometimes my VPS goes down, and I don't know why.
3rd problem <<<
My VPS is not working for some clients; I allow their IPs in APF after the users send them to me.
I am using rsync to transfer files (tar.gz) between servers. However, it increases the server load by 3-4. Normally the server load is around 1, but during the transfer it can go up to 5+.
Is there any way to reduce the load when doing rsync?
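A sketch of the usual knobs for this: rsync's own bandwidth limit plus a lower CPU/IO priority. The 10000 KB/s limit, the paths, and user@remote are just example values, and ionice's idle class needs the CFQ scheduler:
Code:
ionice -c3 nice -n 19 rsync -av --bwlimit=10000 /backup/*.tar.gz user@remote:/backup/
Since the files are already .tar.gz, leaving rsync's -z compression off also keeps the CPU cost down.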
I am just a beginner, and for about a month the MySQL load on my server has been very high. I have checked the MySQL process list via cPanel, and there is no account with high MySQL usage that would push it up, and I have even checked for any possibly corrupted databases. But the server load is still very high and I cannot control it; even when I restart MySQL, it goes up again after just a minute.
Only the spamd process takes up high CPU usage sometimes. What could that be from? It's spamd running for one particular account!
I need to optimize MySQL and need help. Please do not tell me to hire an expert; just help me, thanks.
And another question: how can I check which account is sending spam, and stop it?
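For the spam question, on a cPanel/Exim box the queue summary plus the cwd= entries in the Exim log usually point at the account or script doing the sending. A rough sketch; the log path is the usual cPanel default and may differ:
Code:
exim -bpc                                    # how many messages are sitting in the queue
exim -bp | exiqsumm | head -20               # queue summarized by domain
grep 'cwd=' /var/log/exim_mainlog | awk -F'cwd=' '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -rn | head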
I have to move about 50 GB each night from one server to another.
This is the command I'm using:
Code:
/bin/nice -n +19 scp -c blowfish -l 18000 -P 22222 root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz ./
There is no private LAN, so I have to use the internet. I'm using blowfish, which should reduce CPU load, and I'm also limiting the transfer bandwidth using -l 18000. However, I still see some high load averages.
Do you have any other suggestion to optimize CPU performance while running scp?
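One variation sometimes tried in this situation, purely as a sketch: add ionice so the copy yields disk priority, switch to a cheaper cipher, and keep compression off since the archives are already gzipped. (arcfour availability depends on the OpenSSH build, and ionice's idle class needs the CFQ scheduler.)
Code:
/bin/nice -n +19 ionice -c3 scp -c arcfour -o Compression=no -l 18000 -P 22222 root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz ./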
Do you know what's going on? My server is having a very strange problem: the server load suddenly increases every 2 or 3 days, sometimes after just 1 day. On the days when the server load is fine, it is very low, around 0.1 to 0.4.
But on a high-load day, the server load has reached up to 500.
When I try to find out what's wrong, all I can see is that there are too many HTTP connections. When I kill httpd with killall -KILL httpd, the server load suddenly decreases and then stabilizes.
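Before killing httpd next time, it may help to capture a snapshot of what it was serving and who was connected, so there's something to compare across the bad days. A sketch, assuming mod_status is enabled (apachectl fullstatus needs a text browser such as lynx installed):
Code:
apachectl fullstatus > /root/spike-$(date +%F-%H%M).txt
netstat -nt | awk '$4 ~ /:80$/ {split($5,a,":"); print a[1]}' | sort | uniq -c | sort -rn | head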
I have a VPS with Future Hosting and recently I have been getting more and more notifications from LFD regarding high CPU load. For example:
Time:                    Sun Jun 14 06:50:48 2009 -0500
1 Min Load Avg:          9.47
5 Min Load Avg:          6.25
15 Min Load Avg:         3.68
Running/Total Processes: 2/105
I am getting at least one of these a day now and I am also getting alerts about services failing, SPAMD in particular but also EXIM (and messages about LFD being unable to determine the exim queue length). External monitors are also warning me about SMTP timeouts during the same time period that I get the "high load" errors.
Tech support seems a bit stumped by this one and ALWAYS comes back with "load looks fine right now". With the frequency of the warning emails increasing, I am getting very concerned about the stability of my VPS.
I am not running anything significant on my VPS yet, with minimal visitors and load (RAM usage consistently stays below 300 MB on a VPS with 1+ GB of RAM).
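Since support only ever looks after the spike has passed, it might help to have the VPS record evidence automatically whenever the 1-minute load crosses a threshold. A sketch for root's crontab; the threshold of 5 and the log path are arbitrary choices:
Code:
* * * * * awk '{exit !($1 > 5)}' /proc/loadavg && { date; uptime; ps axo pid,pcpu,pmem,etime,comm --sort=-pcpu | head -15; } >> /var/log/load-spikes.log
That way, the next "high load" LFD email has a matching entry showing exactly which processes were on top at the time.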