I have to move about 50 GB each night from one server to another.
This is the command I'm using:
Code:
/bin/nice -n +19 scp -c blowfish -l 18000 -P 22222 root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz ./
There is no private LAN, so I have to go over the internet. I'm using blowfish, which reduces CPU load, and I'm also limiting the transfer bandwidth with -l 18000. However, I still see some high load averages.
Do you have any other suggestions for reducing CPU load while running scp?
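One approach worth testing (a sketch, not a drop-in replacement): rsync only copies files that have changed and can resume partial transfers, and its --bwlimit takes KB/s, so 2250 roughly matches scp's -l 18000 (Kbit/s). The arcfour cipher is lighter on the CPU than blowfish, assuming the remote sshd still allows it; the host and paths below are just the placeholders from the post.
Code:
# Pull only new/changed archives; --bwlimit is in KB/s (18000 Kbit/s ≈ 2250 KB/s)
/bin/nice -n 19 rsync -av --partial --bwlimit=2250 \
  -e "ssh -p 22222 -c arcfour" \
  root@XX.XX.XX.XX:/backup/dailybacks/ ./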
I have a new quad-core server which has a high load average. It's only hosting about 200 sites, and they are not database-driven sites; only a few use MySQL.
I use CSF on a VPS with 512 MB RAM and 1024 MB burst, and the other day I received the notification below. My host said it was Mailman, and since I don't use mailing lists the recommendation was to disable it, so I did. I'm curious, though, as to why this happened in the first place.
Time: Wed Mar 19 17:53:33 2008
1 Min Load Avg: 11.41
5 Min Load Avg: 6.37
15 Min Load Avg: 2.70
Running/Total Processes: 12/94
...
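For reference, a generic way to catch the culprit when an alert like this fires is to snapshot the heaviest processes at that moment (a quick manual check, not part of CSF's own output; Mailman 2's queue runners typically show up as qrunner processes):
Code:
# Top CPU consumers at the time of the spike
ps aux --sort=-%cpu | head -n 10
# How many Mailman queue runners are active
ps aux | grep -c "[q]runner"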
Two of our servers are suddenly experiencing high I/O wait times and high load averages during the backup process. During this period Plesk grinds to a halt, sometimes crashing out completely (although SSH is still possible). We have been in talks with our server suppliers (assuming this would be node-related); however, they have done a lot of testing and categorically state the node is fine, with no other users affecting it.
STEPS TO REPRODUCE: We back up the server using the scheduled backup service, and I/O wait immediately goes up.
ACTUAL RESULT: Plesk downtime / Website downtime
EXPECTED RESULT: No downtime, successful backup
Some other info: all the other processes (MySQL, Apache, Nginx, etc.) are running at between 1-10% CPU.
Partition "/usr" utilization: 4.2% used (1.81 GB of 43.3 GB)
Partition "/var" utilization: 50.6% used (61.8 GB of 122 GB)
We are struggling to identify what has changed on the server to cause this sudden behaviour.
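As a general illustration (not Plesk-specific), the disk pressure can be watched while the backup runs, and a scheduled backup job can be wrapped in ionice/nice so it yields to normal traffic; the tar command below is a placeholder, not Plesk's actual backup process:
Code:
# Watch per-device load while the backup runs; %util near 100 means the disk is saturated
iostat -x 5
# Placeholder backup command run at idle I/O priority and lowest CPU priority
ionice -c3 nice -n 19 tar czf /backup/vhosts.tar.gz /var/www/vhosts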
Could everyone take a look at this (captured while all domains were turned off) and give me advice on what is eating up resources so badly? The load average is at 1.3+ with only yum running on the box, and it has already been running for nearly two hours, when this would previously have taken 10-15 minutes at most.
Host: FutureHost
Type: VPS
DC: Dallas
Plan: VPS Elite with 1.5 GB of memory
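One generic check when the load is high but almost nothing is running: the load average also counts processes stuck in uninterruptible (D) sleep, usually waiting on disk or NFS, so a box can show load 1.3+ while nearly idle on CPU. A quick sketch:
Code:
# Processes in uninterruptible sleep (state D) inflate the load without using CPU
ps -eo state,pid,cmd | awk '$1 == "D"'
# The wa (iowait) figure in the CPU summary line will confirm a disk bottleneck
top -bn1 | head -n 5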
I have a VPS, and I want to know whether the CPU / load average shown in my Plesk control panel reports only the usage of my share of the server's CPU, or the real/complete CPU usage of the entire server.
To put it simply: if another user with a VPS on the same server is running a high-CPU task on his VPS, will it show up in my Plesk CPU load average?
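For what it's worth, the answer generally depends on the virtualization. In OpenVZ-style containers, /proc/loadavg inside the VPS reflects only that container's processes; Xen/KVM guests run their own kernels, so the load is their own, but CPU time taken by neighbours shows up as steal time. Assuming standard tools are installed, a quick look at both:
Code:
# Load average as seen from inside this VPS
cat /proc/loadavg
# On Xen/KVM, a persistently high st (steal) column points at busy neighbours
vmstat 5 3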
I have a server with a Core2Duo E6600 CPU, 4 GB of RAM and 2x250 GB HDDs. What is the "normal" load average I can expect with ~50 websites of medium traffic running on it? My current load averages (and the typical ones most of the time) are: load average: 0.24, 0.13, 0.16
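As a rough rule of thumb, a load average at or below the number of CPU cores means there is headroom; the E6600 has two cores, so a sustained load under 2.0 is comfortable, and 0.24 is practically idle. To compare the two figures:
Code:
# Number of cores vs the 1/5/15-minute load averages
grep -c ^processor /proc/cpuinfo
uptime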
I have a site that is eating up my server's resources, and I need to know the best solution. I'm thinking of getting another server just for MySQL, but I don't know what specs that server would need to handle the current traffic/database load and keep the site running smoothly, without slowing to a snail's pace.
An alternative is to get another server just for serving the videos, leaving the database and HTML on the current server. This is where I'm stuck; I don't know which route to take.
I've attached screenshots of top and per-day bandwidth usage. Hopefully with this information you can tell me whether I need another server, or whether there is anything I can do on the current one to help things move faster.
I have been running into a high-load problem lately. I have one of those cheap 1and1 servers, which was running fine until two weeks ago. When I accidentally rebooted it, it did not come back up due to some unrepairable kernel errors, and I had to re-image it.
I chose to re-image the server with CentOS 5 for better support. The new image worked fine for some days, or at least so I thought, and now I am getting high loads. The server crashes if not monitored constantly, as the load is unpredictable.
Just restarting Apache brings the server back to normal, but I am not sure whether Apache or some other script is to blame. I have been monitoring through Apache's server-status, but I cannot spot anything unusual during the high-load moments.
Code:
12:00:29 AM CPU %user %nice %system %iowait %steal %idle
12:10:01 AM all 9.14 0.00 5.52 44.66 0.00 40.68
12:20:14 AM all 6.83 0.00 3.98 27.88 0.00 61.32
12:30:10 AM all 6.44 0.00 4.20 81.25 0.00 8.11
12:40:09 AM all 5.25 0.00 4.09 81.93 0.00 8.73
12:50:15 AM all 5.11 0.00 3.79 90.74 0.00 0.36
01:00:07 AM all 7.22 0.00 4.52 57.11 0.00 31.15
01:10:13 AM all 6.89 0.00 4.01 55.38 0.00 33.71
01:20:14 AM all 4.37 0.00 3.27 41.88 0.00 50.48
01:30:25 AM all 4.26 0.00 3.29 63.42 0.00 29.03
01:40:06 AM all 27.18 0.00 4.75 58.27 0.00 9.80
01:50:03 AM all 29.64 0.00 6.61 51.50 0.00 12.25
02:00:07 AM all 27.00 0.00 8.48 55.49 0.00 9.03
02:10:10 AM all 19.29 0.00 4.97 73.80 0.00 1.94
02:20:04 AM all 37.85 0.00 6.78 40.70 0.00 14.67
02:30:05 AM all 15.65 0.00 4.80 68.47 0.00 11.08
02:40:08 AM all 9.06 0.00 5.60 37.49 0.00 47.86
02:50:07 AM all 5.36 0.00 3.62 42.29 0.00 48.73
03:00:02 AM all 6.05 0.00 4.08 47.27 0.00 42.60
03:10:02 AM all 4.22 0.00 3.68 38.17 0.00 53.93
03:20:02 AM all 4.06 0.00 3.75 41.37 0.00 50.82
03:30:22 AM all 4.42 0.00 3.93 45.25 0.00 46.41
03:40:11 AM all 4.34 0.00 3.95 39.58 0.00 52.13
03:50:02 AM all 4.67 0.00 4.01 32.53 0.00 58.80
04:00:08 AM all 3.72 0.00 3.87 28.40 0.00 64.02
04:10:02 AM all 13.49 0.00 6.58 20.82 0.00 59.10
04:20:01 AM all 6.70 0.00 4.63 6.06 0.00 82.61
04:30:02 AM all 1.44 0.00 1.21 4.75 0.00 92.59
04:40:01 AM all 12.42 0.00 8.12 7.65 0.00 71.81
04:50:02 AM all 1.43 0.00 1.07 4.02 0.00 93.47
05:00:02 AM all 1.60 0.00 1.40 8.62 0.00 88.38
05:10:10 AM all 3.80 0.00 3.02 17.86 0.00 75.32
05:20:06 AM all 5.10 0.00 4.22 23.34 0.00 67.34
05:30:02 AM all 1.54 0.00 1.40 11.22 0.00 85.85
05:40:05 AM all 1.75 0.00 1.89 13.12 0.00 83.23
05:50:12 AM all 2.15 0.00 2.22 18.92 0.00 76.72
06:00:02 AM all 1.92 0.00 2.01 12.87 0.00 83.20
06:10:02 AM all 2.27 0.00 2.16 11.53 0.00 84.04
06:20:03 AM all 3.56 0.00 3.02 25.26 0.00 68.16
06:30:10 AM all 2.66 0.00 2.05 18.13 0.00 77.16
06:40:02 AM all 2.58 0.00 2.25 22.87 0.00 72.30
06:50:02 AM all 2.68 0.00 1.92 15.77 0.00 79.63
07:00:03 AM all 3.06 0.00 2.48 26.01 0.00 68.46
07:10:03 AM all 3.65 0.00 3.20 36.54 0.00 56.61
07:10:03 AM CPU %user %nice %system %iowait %steal %idle
07:20:03 AM all 4.40 0.00 3.28 43.86 0.00 48.46
07:30:02 AM all 4.10 0.00 3.17 31.30 0.00 61.43
07:40:06 AM all 7.67 0.00 3.95 50.79 0.00 37.59
07:50:02 AM all 4.72 0.00 3.11 44.30 0.00 47.86
08:00:03 AM all 5.57 0.00 3.72 47.15 0.00 43.56
08:10:07 AM all 10.66 0.00 3.59 71.62 0.00 14.13
08:20:17 AM all 5.67 0.00 3.42 58.81 0.00 32.10
08:30:10 AM all 11.12 0.00 3.49 76.71 0.00 8.67
08:40:03 AM all 7.00 0.00 3.36 47.94 0.00 41.71
Average: all 7.53 0.00 3.76 38.90 0.00 49.81
Some configuration details: the re-image partitioning looks like this:
Code:
processor : 1
vendor_id : GenuineIntel
cpu family : 15
model : 3
model name : Intel(R) Pentium(R) 4 CPU 2.80GHz
stepping : 4
cpu MHz : 2793.324
cache size : 1024 KB
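Given how dominant %iowait is in the sar output above, one sketch for narrowing it down is per-process and per-device I/O accounting (pidstat needs a kernel with per-task I/O accounting, which a stock CentOS 5 kernel may lack):
Code:
# Per-process disk I/O every 5 seconds, if the kernel supports it (sysstat's pidstat)
pidstat -d 5
# Per-device transfer stats as a fallback, to see which disk is saturated
sar -d 5 3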
So my server has been "unresponsive" for about 18 hours, Burst.net didn't answer my tickets, and I don't know what to do. I've been running this setup for almost 5 months with no problems, and no changes have been made to the hardware or software.
My VPS holds about 80 domains, all low-use accounts.
Every night, from around 1.30am, the load suddenly skyrockets and will usually be around 5 to 10 for a few hours. Occasionally it'll spike to 30+ for a few minutes.
I had some antispam software running, plus a couple of other packages (mail queues, mail management, etc.), so I disabled all of that and removed all the crontab entries.
It's not really made any difference.
I can see the load stats going back 8 hours as part of the ASSP spam package (I've left the ASSP server-load cron job running just so I can continue monitoring it!).
Can the apparent load on my VPS be caused by other VPSes on the same node? So in reality my load is fine, but it is being affected by other people's VPSes?
I hope that makes sense. I'm 99% sure that my VPS is 'clean' (as far as cron entries go).
I'm asking because I took a second VPS on the same node, and that one also has high loads overnight even though there's nothing running on it (i.e. no add-on software, no cPanel accounts added).
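Since the spikes hit at predictable hours, one sketch is to compare your own CPU use against steal time overnight; if %steal is high around 1.30am, the node's other guests are the likely cause (this assumes sysstat's sar data collection is enabled, and that the virtualization reports steal time at all):
Code:
# CPU breakdown for the early hours from the sysstat daily file
# (add -f /var/log/sa/saDD to read a previous day)
sar -u -s 01:00:00 -e 04:00:00
# Or collect your own overnight log: one sample per minute for 8 hours
vmstat -n 60 480 >> /var/log/overnight-vmstat.log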
I have a VPS with the LXadmin control panel and 1024 MB of RAM.
Memory usage is normally 333-450 MB, but now it is more than 800 MB and the VPS stopped. I restarted it many times, but after each restart the memory went crazy again.
So, how do I find out what is causing the high load and eating my memory?
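A generic starting point (not LXadmin-specific) is to list the processes holding the most resident memory and watch for one that keeps climbing:
Code:
# Top memory consumers by resident set size (RSS column, in KB)
ps aux --sort=-rss | head -n 15
# Overall picture: free memory plus swap-in/swap-out (si/so) every 10 seconds
free -m
vmstat 10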
I was previously using MaxClients 256 in httpd.conf, and the load was normal even when the server was processing 256 requests.
Recently I changed it to 320, because of which my load increased a lot. Now that I have decreased it back to 256, the load is still high even though the number of requests is only around 150.
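For sizing MaxClients, the usual back-of-the-envelope check is the average Apache worker's resident memory against available RAM; at 320 workers the box may have dipped into swap, and swap-in activity can keep the load high even after the limit is reduced. A sketch (prefork assumed; the process may be named httpd or apache2):
Code:
# Average resident memory per Apache worker, in MB
ps -o rss= -C httpd | awk '{sum += $1; n++} END {if (n) printf "%d workers, %.1f MB average\n", n, sum / n / 1024}'
# Swap activity: nonzero si/so columns mean the box is paging
vmstat 5 3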
How can you tell whether your server was the victim of a DDoS attack? The server load goes so high that you cannot get in via SSH, and the server has to be power-cycled to get back online. When it comes back online the load climbs right back up, so we shut Apache down, let the load settle, then restart Apache, and everything is fine. Is there any way to determine what caused the load to skyrocket? The system has APF, BFD, and mod_evasive installed.
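Before the box locks up again (or right after stopping Apache), one common check is to count connections per source IP; a flood usually shows a few addresses holding hundreds of connections, while a SYN flood leaves many sockets in SYN_RECV. A sketch:
Code:
# Connections per remote IP, highest first (skip the two netstat header lines)
netstat -ntu | awk 'NR > 2 {split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn | head -n 20
# SYN-flood fingerprint
netstat -nt | grep -c SYN_RECV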