We have a dual Xeon 2.8 GHz cPanel server with 2 GB RAM; normally the load is well under 2 and stable.
We also use incremental backup, with "Per Account Only" selected for the MySQL backup.
This server mainly hosts one big site, whose SQL database is 1.2 GB.
Every time the server runs a backup, the load bursts to 7 and access to the website hangs. We are thinking of changing the SQL backup method to "Entire MySQL Directory", but we were told that the MySQL server is stopped while that backup is processing. Will "Entire MySQL Directory" reduce the server load during the SQL backup?
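On the question of backup methods: rather than switching to a method that stops MySQL, a common way to cut the load of the per-account dump is to run mysqldump at the lowest CPU and disk priority, with --single-transaction so InnoDB tables are dumped consistently without locking. This is a sketch of that idea, not cPanel's own mechanism; the database name and output path are placeholders:

```python
import shlex

def low_priority_dump_cmd(database, outfile):
    """Build a shell pipeline that dumps one database at low priority.

    nice -n 19           -> lowest CPU priority
    ionice -c3           -> "idle" I/O class, so web traffic wins disk access
    --single-transaction -> consistent InnoDB dump without table locks
    """
    return (
        "nice -n 19 ionice -c3 "
        f"mysqldump --single-transaction {shlex.quote(database)}"
        f" | gzip > {shlex.quote(outfile)}"
    )

# Hypothetical database and destination, for illustration only.
print(low_priority_dump_cmd("bigsite_db", "/backup/bigsite_db.sql.gz"))
```

Note that ionice -c3 only has an effect with an I/O scheduler that honours priorities (CFQ on kernels of that era), and --single-transaction helps only for InnoDB; MyISAM tables are still locked while dumped.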
2) Open Internet Information Services (IIS) Manager > right-click "Web Sites" and select Properties > click the Service tab > under HTTP Compression, select "Compress application files" and "Compress static files".
3) Use eAccelerator (a PHP accelerator, optimizer, and dynamic content cache) with these options;
4) Don't load these extensions: extension=php_mbstring.dll, extension=php_domxml.dll, extension=php_xslt.dll
Only use these extensions in php.ini: extension=php_sqlite.dll, extension=php_curl.dll, extension=php_gd2.dll, extension=php_gettext.dll, extension=php_iconv.dll, extension=php_imap.dll, extension=php_mssql.dll, extension=php_sockets.dll, extension="eaccelerator.dll", and set upload_tmp_dir = "C:\WINDOWS\Temp"
These settings were tested on Windows Server 2003 SP2, IIS 6, with PHP v4.4.7.
I have my WHM/cPanel installation configured with daily and weekly backups. I checked at what time of the day the server load was at the minimum and configured the cPanel backup cron to run then.
The problem now is: Backing up a few hundred accounts results in a high server load. My server configuration:
Dual Processor Quad Core Xeon 5335 2.0GHz with 4GB RAM and 2 x 250GB SATA HDD hosted at SoftLayer.
The accounts are located on the first HDD and the backup archives are placed on the second HDD.
What can I do about this? I'd like to take daily backups of all accounts, but not if my server load climbs to 10... That kind of renders the cPanel backup feature useless if it doesn't even work on a powerful server like this one.
Would it help to use an application such as Auto Nice Daemon to give the backup process a lower priority? Then again, that wouldn't cover the MySQL dumps. And I suspect it's not a CPU problem but an I/O wait problem: other processes have to wait for disk access because the disk-intensive backup process is running.
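Before reaching for Auto Nice Daemon, it is worth confirming the I/O-wait theory with numbers. The aggregate "cpu" line in /proc/stat exposes cumulative iowait ticks (fifth field after the label), so two snapshots taken during the backup window show how much CPU time is lost waiting on disk. A minimal sketch over synthetic sample lines:

```python
def iowait_percent(before, after):
    """Percentage of CPU time spent in iowait between two samples of the
    aggregate 'cpu' line from /proc/stat.

    Fields after the 'cpu' label: user nice system idle iowait irq softirq ...
    """
    b = [int(x) for x in before.split()[1:]]
    a = [int(x) for x in after.split()[1:]]
    total = sum(a) - sum(b)
    iowait = a[4] - b[4]
    return 100.0 * iowait / total

# Synthetic samples: 50 of the 130 ticks elapsed were spent in iowait.
sample1 = "cpu 100 0 50 800 50 0 0 0"
sample2 = "cpu 120 0 60 850 100 0 0 0"
print(iowait_percent(sample1, sample2))
```

If this number spikes while the backup runs, CPU priority alone won't help; on Linux, ionice (idle I/O class) on the backup process is the lever that addresses disk contention.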
How can I decrease the server load when the daily backup starts? Before the backup, the load is between 0.80 and 1.20; after the daily backup starts, I see very high load, from 16 up to 32 and even 40.
Is there any solution to bring the load down by a lot when the backup starts?
Two of our servers are suddenly experiencing high I/O wait times and high load averages during the backup process. During this period Plesk grinds to a halt, sometimes crashing out completely (although SSH is still possible). We have been in talks with our server suppliers (assuming this would be node related); however, they have done a lot of testing and categorically state the node is fine, with no other users affecting it.
STEPS TO REPRODUCE: We back up the server using the scheduled backup service, and the I/O wait immediately goes up.
ACTUAL RESULT: Plesk downtime / Website downtime
EXPECTED RESULT: No downtime, successful back up
Some other info: all other processes (MySQL, Apache, Nginx, etc.) are running at between 1-10%.
Partition "/usr" utilization: 4.2% used (1.81 GB of 43.3 GB)
Partition "/var" utilization: 50.6% used (61.8 GB of 122 GB)
We are struggling to identify what has changed on the server that would cause this sudden change.
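One generic mitigation while the supplier investigates: run the backup job itself in the idle I/O class so it only gets disk time the web stack is not using. A sketch of the wrapper pattern (the backup path is a placeholder, since Plesk's backup entry point varies by version):

```python
def wrap_low_priority(cmd):
    """Prefix a command so it runs at the lowest CPU priority and in the
    idle I/O class. ionice -c3 requires an I/O scheduler that honours
    priorities (CFQ, the default on many distributions of this era)."""
    return f"nice -n 19 ionice -c3 {cmd}"

# Hypothetical backup invocation; the real Plesk entry point differs by
# version, so treat this path purely as a placeholder.
print(wrap_low_priority("/usr/local/bin/run-backup"))
```

The same wrapper works for any cron-driven job that competes with the web server for the disk.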
Last year my web host stated that my site was over-utilizing the allowed resources for my plan; specifically, that I was overusing the CPU. At the time, I had to upgrade my plan in order to stay online. I would like to move to a new host, but the prospective hosts all suggest a dedicated server because of my CPU usage. I don't want to pay that kind of money, so I would really like to curb the CPU problem. Does anyone know how to reduce the CPU usage of a WordPress blog? I tried posting this question over at WordPress.org, but I haven't received a single reply.
Now I am seeing CPU usage of 0.50-1.20, and it keeps increasing. I host only 6 sites, of which only one uses MySQL; all the other sites are plain download sites. How do I reduce the CPU load? Can you help me figure out this issue and give me some tips to reduce CPU usage?
I have smokeping monitoring my game servers, and so far, in the little time it has been running, all my game servers have averaged 4 to 10% packet loss. Are there any tweaks I can apply on the server (registry modifications, etc.) to reduce packet loss?
I downloaded a TCP tweak program called "TCP Optimizer"; is it safe to run on Windows Server 2003?
The colo connection is an OC-192 and I have a 100 Mbit Ethernet card.
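Before applying registry tweaks, it helps to quantify the loss from the server itself rather than only from smokeping, since loss measured end-to-end may originate anywhere on the path. A sketch that parses the statistics block printed by Windows `ping` (the sample string mirrors the Windows Server 2003 output format):

```python
import re

def windows_ping_loss(output):
    """Extract sent/received counts from the statistics line of Windows
    `ping -n <count> <host>` output and return the loss percentage."""
    m = re.search(r"Sent = (\d+), Received = (\d+)", output)
    sent, received = int(m.group(1)), int(m.group(2))
    return 100.0 * (sent - received) / sent

stats = "Packets: Sent = 100, Received = 96, Lost = 4 (4% loss),"
print(windows_ping_loss(stats))  # -> 4.0
```

Running this against the default gateway, the colo edge router, and a remote host separately narrows down which hop is actually dropping packets.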
I know there is no device that can fully protect you from DDoS attacks, but I wonder which one is best at helping you mitigate them? Perhaps one intelligent enough to "feel" the attacks coming? Brand names from Cisco, Foundry, Nokia...?
Say I have two different hosting accounts at two different web hosts: one at host1.com and another at host2.com. I keep the same files on both hosts. I use an external registrar and set the name servers for one of my domains as follows:
ns1.host1.com
ns1.host2.com
ns2.host1.com
ns2.host2.com
What would happen in that case if, say, host1 is down at some point? Will the name servers fail over to host2.com?
If this worked, the probability of downtime for a site would become almost zero. Would Google like this?
Another question: how can I easily synchronize both cPanel accounts?
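On synchronizing the two accounts: apart from cPanel's own transfer tools, a common low-tech approach is a cron job on the standby host that pulls the account's files with rsync over SSH (databases still need a separate dump/import step). A sketch with the username, hostname, and paths as placeholders:

```python
import shlex

def rsync_account_cmd(user, primary_host):
    """Command to run on the standby host: mirror one cPanel account's home
    directory from the primary. --delete keeps the mirror exact; -z
    compresses over the wire. Paths assume the stock /home/<user> layout."""
    src = f"{user}@{primary_host}:/home/{user}/"
    dst = f"/home/{user}/"
    return f"rsync -az --delete -e ssh {shlex.quote(src)} {shlex.quote(dst)}"

# Hypothetical account and host, for illustration only.
print(rsync_account_cmd("bob", "host1.example.com"))
```

Scheduled hourly via cron, this keeps the file trees close enough that a DNS failover lands visitors on near-identical content.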
A few months ago I bought a new small VPS box (OpenVZ, 128 MB RAM) in order to run a new monitoring node for my site monitoring system. Such a small amount of RAM is a challenge for operating system optimisation techniques (OpenVZ doesn't have "swap" the way Xen does).
First of all, I discovered that apache2-mpm-worker (the Apache implementation that uses threads) consumes more memory (100 MB) than the classic version that uses separate processes (20 MB), so I had to switch to apache2-mpm-prefork.
Next unpleasant surprise: a small Python app eats 100 MB of virtual memory! I confirmed that virtual (not resident) memory is what the VPS counts against the limit. I applied some tools to locate the memory bottleneck, but without success. Next I added logging of current memory usage to track down the call causing the big memory consumption, and found that the following line:

server = WSGIServer(app)

was guilty of the high memory increase. After a few minutes of googling I located the problem: the default stack size for a thread. The details:

- This line creates a few threads to handle concurrent calls.
- Stack size is counted towards virtual memory.
- The default stack size is very high on Linux (8 MB).
- Every thread uses a separate stack, so a multi-threaded application will use at least number_of_threads * 8 MB of virtual memory!
First solution: use the limits.conf file. I altered /etc/security/limits.conf and changed the default stack size, but I couldn't get this change to apply to Python scripts called from Apache (any suggestions why?).
Second (working) solution: lower the default stack size using ulimit. For processes launched from Apache, I altered the /etc/init.d/apache2 script and added:
ulimit -s 256
Now every thread (in the Apache / Python application) will use only 256 kB of virtual memory (I lowered the VSZ from 70 MB to 17 MB this way). Now I have room to enlarge the MySQL buffers to make DB operations faster.
There's an even better place to inject the ulimit system-wide: you can insert the call in the /etc/init.d/rc script. Then the ulimit will be applied to all daemons (such as Apache) and all login sessions. I reduced virtual memory usage by 50% this way.
Note: you may need to increase the stack size if you hit stack overflow errors. In my opinion 256 kB is a safe option for most systems; increase it if in doubt. Either way, the memory savings are big.
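The same fix can also be applied from inside Python, which helps when a ulimit set in an init script doesn't reach your interpreter: the threading module lets you set the per-thread stack size before any threads are spawned. A small sketch using the 256 kB value discussed above:

```python
import threading

# Set the stack size for all threads created after this call.
# 256 kB matches the ulimit value above; the allowed minimum is
# platform-specific (32 kB on most Linux builds).
threading.stack_size(256 * 1024)

results = []
t = threading.Thread(target=lambda: results.append("ran"))
t.start()
t.join()
print(results)  # -> ['ran']
```

For a WSGI server, the call simply has to happen before the server object spins up its worker threads.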
It's 2 AM here, and my sites are down... there is no way I have too much traffic at midnight, and all my websites are new!
This has been happening consistently since this morning, and I'm getting no support apart from jargon-filled replies from customer care.
How do I tweak the Apache settings, and which settings do I change, to avoid this?
I'm also wondering what will happen in a few months when my websites actually have good traffic coming in.
We have checked your server. Please see the load average and process list given below:
The value 4.42 was the CPU load average at the time; a normal load should be below 1.00. I can see that the Apache service is causing the high load on your server, so you can tune Apache in order to reduce the CPU load. Please check and let us know if you need any further assistance.
++++++++++++++++++
[root@chi07 ~]# vzctl exec 18403 w
 03:16:20 up 2 min, 0 users, load average: 4.42, 1.42, 0.50
USER     TTY      FROM     LOGIN@   IDLE   JCPU   PCPU   WHAT
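"Tweak Apache" concretely usually starts with capping prefork's MaxClients so Apache cannot spawn more children than RAM can hold; once the container starts swapping or hitting its memory limit, load averages like 4.42 follow. A back-of-the-envelope sketch (the per-child and reserved figures are assumptions; measure your own with ps):

```python
def max_clients(total_ram_mb, reserved_mb, per_child_mb):
    """Rough prefork MaxClients cap: RAM left after MySQL/OS overhead,
    divided by the average resident size of one Apache child."""
    return (total_ram_mb - reserved_mb) // per_child_mb

# e.g. 2 GB RAM, 512 MB reserved for MySQL and the OS, ~25 MB per child
print(max_clients(2048, 512, 25))  # -> 61
```

The result goes into the prefork section of httpd.conf as MaxClients (and ServerLimit on Apache 2.x); turning KeepAlive off or lowering KeepAliveTimeout frees children faster under load.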
I have registered a domain with GoDaddy. I host my site on a server at my shared hosting provider (let's call them X). Currently mydomain.com points to that server and it is up and running. Sometimes, though, I have experienced downtime.
To solve this problem, I have hosted a clone of my site on another server from another hosting company (let's call them Y).
1. I want mydomain.com to point to Y when X is down,
2. and to point back to X when it is up again.
My main aim is to keep my site live with less downtime; the probability of both servers being down at the same time is very low.
I don't know if this is technically feasible; it's just a thought. I tried to Google it but was not able to find an answer specific to my problem.
Can anybody tell me how to achieve this with a GoDaddy domain?
One added note: my site is not a commercial site, and I can't afford large dedicated servers with clustering and failover.
I would like to know if there's any way to reduce the "conversion times" for videos converted with mencoder and the x264 codec.
Is it possible to cluster 2-3 or more servers (quad core / 8 GB RAM) so that I can reduce the conversion times effectively?
The original videos are around 500-700 MB on average, and I'd like to convert them with mencoder to the x264 codec at a 500 Kbps bitrate with 2-pass settings.
Of course it takes at least 1-2 hours to encode one such video at these settings on a single such server, so is there a way to reduce the conversion time to around 10-15 minutes per video by using "parallel encoding" with the x264 codec? P.S.: I know how to form a cluster, using Beowulf/Rocks, etc.; what I need help with is using the cluster with the x264 codec.
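Parallel encoding across a cluster usually means splitting the source by time, encoding each segment on a different node (for example with mencoder's -ss seek option plus a segment duration), and concatenating the results afterwards. A sketch of the job-splitting arithmetic; the duration and node count are just examples:

```python
def segment_jobs(duration_s, nodes):
    """Split a video of `duration_s` seconds into `nodes` equal time ranges,
    one per cluster node, for parallel encoding. Each tuple is
    (start_second, end_second); cutting cleanly on keyframes is left to
    the encoder invocation."""
    step = duration_s / nodes
    return [(round(i * step, 3), round((i + 1) * step, 3)) for i in range(nodes)]

# A hypothetical 60-minute video across 6 nodes -> six 10-minute jobs.
print(segment_jobs(3600, 6))
```

With six nodes each encoding a tenth-to-sixth of the runtime, a 1-2 hour job lands in the 10-20 minute range the question asks about, at the cost of slightly worse rate control near the segment joins (2-pass statistics are per-segment rather than global).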
I did a quick search on this and could not see it already posted.
It seems quite a clever but simple idea: remove a lot of the oxygen from the air to help reduce the risk of fire. What do those of you operating your own facilities make of this? Is anyone already doing it? [url]
I've been having trouble with my VPS for a while now. On the QoS alerts page in Virtuozzo, there seems to be a problem with numtcpsock and tcprcvbuf, mainly numtcpsock.
Copy this into the browser: i18.photobucket.com/albums/b106/gnatfish/qosnumtcpsock2.jpg
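From inside the container, /proc/user_beancounters shows which Virtuozzo/OpenVZ limits are actually being hit: the last column (failcnt) increments every time an allocation fails against the barrier or limit. A parsing sketch over a synthetic excerpt (the real file has an extra header line and a uid prefix on the first resource row):

```python
def failing_counters(beancounters_text):
    """Return {resource: failcnt} for every beancounter whose fail counter
    is non-zero, e.g. numtcpsock pinned at its barrier."""
    fails = {}
    for line in beancounters_text.splitlines():
        parts = line.split()
        # resource rows: name held maxheld barrier limit failcnt
        if len(parts) == 6 and parts[5].isdigit() and int(parts[5]) > 0:
            fails[parts[0]] = int(parts[5])
    return fails

# Synthetic excerpt: numtcpsock is held at its barrier and failing hard.
sample = """\
numtcpsock 360 360 360 360 21893
tcprcvbuf 1200000 1300000 1720320 2703360 142
numothersock 48 60 360 360 0
"""
print(failing_counters(sample))
```

A held value equal to the barrier together with a growing failcnt (as in the numtcpsock row above) means new TCP connections are being refused; either the limit needs raising by the provider or something on the VPS is leaking sockets.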
Does anyone know of some good (commercial) server load testers?
I'm not looking for application-based load testing; I need real web server load testing. I need to see how much traffic this one site can take before it cries.
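Alongside the commercial options, the usual quick answer for "how much traffic until it cries" is ApacheBench (ab, shipped with Apache) or siege. A sketch that builds an ab invocation; the URL and numbers are placeholders to ramp up between runs:

```python
def ab_cmd(url, total_requests=1000, concurrency=50):
    """ApacheBench invocation for raw web-server load testing:
    `concurrency` simultaneous connections until `total_requests`
    have completed."""
    return f"ab -n {total_requests} -c {concurrency} {url}"

print(ab_cmd("http://example.com/", 5000, 100))
```

Raising -c across successive runs while watching requests-per-second and failed-request counts shows where the site's throughput plateaus and then collapses.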
I'm having the oddest issue. For some reason, some of the websites on my server load fine, and some take a really long time to load (2 minutes).
Now, the server load is fine, and the size of the sites isn't the issue either. I've restarted Apache and a couple of other services, and still the same sites load very slowly.
What could be causing this, given that it's only affecting certain websites?
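When machine load is fine but specific sites stall, the next diagnostic step is to see where the two minutes actually go: DNS lookup, TCP connect, or time to first byte. curl's -w write-out variables break that down per request. A sketch that builds the invocation (the URL is a placeholder):

```python
def curl_timing_cmd(url):
    """curl invocation printing where the time goes for one request:
    DNS lookup vs connect vs time-to-first-byte vs total (seconds)."""
    fmt = ("dns:%{time_namelookup} connect:%{time_connect} "
           "ttfb:%{time_starttransfer} total:%{time_total}\\n")
    return ["curl", "-s", "-o", "/dev/null", "-w", fmt, url]

print(curl_timing_cmd("http://slow-site.example/"))
```

A large gap between connect and ttfb points at the application or a hung backend lookup for that vhost; a large dns figure points at the resolver. Running this once against a slow site and once against a fast one on the same box usually isolates the difference immediately.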