Alright, we just brought a new customer aboard with a chatbox on their site, and it seems the chatbox is causing higher load: we went from 0.00 to 0.68-1.12. This happened on our old server before too, with another chatbox. They use vBulletin. Any ideas as to what would be causing this?
I have a site that is eating up my server resources and need to know the best solution for this. I'm thinking of getting another server just for MySQL, but I don't know what specs the server should have to handle the current traffic/database load and keep the site running smoothly without slowing to a snail's pace.
An alternative is to get another server just for the videos being served and leave the database and html on the current server. This is where I'm stuck and don't know what route to take with this.
I've attached screenshots of top and bandwidth usage per day. Hopefully with this information you could tell me if I need another server or if there are any things I can do to the current server to help things move faster.
So... I just had an interesting chat over at BlueHost.
One of my clients' sites has been really sluggish lately, and I'm trying to patch and streamline what I can until we can get them migrated over to something more stable.
Anyway, I was looking over things in cPanel and saw that the Server Load was way in the red zone (over 24 on an 8 CPU box)! So I jumped into their live chat, thinking that was the best way to help bring it to their attention quickly.
Here's what I got instead:
Travis [10:13:19 AM]: box439's Server Load is skyrocketing -- in just the past three minutes it has gone from 21.24 (8 cpus) up to 34.12! Even now, it's "stabilizing" at 25 (which is WAY too much for an 8 cpu server).
Brent [10:13:46 AM]: actually, it's not. It's a percentage.
Brent [10:13:52 AM]: 25% of the load
Travis [10:14:03 AM]: No, it's not. 25 = 25 CPU equivalent.
Brent [10:14:32 AM]: no, really, it is.
Travis [10:15:24 AM]: Brent, anything over 8.5 generates the "red exclamation mark" icon on the server status page
Brent [10:16:13 AM]: yes it does, but it doesn't mean there is an overload
Travis [10:17:06 AM]: ((BlueHost's Server Status page)) says, "2009-06-01 09:10:45: Box under heavy load -- you may experience degraded system performance"
Brent [10:17:39 AM]: i didn't say it wasn't under a heavy load. I was trying to explain more of what that server cpu item means
Travis [10:18:25 AM]: from ((url)): "At this time of writing, red light is shown for the server loads where the numbers are 18.12 (8cpus). This means there are 18.12 processes in queue - which is over 2 times of CPU capacity."
Brent [10:18:49 AM]: that information is not correct, as we have modified the way our servers display the cpu.
[10:23:32 AM]: I apologize, but it seems you've stepped away. Due to other customers awaiting assistance I am going to have to end this chat. If you have additional questions, we are here for you at all times. Thank you for trying our Live Chat! If you have additional questions, many answers can be found through our Knowledgebase. If you have other issues arise please let us know over live chat or give our Ticket Center a try. Remember our World Class Support is just a click away!
So I'm wondering whether he was completely BS-ing me, or if BlueHost really did modify that output. (And if it's the latter, then why would 25% CPU usage bring the server to a slow crawl?)
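For what it's worth, on Linux the load average is a run-queue length, not a percentage: a sustained load of 25 on an 8-CPU box really does mean roughly three runnable processes per CPU. You can do the comparison yourself with nothing but /proc; this is a minimal sketch in plain POSIX shell:

```shell
#!/bin/sh
# Compare the 1-minute load average against the CPU count.
# overloaded LOAD NCPUS -> prints "yes" when LOAD exceeds NCPUS
overloaded() {
  awk -v l="$1" -v n="$2" 'BEGIN { print (l > n) ? "yes" : "no" }'
}

load1=$(awk '{print $1}' /proc/loadavg)    # 1-minute load average
ncpu=$(grep -c ^processor /proc/cpuinfo)   # logical CPU count
echo "1-min load $load1 on $ncpu CPUs -> overloaded: $(overloaded "$load1" "$ncpu")"
```

If BlueHost really rescaled the number they display, that would be unusual; the raw figure from uptime or /proc/loadavg on the box itself is the one to trust.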
My server is having a very strange problem: the load suddenly increases every 2 or 3 days, sometimes after just 1 day. On the days when the server is fine, the load is very low, around 0.1 to 0.4.
But on a high-load day the load has reached up to 500.
When I try to find out what's wrong, all I can see is that there are too many HTTP connections. When I kill httpd with killall -KILL httpd, the load suddenly decreases and then stabilizes.
I have a couple of sites that are generating errors because the server load is too high and when I check service status I am seeing the following: Server Load 21.49 (8 cpus)
How can I tell if the problem is one of my sites in my VPS or a different site on a different VPS on the same server?
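From inside a VPS you can normally only see your own processes, so a quick first check is whether your own workload can account for the load. If everything you can see is near idle while the node's load is red-lined, a neighboring VPS on the same hardware node is the more likely culprit. A rough sketch (the summed CPU% is only a proxy, not an exact measure):

```shell
#!/bin/sh
# Inside the VPS we can only see our own processes; list the heaviest ones.
ps aux --sort=-%cpu | head -n 10

# Rough proxy for "our share": sum the CPU% column of everything visible.
sum_cpu() {
  awk 'NR > 1 { total += $3 } END { printf "%.1f\n", total }'
}
echo "visible CPU%: $(ps aux | sum_cpu)"
```

If your visible total is a few percent while the reported load is 21, it's worth opening a ticket with the host about a noisy neighbor.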
I know there are thousands of topics about this, and yes, I did use the search function and tried making some changes myself. I didn't want to hijack someone else's topic, so I started my own.
My problem is that I run a torrent site which puts a lot of load on my server. I just got upgraded to a P4 2.8 GHz with 2 GB RAM, running Fedora with WHM/cPanel.
I will do anything to bring the load down; because of the load I have turned off my 4 other big sites...
I've got a server that suddenly over the last three days has exploded as far as server load. Watching top I have some httpd processes that are using up all of the cpu and lasting for quite some time. How can I find out more about these hanging processes? I need to track this down as quickly as possible and find out what the cause is.
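As a hedged first pass at pinning down the hangers, you can snapshot the worst httpd offenders and attach strace to the single busiest one. The mod_status suggestion at the end is a generic assumption about your build, not something your post confirms:

```shell
#!/bin/sh
# Snapshot the worst httpd offenders: PID, CPU%, elapsed time, command line.
ps -C httpd -o pid,pcpu,etime,args --sort=-pcpu | head -n 10

# Attach to the single busiest one and watch its system calls for a moment
# (strace must be installed; interrupt with ^C).
worst=$(ps -C httpd -o pid= --sort=-pcpu | head -n 1)
if [ -n "$worst" ]; then
  strace -p "$worst" -tt 2>&1 | head -n 40
fi

# With mod_status and "ExtendedStatus On" you can also see which URL each
# child is serving:  curl -s http://localhost/server-status
```

Long-lived children spinning in the same syscall loop usually point at a stuck PHP script, a hung external connection, or a client holding a keepalive slot.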
I'd like to know whether upgrading to the new server I'll describe below will probably solve my problems. Whatever help you can provide would be greatly appreciated. Below are the details:
In the GMT evenings and nights my current server gets so loaded that every page load takes 10-30 seconds. Even pure HTML pages are slow to load. It seems that after a certain threshold everything suddenly becomes that much slower; there's not much middle ground. I have high MaxClients and ServerLimit values now, and the error log no longer says they are exceeded, but that didn't help enough.
I have a high traffic website that is using latest version of apache (2.2.x) with the prefork MPM and apache is optimized, PHP 5.2.5 and APC 3.0.15.
I get 160,000 - 210,000 pageloads per day. 32,000 - 45,000 visits per day.
Most of its pages are PHP but shouldn't be too CPU- or database-intensive. MySQL isn't used; I mostly use shared memory (PHP's shm functions) for data storage. Two semaphores are quite heavily used, but that can't explain how a few more users would make the server serve pages so much slower.
Swap usage is practically 0, and CPU user % is around 1-2%, with CPU system % about the same even during peak times. However, the load average that top reports is 6-9.
My current server specs: 1 GB RAM, Pentium D 3 GHz, CentOS 5 32-bit, fully updated.
I load all pictures and even the stylesheet from a secondary server by using href="$secondaryserverIP..." in the html code, so the main server practically just serves the pages.
My new server will have apache with the worker MPM and latest versions of every software. Also its specs are: 2 GB of RAM, Intel Dual Core Xeon 2.40GHz, CentOS 5.1 32bit fully updated.
I have a sophisticated netstat-based DDoS script that is an improved version of DDoS Deflate. While some of these slowdowns seem to have been caused by attacks it was then able to defend me from, most of them were not. I am even protected from users who constantly hold 7+ connections to my site; if someone has a far too high number of connections, the script doesn't even check whether it's constant and just bans that user outright. It's probably banning a bunch of innocent proxy users too, but that is a small price to pay.
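For readers who haven't seen DDoS Deflate: its core is nothing more than counting established connections per source IP and banning anything over a threshold. A minimal sketch of that idea follows; the 7-connection limit mirrors the post, and the APF path is an assumption left commented out:

```shell
#!/bin/sh
# Core of a DDoS-Deflate-style check: established connections per remote IP.
conns_per_ip() {
  # reads "netstat -ntu"-style output on stdin, prints "count ip" descending
  awk 'NR > 2 { split($5, a, ":"); if (a[1] != "") print a[1] }' |
    sort | uniq -c | sort -rn
}

LIMIT=7   # the post bans anyone holding more connections than this
if command -v netstat >/dev/null; then
  netstat -ntu | conns_per_ip | while read -r count ip; do
    if [ "$count" -gt "$LIMIT" ]; then
      echo "would ban $ip ($count connections)"
      # /etc/apf/apf -d "$ip"   # uncomment to actually deny via APF
    fi
  done
fi
```

Run it from cron every minute or two; the tradeoff the post mentions (banning busy proxies) is inherent to counting by IP alone.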
MySQL the last few days seems to be constantly the most demanding process in top, which it never was before. As far as I can tell, nothing has substantially changed with regards to traffic to MySQL driven sites on the server. Is there anything that might be wrong with the databases, etc., that might throw MySQL into a tizzy?
I have two quad core processors and load is like 15.
Could it be caused by the switch not letting traffic through properly?
If dmesg | grep eth shows 100 full duplex, is that normal, or should it be 1000 full duplex?
How can I make it 1000 full duplex on CentOS 5?
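A PRO/1000 linking at 100 Mb usually means the switch port, the cable (gigabit needs all four pairs wired), or negotiation is the limit, so check those first. On CentOS 5 you can inspect and set the link with ethtool; note that 1000BASE-T requires autonegotiation, so ask for gigabit while leaving negotiation on rather than forcing it off. These commands are a sketch to run as root, not a guaranteed fix:

```shell
# What did the NIC and switch actually negotiate?
ethtool eth0

# Ask for gigabit; 1000BASE-T needs autoneg, so keep it on:
ethtool -s eth0 speed 1000 duplex full autoneg on

# Persist it on CentOS 5: add to /etc/sysconfig/network-scripts/ifcfg-eth0
#   ETHTOOL_OPTS="speed 1000 duplex full autoneg on"
# then: service network restart
```

If ethtool still reports 100 Mb afterwards, the switch port or cabling is capping it, and no software setting on the server will change that.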
Quote:
0000:0a:02.0: eth0: (PCI Express:2.5GB/s:Width x4)
0000:0a:02.0: eth0: Intel(R) PRO/1000 Network Connection
0000:0a:02.0: eth0: MAC: 3, PHY: 5, PBA No: ffffff-0ff
0000:0a:02.0: eth1: (PCI Express:2.5GB/s:Width x4)
0000:0a:02.0: eth1: Intel(R) PRO/1000 Network Connection
0000:0a:02.0: eth1: MAC: 3, PHY: 5, PBA No: ffffff-0ff
ADDRCONF(NETDEV_UP): eth0: link is not ready
0000:0a:02.0: eth0: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth0: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
ADDRCONF(NETDEV_UP): eth1: link is not ready
0000:0a:02.0: eth1: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth1: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
eth0: no IPv6 routers present
eth1: no IPv6 routers present
ADDRCONF(NETDEV_UP): eth0: link is not ready
0000:0a:02.0: eth0: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth0: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
ADDRCONF(NETDEV_UP): eth1: link is not ready
0000:0a:02.0: eth1: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth1: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
eth0: no IPv6 routers present
eth1: no IPv6 routers present
I am currently hosting my website on one server with the specs:
2.8 GHz dual quad-core processor + 8 GB of RAM + two 500 GB hard drives with a 50 Mbps unmetered bandwidth package.
My current problem lies in high server loads and very slow server performance throughout the day.
I am considering migrating over to The Planet onto server with the specs:
3.0 GHz dual quad-core + 18 GB of RAM + two 50 GB hard drives with 2 TB of monthly bandwidth transfer.
In an attempt to get both good bandwidth pricing and good server performance, I plan on downgrading my current server with my current host to a low-end server and keeping it only to host my VIDEO and MUSIC files on the 50 Mbps unmetered package. The Planet would then host my database and all other web-related files on the new server.
Is this a good idea as an attempt to save money on bandwidth costs while eliminating my server lag issues?
I was offered a setup of a separate web and database server at my current host but from what I have read, no one touches the performance and reliability The Planet has to offer.
So the site got featured on [url] and now the server is drowning...
The Coppermine gallery usually hovers around 30-50 users daily; now it's 1800, and I'm at a loss as to how I should configure MySQL to take on such a load. Right now it takes about 10 seconds or more to load a page, and sometimes it times out. Because it is Coppermine, all pages are dynamic and can't be cached.
Here's the Apache config right now, after I played around with the numbers:
Server spec: Opteron 170 (2 GHz), 2 GB RAM, 250 GB 7200 RPM drive.
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive Off

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers        8
MinSpareServers     5
MaxSpareServers    20
ServerLimit       200
MaxClients        200
MaxRequestsPerChild 1500
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>
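The pasted block is Apache's httpd.conf; for the MySQL side, a hedged my.cnf starting point for a 2 GB box serving a MyISAM-era app like Coppermine might look like this. Every value here is an assumption to be tuned against SHOW STATUS output, not a recommendation specific to this site:

```
# Hypothetical starting values for a 2 GB RAM box running Coppermine
[mysqld]
key_buffer_size    = 256M   # MyISAM index cache -- usually the big win
table_cache        = 512
thread_cache_size  = 32
query_cache_type   = 1
query_cache_size   = 64M    # identical SELECTs repeat a lot on gallery pages
tmp_table_size     = 64M
max_heap_table_size = 64M
max_connections    = 200    # keep roughly in line with Apache MaxClients
```

Note that "pages can't be cached" refers to whole-page caching; the MySQL query cache still helps, because many visitors trigger identical SELECTs.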
I posted a topic a long time ago about my server load frequently being high.
I'm talking about something like this:
Server Load: 158.86
Memory Used: 28.2%
Swap Used: 99.57%
[url]
The only way I've found to deal with it is to catch the high load early and kill all the httpd processes. What I did was:
#killall -9 httpd
repeated 30-40 times, until no httpd processes were left and the server load was back to normal.
In a previous thread I tried updating MySQL and PHP, and that worked.
Right now I am experiencing high server load again...
I'm very sure it's caused by httpd, but I still can't find the real cause of the problem, or which user account is the culprit behind the high load.
Can someone assist me with where and how to begin?
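One place to begin is attributing visible CPU to accounts. This sketch assumes nothing about your setup beyond standard ps output; the domlogs path in the comment is the usual cPanel location and may differ on your box:

```shell
#!/bin/sh
# Aggregate visible CPU% per user -- a quick way to spot the culprit account.
cpu_by_user() {
  # reads "ps aux"-style output on stdin
  awk 'NR > 1 { cpu[$1] += $3 } END { for (u in cpu) printf "%6.1f %s\n", cpu[u], u }'
}
ps aux | cpu_by_user | sort -rn | head

# With suPHP/suexec each vhost's PHP runs as the account user, so the top
# user here usually is the culprit. With mod_php everything shows as
# nobody/apache; then compare per-site hit volume in the domlogs instead:
# wc -l /usr/local/apache/domlogs/* 2>/dev/null | sort -rn | head
```

Run it while the load is climbing; a snapshot taken after you kill httpd tells you nothing.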
I'm having problems with incoming spam on my dedicated server; the load average is around 80 to 100. I know it's incoming spam because when I check the Exim processes I see a lot of IPs from Russia, Germany, and Taiwan. When I block those IPs with the /etc/apf/apf -d ip command, the server load drops to 7 or less. So the question is: how can I detect and block the spammers' IPs automatically? I have SpamAssassin running and it blocks the spam emails fine; the real problem is the high load generated by the spamd application and all the incoming connections from the spammers' IPs.
Server specifications:
CentOS 4
Control panel: DirectAdmin
Dual-Core AMD Opteron(tm) Processor 2214 HE
1 GB RAM
Exim 4.68
Apache 1.3.39
MySQL 5.0.37
vm-pop3d 1.1.7f-DA-2
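One hedged way to automate the blocking is a cron job that counts connections to port 25 per remote IP and denies the worst offenders via APF, i.e. exactly the manual step described, scripted. The threshold and the APF path below are assumptions for a box like this one:

```shell
#!/bin/sh
# Cron-able sketch: deny IPs that hold many simultaneous SMTP connections.
# Adjust LIMIT for your traffic; legitimate relays rarely hold this many.
LIMIT=10

smtp_talkers() {
  # reads "netstat -ntu"-style output on stdin; "count ip" for port-25 peers
  awk 'NR > 2 && $4 ~ /:25$/ { split($5, a, ":"); print a[1] }' |
    sort | uniq -c | sort -rn
}

if command -v netstat >/dev/null; then
  netstat -ntu | smtp_talkers | while read -r count ip; do
    if [ "$count" -gt "$LIMIT" ]; then
      echo "denying $ip ($count SMTP connections)"
      # /etc/apf/apf -d "$ip"   # uncomment to actually block via APF
    fi
  done
fi
```

Longer term, rejecting at SMTP time with RBLs in Exim is cheaper than letting spamd score everything, since the load here comes from accepting the mail at all.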
I deal with a server that gets positively slammed once a week for a few months per year. I'd tell you how many hits we got tonight, but I'm still waiting for AWStats to chew through the 2 GB access_log file...
Tonight I made some changes that SEEM to work, but I'm not sure what the long-term effects could be. If we have any Apache experts on the forums, I'd really like to bend your ear for a few minutes to see what you know.
Obviously, with PHP, we're limited to prefork MPM.
First of all, I dropped Timeout from 300 to 120. That should be MORE than enough time to know that we've timed out. Then I dropped KeepAliveTimeout to 5 from 15.
Here's the radical one. Watching the process list and the load, it seemed that load spiked when the processes hit their end of useful life and respawned. Duh. This was happening every four seconds at the load we were under. MaxRequestsPerChild was set to 10,000. I upped this to 80,000 over a period of hours that we were under the load. I didn't see any significant memory leakage, but it's the change I'm worried about the most. I've seen Apache do some bad things when people allow this to go unlimited, and had always used the relatively low default as a guide.
Besides not loading a bunch of dynamic modules (also done; I usually do this, so I'm not worried about it), what else can I do tuning-wise to keep load down? Please note that caching and load balancing aren't acceptable solutions; I have one server to work with (for now) and the boss says no to caching because of how frequently our information updates. We also have extensive .htaccess files, so there's no lighttpd in my future.
Does anyone know a good script I can run with cron or something? MySQL seems to be the #1 problem on a lot of my web sites; a restart usually fixes it right away, but I can't keep restarting my servers manually every day.
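A watchdog like that is only a few lines of shell. This is a sketch, not a fix: the threshold and the init-script path are assumptions for a typical CentOS-era box, and the restart line is left commented so nothing fires until you've checked the paths:

```shell
#!/bin/sh
# Watchdog sketch: restart MySQL when the 1-minute load crosses a threshold.
# Run from cron, e.g.:
#   */5 * * * * /root/mysql_watchdog.sh >> /var/log/mysql_watchdog.log 2>&1
THRESHOLD=10

above() {   # above LOAD LIMIT -> exit 0 when LOAD > LIMIT
  awk -v l="$1" -v t="$2" 'BEGIN { exit !(l > t) }'
}

load1=$(awk '{print $1}' /proc/loadavg)
if above "$load1" "$THRESHOLD"; then
  echo "$(date): load $load1 exceeds $THRESHOLD, restarting MySQL"
  # /etc/init.d/mysql restart   # some distros name the script mysqld
fi
```

That said, a daily restart is a band-aid; enabling the slow query log in my.cnf will usually show the query that's actually wedging MySQL.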
It's been a while since I've posted in this area of the forum, but I was wondering whether by now someone or some company has developed a script/software you can install on your server that will tell you the exact source of a high CPU load, such as someone sending email, a certain user's account, etc.
Most times you have to be monitoring your server at the moment of the high load to be sure exactly what is causing it, so I'm talking about something that will email you on the spot, in case you are away, so you'll know what caused it.
Most I've seen only tell you that you have a high load, but don't tell you what exactly caused it.
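The "email me a snapshot at the moment of the spike" part is easy to sketch yourself. The address and threshold below are placeholder assumptions, and the snapshot contents (top processes, Exim queue size) are just examples of what's worth capturing:

```shell
#!/bin/sh
# Alert sketch: when load is high, mail yourself a snapshot of the likely
# culprits. Run from cron every few minutes.
THRESHOLD=8
ADMIN="you@example.com"   # hypothetical address -- change it

load1=$(awk '{print $1}' /proc/loadavg)
high=$(awk -v l="$load1" -v t="$THRESHOLD" 'BEGIN { print (l > t) ? 1 : 0 }')

if [ "$high" -eq 1 ] && command -v mail >/dev/null; then
  {
    echo "Load is $load1 on $(hostname) at $(date)"
    echo "--- top processes ---"
    ps aux --sort=-%cpu | head -n 15
    echo "--- exim queue size (if installed) ---"
    command -v exim >/dev/null && exim -bpc
  } | mail -s "High load on $(hostname): $load1" "$ADMIN"
fi
```

It won't diagnose the root cause for you, but a process snapshot taken at the moment of the spike answers the "what caused it while I was away" question most monitoring tools leave open.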