I have a dual Xeon server that hosts around 15 small websites, but these days the load generated by MySQL is very high; as you can see below, it consumes 32.6% CPU.
I have a site that is eating up my server resources, and I need to know the best solution for this. I'm thinking of getting another server just for MySQL, but I don't know what specs it should have to handle the current traffic/database load and keep the site running smoothly without slowing to a snail's pace.
An alternative is to get another server just for serving the videos and leave the database and HTML on the current server. This is where I'm stuck, and I don't know which route to take.
I've attached screenshots of top and bandwidth usage per day. Hopefully with this information you can tell me whether I need another server, or whether there is anything I can do to the current one to help things move faster.
I am just a beginner, and for about a month the MySQL load on my server has been very high. I have checked the MySQL process list via cPanel, and no single account stands out as the cause. I have even checked for possibly corrupted databases, but the server load is still very high and I cannot control it; even when I restart MySQL, the load climbs right back up after a minute.
The only other thing is that the spamd process sometimes shows high CPU usage. What could that be from? Is spamd running for one particular account?
I need to optimize MySQL, and I need help. Please do not tell me to hire an expert; just help me. Thanks.
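If the process list shows nothing obvious, the slow query log usually will. A minimal my.cnf sketch for a MySQL 5.0-era server (the option names and log path are common defaults, so treat them as assumptions and adjust for your install; MySQL must be restarted afterwards):

Code:
[mysqld]
# Log any query that takes longer than 2 seconds
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2
# Also log queries that scan tables without using an index
log-queries-not-using-indexes

After a few hours under load, running mysqldumpslow /var/log/mysql-slow.log will summarize the worst offenders.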
And another question: how can I check which account is sending spam, and stop it?
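For the spam question: on a typical cPanel server mail goes through Exim, so the queue and the mail log are where to look for the source (paths assume cPanel defaults):

Code:
# Summarize the current mail queue by domain
exim -bp | exiqsumm

# Count which directories (i.e. which accounts/scripts) inject the most mail
grep cwd /var/log/exim_mainlog | grep -v /var/spool \
  | awk -F"cwd=" '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -rn | head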
It was very fast until MySQL was upgraded to 5.0.45; before that it was 4.x. I can't even open my forum at busy times because it is so slow; I get "page not found" after a while, although when it is quiet it is not too bad. It was a lot faster with MySQL 4, but I don't really want to downgrade. Please give me some ideas to fix this issue.
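Slowness right after a 4.x to 5.0 upgrade is often down to tables that were never rebuilt for the new version. A hedged first step before touching any config (both tools ship with MySQL 5.0; the check briefly locks tables, so run it off-peak):

Code:
# Check and fix tables for 5.0 compatibility
mysql_upgrade -u root -p

# Rebuild indexes and refresh statistics on all tables
mysqlcheck -u root -p --all-databases --optimize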
For the last few days MySQL has been constantly the most demanding process in top, which it never was before. As far as I can tell, nothing has substantially changed with regard to traffic to the MySQL-driven sites on the server. Is there anything that might be wrong with the databases, etc., that might throw MySQL into a tizzy?
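To catch what MySQL is actually doing when it climbs to the top of top, sample the process list repeatedly and watch for the same long-running queries recurring:

Code:
# Re-print the process list every 5 seconds
mysqladmin -u root -p -i 5 processlist

# Or take a one-off look with the full query text
mysql -u root -p -e "SHOW FULL PROCESSLIST\G"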
My server is very slow and heavily loaded. How can I fix this problem? My server's specifications and the installed software are listed below. Sometimes there are also "mysql connect failed" and "Lost connection to MySQL server during query" errors. Thanks for your kindness...
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -

MySQL Version 5.0.41-log i686

Uptime = 3 days 4 hrs 34 min 17 sec
Avg. qps = 92
Total Questions = 25406284
Threads Connected = 1

Server has been running for over 48hrs.
It should be safe to follow these recommendations

To find out more information on how each of these
runtime variables affects performance visit: .............

SLOW QUERIES
Current long_query_time = 5 sec.
You have 115 out of 25406302 that take longer than 5 sec. to complete
The slow query log is enabled.
Your long_query_time seems to be fine

WORKER THREADS
Current thread_cache_size = 8
Current threads_cached = 7
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine

MAX CONNECTIONS
Current max_connections = 500
Current threads_connected = 2
Historic max_used_connections = 110
The number of used connections is 22% of the configured maximum.
Your max_connections variable seems to be fine.

MEMORY USAGE
Max Memory Ever Allocated : 835 M
Configured Max Per-thread Buffers : 3 G
Configured Max Global Buffers : 46 M
Configured Max Memory Limit : 3 G
Total System Memory : 3.96 G
Max memory limit exceeds 85% of total system memory
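That warning comes from a simple worst-case formula. The per-thread figure below (~6 M) is an approximation inferred from the buffer sizes reported further down (sort 2 M + read 1 M + read_rnd 1 M + join 1 M, plus thread stack):

Code:
Max Memory Limit = Global Buffers + (Per-thread Buffers x max_connections)
                 = 46 M + (~6 M x 500)
                 ~ 3 G    (against 3.96 G of system RAM)

In other words, if all 500 allowed connections were busy at once, MySQL alone could claim nearly all the RAM. Lowering max_connections toward the historic peak of 110, or trimming the per-thread buffers, shrinks that bound.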
KEY BUFFER
Current MyISAM index space = 35 M
Current key_buffer_size = 32 M
Key cache miss rate is 1 : 73658
Key buffer fill ratio = 34.00 %
Your key_buffer_size seems to be too high.
Perhaps you can use these resources elsewhere

QUERY CACHE
Query cache is enabled
Current query_cache_size = 4 M
Current query_cache_used = 1 M
Current query_cache_limit = 1 M
Current Query cache fill ratio = 29.83 %
MySQL won't cache query results that are larger than query_cache_limit in size

SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 1 M
Sort buffer seems to be fine

JOINS
Current join_buffer_size = 1.00 M
You have had 14127 queries where a join could not use an index properly
You should enable "log-queries-not-using-indexes"
Then look for non indexed joins in the slow query log.
If you are unable to optimize your queries you may want to
increase your join_buffer_size to accommodate larger joins in one pass.
Note! This script will still suggest raising the join_buffer_size when
ANY joins not using indexes are found.

OPEN FILES LIMIT
Current open_files_limit = 2500 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine

TABLE CACHE
Current table_cache value = 256 tables
You have a total of 636 tables
You have 256 open tables.
Current table_cache hit rate is 1%, while 100% of your table cache is in use
You should probably increase your table_cache

TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 32 M
Of 2271787 temp tables, 3% were created on disk
Effective in-memory tmp_table_size is limited to max_heap_table_size.
Created disk tmp tables ratio seems fine

TABLE SCANS
Current read_buffer_size = 1 M
Current table scan ratio = 28 : 1
read_buffer_size seems to be fine

TABLE LOCKING
Current Lock Wait ratio = 1 : 112
You may benefit from selective use of InnoDB.
If you have long running SELECT's against MyISAM tables and perform
frequent updates consider setting 'low_priority_updates=1'
If you have a high concurrency of inserts on Dynamic row-length tables
consider setting 'concurrent_insert=2'.
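Translated into my.cnf, the report's concrete recommendations might look like the sketch below; the exact numbers are judgment calls, not values the report itself prescribes:

Code:
[mysqld]
# TABLE CACHE: 1% hit rate with the cache 100% full -> raise well above the 636 tables
table_cache = 1024

# OPEN FILES: keep at 2x-3x table_cache for heavy MyISAM usage
open_files_limit = 3072

# JOINS: log unindexed joins so they can be fixed at the query level
log-queries-not-using-indexes

# MEMORY: pull the worst-case memory bound back under total RAM
max_connections = 200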
Quad core server, 4 GB RAM. MySQL runs at 200-300% CPU at all times. The server does only 5K uniques per day and runs Zen Cart.
I am at a loss. I have experience tracking down the reasons for this sort of thing, but this one has stumped me, so I was hoping to get new eyes on it and see if anyone has any ideas.
# The following directives should be commented out
# but included as they are things that get added
# very frequently on tickets. These are more on a
# need-this-feature basis.

# The below 2 cannot be set on the fly. If the customer already has
# InnoDB tables and wants to change the size of the InnoDB tablespace
# and InnoDB logs, then:
# 1. Run a full backup with mysqldump
# 2. Stop MySQL
# 3. Move current ibdata and ib_logfiles out of /var/lib/mysql
# 4. Uncomment the below innodb_data_file_path and innodb_log_file_size
# 5. Start MySQL (it will recreate new InnoDB files)
# 6. Restore data from backup
#innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_file_size = 100M
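Spelled out as commands, and assuming the usual /var/lib/mysql datadir and an init script at /etc/init.d/mysql (adjust paths and credentials for the actual system), the procedure is roughly:

Code:
# 1. Full logical backup of all databases
mysqldump -u root -p --all-databases > /root/all-databases.sql

# 2. Stop MySQL
/etc/init.d/mysql stop

# 3. Move the current tablespace and logs out of the datadir
mkdir -p /root/innodb-old
mv /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile* /root/innodb-old/

# 4. Uncomment innodb_data_file_path and innodb_log_file_size in my.cnf

# 5. Start MySQL; it recreates fresh InnoDB files at the new sizes
/etc/init.d/mysql start

# 6. Reload the data
mysql -u root -p < /root/all-databases.sql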
So the site got featured on [url] and now the server is drowning...
The Coppermine Gallery usually hovers around 30-50 users daily; now it's at 1,800, and I'm at a loss as to how I should configure MySQL to take on such a load. Right now it takes about 10 seconds or more to load a page, and sometimes it times out. Because it is Coppermine, all pages are dynamic and can't be cached.
Here's the my.cnf right now, after I played around with the numbers.
Server spec: Opteron 170 (2 GHz), 2 GB RAM, 250 GB 7200 RPM drive.
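As a purely illustrative starting point for a 2 GB machine under this kind of read-heavy gallery load (every value here is an assumption to be validated against real measurements, not the poster's actual file), something like:

Code:
[mysqld]
# Cache MyISAM indexes; size toward the actual index space in use
key_buffer_size = 256M

# Cache repeated identical SELECTs -- even "dynamic" gallery pages
# often repeat the same queries between updates
query_cache_type = 1
query_cache_size = 64M
query_cache_limit = 2M

# Reuse threads rather than spawning one per connection
thread_cache_size = 16

# Keep per-connection buffers modest so that
# global + per-thread x max_connections stays well under 2 GB
sort_buffer_size = 2M
read_buffer_size = 1M
read_rnd_buffer_size = 1M
join_buffer_size = 1M
max_connections = 150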
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive Off

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15
# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers         8
MinSpareServers      5
MaxSpareServers     20
ServerLimit        200
MaxClients         200
MaxRequestsPerChild 1500
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers         2
MaxClients         150
MinSpareThreads     25
MaxSpareThreads     75
ThreadsPerChild     25
MaxRequestsPerChild  0
</IfModule>
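Only one of the two MPM blocks applies at a time; to see which MPM the running Apache was built with, ask the binary directly (named httpd on Red Hat-style systems, apache2 on Debian-style ones):

Code:
httpd -V | grep -i mpm
# or: apache2 -V | grep -i mpm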
Does anyone know a good script that I can run from cron or something? MySQL seems to be the #1 problem with a lot of my web sites; a restart usually fixes it right away, but I can't keep restarting my servers manually every day.
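A minimal cron watchdog sketch, assuming credentials are readable from /root/.my.cnf and the init script lives at /etc/init.d/mysql (both assumptions; adjust for your setup). Note this only papers over the symptom; the underlying cause still needs finding:

Code:
#!/bin/sh
# Restart MySQL only when it stops answering pings
if ! mysqladmin ping >/dev/null 2>&1; then
    /etc/init.d/mysql restart
    echo "`date`: MySQL was down, restarted" >> /var/log/mysql-watchdog.log
fi

Save it as e.g. /root/mysql-watchdog.sh, make it executable, and run it every five minutes with a crontab line like */5 * * * * /root/mysql-watchdog.sh.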
We have a MySQL 5 setup, and this morning at around 10:07 tables started disappearing as they were being accessed by different clients.
Databases that had 40 tables now had 30, etc. Only the tables that clients attempted to access were gone. This is the first time something like this has happened.
The following output was given:
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=208
max_connections=500
threads_connected=156
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 1096188 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0xaf82930
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xb143932c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x816b1a0
0xaf1898
0x20
0x81ac49d
0x8182914
0x8189010
0x8189df1
0x818a738
0x818ae5c
0xaeb371
0x9c4ffe
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read [url] and follow instructions on how to resolve the stack trace.
Resolved stack trace is much more helpful in diagnosing the problem,
so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xaf36620 = SELECT * FROM `sessions` WHERE `PHPSESSID` = '5e6775cd3c6f187d8c575127ba73be19'
thd->thread_id=113407
The manual page at [url] contains information that should help you find out
what is causing the crash.
mysqld: my_new.cc:51: int __cxa_pure_virtual(): Assertion `"Pure virtual method called." == "Aborted"' failed.

Number of processes running now: 0
070427 10:07:49  mysqld restarted
070427 10:07:50  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
070427 10:07:53  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 227822203.
InnoDB: Doing recovery: scanned up to log sequence number 0 227822203
070427 10:07:53  InnoDB: Started; log sequence number 0 227822203
070427 10:07:54 [Note] /usr/sbin/mysqld: ready for connections.
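The log asks for the stack trace to be resolved, and MySQL ships a resolve_stack_dump utility for exactly that. This only works if the mysqld binary still has its symbols (an assumption; many packaged builds are stripped):

Code:
# Build a symbol table from the running binary
nm -n /usr/sbin/mysqld > /tmp/mysqld.sym

# Paste the raw 0x... addresses from the log into /tmp/stack.txt, then:
resolve_stack_dump -s /tmp/mysqld.sym -n /tmp/stack.txt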
I have been running into a high-load problem lately. I have one of those cheap 1&1 servers, which ran fine until two weeks ago. Once I rebooted it accidentally, it did not come back up due to some unrepairable kernel errors, and I had to re-image it.
I chose to re-image the server with CentOS 5 for better support. The new image worked fine for some days, or at least so I thought, and now I am having high loads. The server crashes if it is not monitored constantly, as the load is unpredictable.
Just restarting Apache brings the server back to normal, but I am not sure whether Apache or some other script is to blame. I have been monitoring through Apache's server-status, but I cannot spot anything unusual at the high-load moments.
12:00:29 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
12:10:01 AM  all   9.14   0.00     5.52    44.66    0.00  40.68
12:20:14 AM  all   6.83   0.00     3.98    27.88    0.00  61.32
12:30:10 AM  all   6.44   0.00     4.20    81.25    0.00   8.11
12:40:09 AM  all   5.25   0.00     4.09    81.93    0.00   8.73
12:50:15 AM  all   5.11   0.00     3.79    90.74    0.00   0.36
01:00:07 AM  all   7.22   0.00     4.52    57.11    0.00  31.15
01:10:13 AM  all   6.89   0.00     4.01    55.38    0.00  33.71
01:20:14 AM  all   4.37   0.00     3.27    41.88    0.00  50.48
01:30:25 AM  all   4.26   0.00     3.29    63.42    0.00  29.03
01:40:06 AM  all  27.18   0.00     4.75    58.27    0.00   9.80
01:50:03 AM  all  29.64   0.00     6.61    51.50    0.00  12.25
02:00:07 AM  all  27.00   0.00     8.48    55.49    0.00   9.03
02:10:10 AM  all  19.29   0.00     4.97    73.80    0.00   1.94
02:20:04 AM  all  37.85   0.00     6.78    40.70    0.00  14.67
02:30:05 AM  all  15.65   0.00     4.80    68.47    0.00  11.08
02:40:08 AM  all   9.06   0.00     5.60    37.49    0.00  47.86
02:50:07 AM  all   5.36   0.00     3.62    42.29    0.00  48.73
03:00:02 AM  all   6.05   0.00     4.08    47.27    0.00  42.60
03:10:02 AM  all   4.22   0.00     3.68    38.17    0.00  53.93
03:20:02 AM  all   4.06   0.00     3.75    41.37    0.00  50.82
03:30:22 AM  all   4.42   0.00     3.93    45.25    0.00  46.41
03:40:11 AM  all   4.34   0.00     3.95    39.58    0.00  52.13
03:50:02 AM  all   4.67   0.00     4.01    32.53    0.00  58.80
04:00:08 AM  all   3.72   0.00     3.87    28.40    0.00  64.02
04:10:02 AM  all  13.49   0.00     6.58    20.82    0.00  59.10
04:20:01 AM  all   6.70   0.00     4.63     6.06    0.00  82.61
04:30:02 AM  all   1.44   0.00     1.21     4.75    0.00  92.59
04:40:01 AM  all  12.42   0.00     8.12     7.65    0.00  71.81
04:50:02 AM  all   1.43   0.00     1.07     4.02    0.00  93.47
05:00:02 AM  all   1.60   0.00     1.40     8.62    0.00  88.38
05:10:10 AM  all   3.80   0.00     3.02    17.86    0.00  75.32
05:20:06 AM  all   5.10   0.00     4.22    23.34    0.00  67.34
05:30:02 AM  all   1.54   0.00     1.40    11.22    0.00  85.85
05:40:05 AM  all   1.75   0.00     1.89    13.12    0.00  83.23
05:50:12 AM  all   2.15   0.00     2.22    18.92    0.00  76.72
06:00:02 AM  all   1.92   0.00     2.01    12.87    0.00  83.20
06:10:02 AM  all   2.27   0.00     2.16    11.53    0.00  84.04
06:20:03 AM  all   3.56   0.00     3.02    25.26    0.00  68.16
06:30:10 AM  all   2.66   0.00     2.05    18.13    0.00  77.16
06:40:02 AM  all   2.58   0.00     2.25    22.87    0.00  72.30
06:50:02 AM  all   2.68   0.00     1.92    15.77    0.00  79.63
07:00:03 AM  all   3.06   0.00     2.48    26.01    0.00  68.46
07:10:03 AM  all   3.65   0.00     3.20    36.54    0.00  56.61

07:10:03 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
07:20:03 AM  all   4.40   0.00     3.28    43.86    0.00  48.46
07:30:02 AM  all   4.10   0.00     3.17    31.30    0.00  61.43
07:40:06 AM  all   7.67   0.00     3.95    50.79    0.00  37.59
07:50:02 AM  all   4.72   0.00     3.11    44.30    0.00  47.86
08:00:03 AM  all   5.57   0.00     3.72    47.15    0.00  43.56
08:10:07 AM  all  10.66   0.00     3.59    71.62    0.00  14.13
08:20:17 AM  all   5.67   0.00     3.42    58.81    0.00  32.10
08:30:10 AM  all  11.12   0.00     3.49    76.71    0.00   8.67
08:40:03 AM  all   7.00   0.00     3.36    47.94    0.00  41.71
Average:     all   7.53   0.00     3.76    38.90    0.00  49.81

Some configurations: the re-image partitioning looks like this:

processor  : 1
vendor_id  : GenuineIntel
cpu family : 15
model      : 3
model name : Intel(R) Pentium(R) 4 CPU 2.80GHz
stepping   : 4
cpu MHz    : 2793.324
cache size : 1024 KB
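With %iowait regularly at 40-90% in the sar output above, the box looks I/O-bound rather than CPU-bound. To confirm which device and which processes are waiting on disk (iostat comes from the same sysstat package as sar; vmstat and ps are standard):

Code:
# Extended per-device utilization every 5 seconds
iostat -x 5

# Column 'b' counts processes blocked on I/O
vmstat 5

# Processes currently stuck in uninterruptible (D) sleep
ps axo pid,stat,comm | awk '$2 ~ /D/'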
So my server has been "unresponsive" for about 18 hours, BurstNet didn't answer my tickets, and I don't know what to do. I've been with this setup for almost 5 months with no problems, and no changes have been made to hardware or software.
My VPS holds about 80 domains and low-use accounts.
Every night, from around 1:30am, the load suddenly skyrockets and usually sits around 5 to 10 for a few hours. Occasionally it spikes to 30+ for a few minutes.
I had some antispam software running, and a couple of other packages (mail queues, mail manage etc), so I disabled all of that and removed all the crontab entries etc.
It's not really made any difference.
I can see the load stats going back 8 hours as part of the ASSP spam package (I've left the ASSP server-load cron running just so I can continue monitoring it!).
Can the apparent load on my VPS be caused by other VPSes on the same node? So in reality my load is fine, but it is being affected by other people's VPSes?
I hope that makes sense. I'm 99% sure that my VPS is 'clean' (at least as far as cron entries go).
I'm asking the question because I took a second VPS on the same node, and that one also has high loads overnight even though there's nothing running on it (i.e., no add-on software, no cPanel accounts added).
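If these are OpenVZ/Virtuozzo-based VPSes (an assumption; the host can confirm), neighbours' activity on the node can indeed inflate the load figures seen inside a container. You can at least verify whether your own container is hitting its limits, and what is running during a spike:

Code:
# Per-container resource limits; a growing 'failcnt' means YOUR VPS hit a limit
cat /proc/user_beancounters

# One-shot snapshot of the busiest processes during a spike
top -b -n 1 | head -25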
I have a VPS with the LXadmin control panel and 1024 MB of RAM.
Memory usage was always around 333-450 MB, but now it is more than 800 MB and the VPS stopped. I restarted it many times, but after each restart the memory goes crazy again.
So, how can I find out what is causing the high load and eating my memory?
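A quick way to see which processes are eating the memory, using only standard tools that should exist on any Linux VPS:

Code:
# Top 10 processes by resident memory
ps aux --sort=-rss | head -11

# Overall memory picture, with buffers/cache separated out
free -m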
I was previously using MaxClients 256 in httpd.conf, and the load was normal even when the server was processing 256 requests.
Recently I changed it to 320, because of which my load increased a lot. Now that I have decreased it back to 256, the load is still high even when there are only about 150 requests.
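For prefork, MaxClients is normally sized from memory rather than from request counts. The figures below are purely illustrative; measure your own process size first:

Code:
# Average resident size of the current httpd processes, in MB
ps -o rss= -C httpd | awk '{sum+=$1; n++} END {if (n) print sum/n/1024}'

# Rule of thumb:
#   MaxClients ~= (RAM left over for Apache) / (average httpd process size)
# e.g. 2048 MB free and ~8 MB per process gives 2048 / 8 = 256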
How can you tell if your server was the victim of a DDoS attack? The server load goes so high that you cannot get in via SSH, and the server has to be power-cycled to get back online. When it comes back online, the load goes back up; we shut Apache down, let the load come back down, and then restart Apache, and everything is fine. Is there any way to determine what caused the load to skyrocket? The system has APF, BFD, and mod_evasive installed.
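One quick check while the load is climbing is to count concurrent connections per source IP; a handful of IPs holding hundreds of connections each is a classic flood signature, whereas an even spread points elsewhere:

Code:
# Established TCP/UDP connections per remote IP, busiest first
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head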
I have one large website. When I run it on a single server (by one server I mean the website working with MySQL and httpd on the same machine, no external server), the load stays around 1-5. But when I put MySQL on another server, the load goes high and I get a high number of httpd connections. Also, when I look at the number of connections between the servers, it is very high on one of them and low on the other, and I am always suffering from load.
Here are the details: 192.168.0.1 is the MySQL server and 192.168.0.2 is the httpd server.
Now I see this for the load in CPU/Memory/MySQL Usage:
--------------------------------------------
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql
--pid-file=/var/lib/mysql/server.nogomhost.net.pid --skip-external-locking
--------------------------------------------
This "mysql" process uses 80-90% of the CPU.
And every account shows 0.0 SQL usage, so why is MySQL loaded?
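One thing worth ruling out: once the database moves to a second machine, every query pays a network round trip, so a slow or lossy link makes queries (and the Apache children waiting on them) pile up. A quick sketch, run from the httpd server, with the credentials as placeholders:

Code:
# Raw round-trip time between the web and DB servers
ping -c 5 192.168.0.1

# Wall-clock time of a trivial query over the network
time mysql -h 192.168.0.1 -u someuser -p -e "SELECT 1"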
2nd problem <<<<
Sometimes my VPS goes down, and I don't know why.
3rd problem <<<
My VPS is not reachable for some clients; I add allow rules in APF after the users send me their IPs.
I am using rsync to transfer files (tar.gz) between servers. However, it makes the server load increase by 3-4: normally the load is around 1, but during a transfer it can go up to 5+.
Is there any way to reduce the load when doing rsync?
I have to move about 50 GB each night from one server to another.
This is the command I'm using:
Code:
/bin/nice -n +19 scp -c blowfish -l 18000 -P 22222 root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz ./

There is no private LAN, so I have to use the internet. I'm using blowfish, which reduces CPU load, and I'm also limiting the bandwidth transfer using -l 18000. However, I still see some high load averages.
Do you have any other suggestions to optimize CPU performance while running scp?
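Since the post mentions rsync even though the command shown is scp, one option is to let rsync do the nightly copy: it skips unchanged files, and --bwlimit caps throughput much like scp's -l (note -l is in Kbit/s while --bwlimit is in KB/s, so 18000 Kbit/s is roughly --bwlimit=2200). Adding ionice (needs the CFQ I/O scheduler) deprioritizes the disk I/O that is often the real source of the load. A sketch with the same placeholder host and port:

Code:
/bin/nice -n 19 ionice -c3 \
  rsync -a --bwlimit=2200 \
  -e "ssh -p 22222 -c blowfish" \
  "root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz" ./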