For the last few days MySQL has constantly been the most demanding process in top, which it never was before. As far as I can tell, nothing has substantially changed in the traffic to the MySQL-driven sites on the server. Is there anything that might be wrong with the databases, etc., that could throw MySQL into a tizzy?
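A quick first-pass diagnosis along those lines, assuming shell access to the server (these are stock MySQL/Linux tools, nothing specific to this poster's setup):

mysqladmin -u root -p processlist          # what MySQL is working on while it sits at the top of top
mysqladmin -u root -p extended-status | grep -iE 'slow_queries|questions|threads_running'
# log anything slower than 5 seconds so the culprit queries end up on disk
# (add under [mysqld] in my.cnf, then restart mysqld -- variable names are MySQL 5.0-era):
#   log_slow_queries = /var/log/mysql-slow.log
#   long_query_time  = 5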
I have a site that is eating up my server resources and I need to know the best solution. I'm thinking of getting another server just for MySQL, but I don't know what specs that server should have to handle the current traffic/database load and keep the site running smoothly without slowing to a snail's pace.
An alternative is to get another server just for serving the videos and leave the database and HTML on the current server. This is where I'm stuck; I don't know which route to take.
I've attached screenshots of top and bandwidth usage per day. Hopefully with this information you can tell me whether I need another server, or whether there is anything I can do to the current server to help things move faster.
So the site got featured on [url] and now the server is drowning...
The Coppermine Gallery usually hovers around 30-50 users daily and now it's at 1,800, and I'm at a loss as to how I should configure MySQL to take on such a load. Right now it takes about 10 seconds or more to load a page, and sometimes it times out. Because it is Coppermine, all pages are dynamic and can't be cached.
Here's the my.cnf right now, after I played around with the numbers.
Server spec: Opteron 170 (2 GHz), 2 GB RAM, 250 GB 7200 RPM drive.
#
# KeepAlive: Whether or not to allow persistent connections (more than
# one request per connection). Set to "Off" to deactivate.
#
KeepAlive Off

#
# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to 0 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
#
MaxKeepAliveRequests 100

#
# KeepAliveTimeout: Number of seconds to wait for the next request from the
# same client on the same connection.
#
KeepAliveTimeout 15

# prefork MPM
# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# ServerLimit: maximum value for MaxClients for the lifetime of the server
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule prefork.c>
StartServers          8
MinSpareServers       5
MaxSpareServers      20
ServerLimit         200
MaxClients          200
MaxRequestsPerChild 1500
</IfModule>

# worker MPM
# StartServers: initial number of server processes to start
# MaxClients: maximum number of simultaneous client connections
# MinSpareThreads: minimum number of worker threads which are kept spare
# MaxSpareThreads: maximum number of worker threads which are kept spare
# ThreadsPerChild: constant number of worker threads in each server process
# MaxRequestsPerChild: maximum number of requests a server process serves
<IfModule worker.c>
StartServers          2
MaxClients          150
MinSpareThreads      25
MaxSpareThreads      75
ThreadsPerChild      25
MaxRequestsPerChild   0
</IfModule>
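One way to sanity-check those prefork numbers against the 2 GB of RAM above is to measure the average httpd child size and work backwards; a rough sketch (the 768 MB reserved for MySQL and the OS is an assumption, not a measurement):

# average resident size of an httpd child, in MB
ps -ylC httpd | awk 'NR>1 {sum+=$8; n++} END {if (n) print sum/n/1024 " MB avg per child"}'
# MaxClients ~= (total RAM - RAM reserved for MySQL and the OS) / avg child size
# e.g. (2048 - 768) / 25 ~= 51, which is well below the ServerLimit/MaxClients of 200 set above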
Does anyone know a good script that I can run with cron or something? MySQL seems to be the #1 problem on a lot of my websites; a restart usually fixes it right away, but I can't keep restarting my servers manually every day.
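For what it's worth, a small watchdog that only restarts MySQL when it stops answering is usually safer than blind scheduled restarts. A sketch, assuming an init script at /etc/init.d/mysql (adjust the path, and the credentials in ~/.my.cnf, for your box):

#!/bin/sh
# /root/mysql_watchdog.sh -- restart MySQL only when it stops responding
if ! mysqladmin ping >/dev/null 2>&1; then
    echo "`date`: mysqld not responding, restarting" >> /var/log/mysql_watchdog.log
    /etc/init.d/mysql restart
fi

# crontab entry, every 5 minutes:
# */5 * * * * /root/mysql_watchdog.sh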
We have MySQL 5 set up, and this morning at around 10:07 tables started disappearing as they were being accessed by different clients.
Databases that had 40 tables suddenly had 30, and so on. Only the tables that clients attempted to access were gone. This is the first time something like this has happened.
The following output was given:
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=208
max_connections=500
threads_connected=156
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 1096188 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0xaf82930
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xb143932c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x816b1a0
0xaf1898
0x20
0x81ac49d
0x8182914
0x8189010
0x8189df1
0x818a738
0x818ae5c
0xaeb371
0x9c4ffe
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read [url] and follow instructions on how to resolve the stack trace.
Resolved stack trace is much more helpful in diagnosing the problem, so please do resolve it.
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0xaf36620 = SELECT * FROM `sessions` WHERE `PHPSESSID` = '5e6775cd3c6f187d8c575127ba73be19'
thd->thread_id=113407
The manual page at [url] contains information that should help you find out what is causing the crash.
mysqld: my_new.cc:51: int __cxa_pure_virtual(): Assertion `"Pure virtual method called." == "Aborted"' failed.

Number of processes running now: 0
070427 10:07:49  mysqld restarted
070427 10:07:50  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
070427 10:07:53  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 0 227822203.
InnoDB: Doing recovery: scanned up to log sequence number 0 227822203
070427 10:07:53  InnoDB: Started; log sequence number 0 227822203
070427 10:07:54 [Note] /usr/sbin/mysqld: ready for connections.
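After a crash like that, the usual first step is to check and repair the tables; a sketch with the stock client tools, assuming the default /var/lib/mysql datadir (mysqlcheck repairs only MyISAM tables -- InnoDB relies on the crash recovery shown in the log above):

mysqlcheck -u root -p --all-databases --check
mysqlcheck -u root -p --all-databases --auto-repair
# if tables genuinely vanished rather than crashed, confirm whether their .frm/.MYD/.MYI
# files are still present in the datadir before assuming data loss:
ls -l /var/lib/mysql/<databasename>/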
I am just a beginner. For about a month the MySQL load on my server has been very high. I have checked the MySQL process list via cPanel and there is no single account generating the high MySQL load, and I have also checked for corrupt databases. Still the server load is very high and I cannot control it; even when I restart MySQL, the load goes back up after a minute.
Sometimes the spamd command also takes high CPU. What could that be coming from? Is spamd tied to one particular account?
I need to optimize MySQL and need help. Please don't tell me to hire an expert, just help me. Thanks.
Another question: how can I check which account is sending spam, and stop it?
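On the spam question, a few starting points, assuming a cPanel box running Exim (the log path is cPanel's default, and the cwd= entries are usually present in cPanel's Exim logging -- both are assumptions about this setup):

exim -bpc                               # how many messages are sitting in the queue
exim -bp | exiqsumm | sort -nr | head   # queued mail summarized per destination domain
# which directories are injecting mail through scripts, which usually points at the account:
grep -o 'cwd=[^ ]*' /var/log/exim_mainlog | sort | uniq -c | sort -nr | head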
I have a dual Xeon server that hosts around 15 small websites, but these days I see that the load generated by MySQL is very high; as you can see below, it consumes 32.6% CPU.
It was very fast until MySQL was upgraded to 5.0.45 (it was 4.x before). I can't even open my forum at busy times because it is so slow; I get "page not found" after a while. When it is quiet it is not too bad, but it was a lot faster with MySQL 4. I don't really want to downgrade, so please give me some ideas to fix this issue.
My server is very slow and heavily loaded. How can I fix this problem? My server specifications and installed software are listed below. Sometimes I also get "MySQL connect failed" and "Lost connection to MySQL server during query" errors. Thanks.
-- MYSQL PERFORMANCE TUNING PRIMER --
     - By: Matthew Montgomery -
MySQL Version 5.0.41-log i686
Uptime = 3 days 4 hrs 34 min 17 sec
Avg. qps = 92
Total Questions = 25406284
Threads Connected = 1
Server has been running for over 48hrs. It should be safe to follow these recommendations
To find out more information on how each of these runtime variables effects performance visit: .............
SLOW QUERIES
Current long_query_time = 5 sec.
You have 115 out of 25406302 that take longer than 5 sec. to complete
The slow query log is enabled.
Your long_query_time seems to be fine

WORKER THREADS
Current thread_cache_size = 8
Current threads_cached = 7
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine

MAX CONNECTIONS
Current max_connections = 500
Current threads_connected = 2
Historic max_used_connections = 110
The number of used connections is 22% of the configured maximum.
Your max_connections variable seems to be fine.

MEMORY USAGE
Max Memory Ever Allocated : 835 M
Configured Max Per-thread Buffers : 3 G
Configured Max Global Buffers : 46 M
Configured Max Memory Limit : 3 G
Total System Memory : 3.96 G
Max memory limit exceeds 85% of total system memory
KEY BUFFER
Current MyISAM index space = 35 M
Current key_buffer_size = 32 M
Key cache miss rate is 1 : 73658
Key buffer fill ratio = 34.00 %
Your key_buffer_size seems to be too high.
Perhaps you can use these resources elsewhere

QUERY CACHE
Query cache is enabled
Current query_cache_size = 4 M
Current query_cache_used = 1 M
Current query_cache_limit = 1 M
Current Query cache fill ratio = 29.83 %
MySQL won't cache query results that are larger than query_cache_limit in size

SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 1 M
Sort buffer seems to be fine

JOINS
Current join_buffer_size = 1.00 M
You have had 14127 queries where a join could not use an index properly
You should enable "log-queries-not-using-indexes"
Then look for non indexed joins in the slow query log.
If you are unable to optimize your queries you may want to increase your
join_buffer_size to accommodate larger joins in one pass.
Note! This script will still suggest raising the join_buffer_size when ANY joins not using indexes are found.
OPEN FILES LIMIT
Current open_files_limit = 2500 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine

TABLE CACHE
Current table_cache value = 256 tables
You have a total of 636 tables
You have 256 open tables.
Current table_cache hit rate is 1%, while 100% of your table cache is in use
You should probably increase your table_cache

TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 32 M
Of 2271787 temp tables, 3% were created on disk
Effective in-memory tmp_table_size is limited to max_heap_table_size.
Created disk tmp tables ratio seems fine

TABLE SCANS
Current read_buffer_size = 1 M
Current table scan ratio = 28 : 1
read_buffer_size seems to be fine

TABLE LOCKING
Current Lock Wait ratio = 1 : 112
You may benefit from selective use of InnoDB.
If you have long running SELECTs against MyISAM tables and perform
frequent updates consider setting 'low_priority_updates=1'
If you have a high concurrency of inserts on Dynamic row-length tables
consider setting 'concurrent_insert=2'.
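Pulling the report's own suggestions into one place, a hedged my.cnf fragment (values are illustrative starting points, not tuned numbers):

[mysqld]
table_cache = 1024              # report: table_cache hit rate 1%, 100% of the cache in use
log-queries-not-using-indexes   # report: 14127 joins could not use an index
max_connections = 200           # historic max_used_connections was only 110
# The per-thread buffers (sort_buffer_size, read_buffer_size, read_rnd_buffer_size,
# join_buffer_size) are multiplied by max_connections, which is what pushes the
# "Configured Max Memory Limit" to ~3 G on a 3.96 G box; lowering max_connections
# or those buffers is what brings that bound back under control.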
Quad-core server, 4 GB RAM. MySQL runs at 200-300% CPU at all times. The server does only about 5K uniques per day and runs Zen Cart.
I am at a loss; I have experience tracking down the reasons for this kind of thing, but this one has stumped me. So I was hoping to get new eyes on this and see if anyone has any ideas.
# The following directives should be commented out
# but included, as they are things that get added
# very frequently on tickets. These are more on a
# need-this-feature basis.
# The below 2 cannot be set on the fly. If the customer already has
# InnoDB tables and wants to change the size of the InnoDB tablespace
# and InnoDB logs, then:
# 1. Run a full backup with mysqldump
# 2. Stop MySQL
# 3. Move current ibdata and ib_logfiles out of /var/lib/mysql
# 4. Uncomment the below innodb_data_file_path and innodb_log_file_size
# 5. Start MySQL (it will recreate new InnoDB files)
# 6. Restore data from backup
#innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_file_size = 100M
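Spelled out as commands, those six steps look roughly like this; a sketch only, with the init script name and the /var/lib/mysql datadir assumed for a stock RHEL-style install:

mysqldump -u root -p --all-databases --routines > /root/all_databases.sql   # 1. full backup
/etc/init.d/mysqld stop                                                     # 2. stop MySQL
mkdir -p /root/innodb_old                                                   # 3. move old files aside
mv /var/lib/mysql/ibdata* /var/lib/mysql/ib_logfile* /root/innodb_old/
# 4. uncomment innodb_data_file_path and innodb_log_file_size in /etc/my.cnf
/etc/init.d/mysqld start                                                    # 5. MySQL recreates the InnoDB files
mysql -u root -p < /root/all_databases.sql                                  # 6. restore from backup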
Do you know what is going on? My server has a very strange problem: the load suddenly increases every 2 or 3 days, sometimes after just 1 day, but on the days when the server is fine the load is very low, around 0.1 to 0.4.
On a high-load day, though, the server load has reached up to 500.
When I try to find out what's wrong, all I can see is that there are too many HTTP connections. When I kill httpd with killall -KILL httpd, the server load suddenly drops and then stays stable.
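Next time it spikes, counting connections before killing httpd usually points at the culprit; a quick sketch with standard tools:

# established connections per remote IP on port 80
netstat -ntp | grep ':80 ' | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr | head
# overall TCP connection states (a huge SYN_RECV or TIME_WAIT count tells its own story)
netstat -nt | grep tcp | awk '{print $6}' | sort | uniq -c | sort -nr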
I have a couple of sites that are generating errors because the server load is too high and when I check service status I am seeing the following: Server Load 21.49 (8 cpus)
How can I tell if the problem is one of my sites in my VPS or a different site on a different VPS on the same server?
I know there are thousands of topics about this, and yes, I did use the search function and tried making some changes myself, but I didn't want to hijack someone else's topic, so I started my own.
My problem is that I run a torrent site which puts a lot of load on my server. I was just upgraded to a P4 2.8 GHz with 2 GB RAM, and I am running Fedora with WHM/cPanel.
I will do anything to the server to bring the load down; because of the load I have already turned off my 4 other big sites...
I've got a server whose load has suddenly exploded over the last three days. Watching top, I have some httpd processes that are using up all of the CPU and lasting for quite some time. How can I find out more about these hanging processes? I need to track this down as quickly as possible and find out what the cause is.
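A few ways to look inside those hung httpd processes, assuming mod_status can be enabled (nothing here is specific to this particular server):

apachectl fullstatus     # what each child is serving right now (needs mod_status + ExtendedStatus On)
strace -p 12345          # what one stuck process is blocked on (12345 = a PID taken from top)
lsof -p 12345            # which files and sockets that process has open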
I'd like to know whether upgrading to the new server I'll describe below will probably solve my problems. Whatever help you can provide would be greatly appreciated. Below are the details:
In the GMT evenings and nights my current server gets so loaded that every page load takes 10-30 seconds. Even the pure HTML pages are slow to load. It seems that after a certain threshold it suddenly becomes that much slower; there isn't much middle ground. I have high MaxClients and ServerLimit values now, and the error log no longer says they are exceeded, but that hasn't helped enough.
I have a high-traffic website using the latest version of Apache (2.2.x) with the prefork MPM; Apache is optimized, with PHP 5.2.5 and APC 3.0.15.
I get 160,000 - 210,000 pageloads per day. 32,000 - 45,000 visits per day.
Most of its pages are PHP, but they shouldn't be too CPU- or database-intensive. MySQL isn't used; I mostly use shared memory (PHP's shm functions) in place of a database. Two semaphores are quite heavily used, but that can't explain how a few more users would make the server serve pages so much slower.
Swap usage is practically 0, and CPU user % is about 1-2%, with CPU system % about the same even during peak times. However, the load average that top reports is 6-9.
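With user and system CPU that low but a load average of 6-9, the waiting processes are probably blocked on something other than CPU; a quick check for I/O wait with stock tools (iostat needs the sysstat package):

vmstat 1 5        # watch the 'b' (blocked) and 'wa' (I/O wait %) columns
iostat -x 1 5     # per-device utilisation, if sysstat is installed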
My current server specs: 1 GB RAM, Pentium D 3 GHz, CentOS 5 32-bit, fully updated.
I load all pictures and even the stylesheet from a secondary server by using href="$secondaryserverIP..." in the html code, so the main server practically just serves the pages.
My new server will have apache with the worker MPM and latest versions of every software. Also its specs are: 2 GB of RAM, Intel Dual Core Xeon 2.40GHz, CentOS 5.1 32bit fully updated.
I have a sophisticated netstat-based DDoS script that is an improved version of DDoS Deflate. While some of these slowdowns seem to have been caused by attacks that it was then able to defend me from, most of them were not. I am even protected against users who constantly hold 7+ connections to my site; if someone has far too many connections, the script doesn't even check whether they hold them constantly and just bans that user outright. It is probably banning a bunch of innocent proxy users too, but that is a small price to pay.
Alright, we just brought a new customer aboard with a ChatBox on their site, and it seems this chatbox is causing higher load; we went from 0.00 to 0.68-1.12. This happened on our old server before too, with another chatbox. They use vBulletin. Any ideas as to what could be causing this?
I have two quad-core processors and the load is around 15.
Could it be caused by the switch not letting traffic through properly?
If dmesg | grep eth shows 100 Full Duplex, is that normal, or should it be 1000 Full Duplex?
How can I make it 1000 Full Duplex on CentOS 5?
Quote:
0000:0a:02.0: eth0: (PCI Express:2.5GB/s:Width x4)
0000:0a:02.0: eth0: Intel(R) PRO/1000 Network Connection
0000:0a:02.0: eth0: MAC: 3, PHY: 5, PBA No: ffffff-0ff
0000:0a:02.0: eth1: (PCI Express:2.5GB/s:Width x4)
0000:0a:02.0: eth1: Intel(R) PRO/1000 Network Connection
0000:0a:02.0: eth1: MAC: 3, PHY: 5, PBA No: ffffff-0ff
ADDRCONF(NETDEV_UP): eth0: link is not ready
0000:0a:02.0: eth0: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth0: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
ADDRCONF(NETDEV_UP): eth1: link is not ready
0000:0a:02.0: eth1: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth1: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
eth0: no IPv6 routers present
eth1: no IPv6 routers present
ADDRCONF(NETDEV_UP): eth0: link is not ready
0000:0a:02.0: eth0: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth0: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
ADDRCONF(NETDEV_UP): eth1: link is not ready
0000:0a:02.0: eth1: Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
0000:0a:02.0: eth1: 10/100 speed: disabling TSO
ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
eth0: no IPv6 routers present
eth1: no IPv6 routers present
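On the duplex question above: a gigabit NIC against a gigabit switch should negotiate 1000 Full Duplex, so the 100 Mbps Full Duplex in that dmesg output usually means the switch port, the cable, or autonegotiation is the limit. A sketch for checking and pinning it on CentOS 5 (eth0 taken from the output above; note that 1000BASE-T requires autonegotiation to stay on):

ethtool eth0                                          # shows advertised and negotiated speed/duplex
ethtool -s eth0 speed 1000 duplex full autoneg on     # re-negotiate, advertising gigabit
# persist across reboots:
echo 'ETHTOOL_OPTS="speed 1000 duplex full autoneg on"' >> /etc/sysconfig/network-scripts/ifcfg-eth0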
I am currently hosting my website on one server with the specs:
2.8 GHz dual quad-core processor + 8 GB of RAM + two 500 GB hard drives, with a 50 Mbps unmetered bandwidth package.
My current problem lies in high server loads and very slow server performance throughout the day.
I am considering migrating over to The Planet onto server with the specs:
3.0 GHz dual quad-core + 18 GB of RAM + two 50 GB hard drives, with 2 TB of monthly bandwidth transfer.
In an attempt to get good bandwidth pricing and server performance, I plan on downgrading my current server with my current host to a lower-end server and keeping it only to host my VIDEO and MUSIC files on the 50 Mbps unmetered package. The Planet would then host my database and all other web-related files on the new server.
Is this a good idea as an attempt to save money on bandwidth costs and eliminate my server lag issues?
I was offered a setup of a separate web and database server at my current host but from what I have read, no one touches the performance and reliability The Planet has to offer.
I posted a topic a long time ago about my server load frequently being high.
I'm talking about something like this: Server Load 158.86, Memory Used 28.2%, Swap Used 99.57%.
[url]
The only way I've found to handle this is to spot the load early and kill all httpd processes. What I did was:
# killall -9 httpd
# killall -9 httpd
# killall -9 httpd
...repeated 30-40 times until no httpd PID is found and the server load is back to normal.
In a previous thread I tried updating MySQL and PHP, and it worked.
Right now I am experiencing high server load again...
I'm very sure it's caused by httpd, but I am still unable to find the real cause of the problem or which user account is the culprit causing this high load.
Can someone assist me by telling me where and how to begin?
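A couple of starting points for pinning the load on an account, assuming a cPanel layout (the domlogs path is cPanel's default and is an assumption here):

# which vhost access logs are biggest -- crude, but it quickly singles out one or two accounts
wc -l /usr/local/apache/domlogs/* 2>/dev/null | sort -nr | head
# cumulative CPU per user right now (with suPHP/CGI, PHP shows up under the account's own user)
ps -eo user,pcpu --sort=-pcpu | awk 'NR>1 {cpu[$1]+=$2} END {for (u in cpu) print cpu[u], u}' | sort -nr | head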