Basically MySQL is behaving very intermittently. Crashes were happening every 4 hours; I've brought them down to once every 8 or so hours, but mysql keeps dying.
The error log shows the same routine each time.
On MySQL start:
Quote:
091101 21:58:03 [Warning] option 'open_files_limit': unsigned value 120000 adjusted to 65535
091101 21:58:03 [Warning] Could not increase number of max_open_files to more than 65535 (request: 200110)
091101 21:58:03 [Note] /usr/sbin/mysqld: ready for connections.
Then we'll see errors due to crashed databases:
Quote:
091102 0:33:07 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './<nameofdatabase.frm>'
Following this, a heap of:
Quote:
091102 0:36:35 [ERROR] /usr/sbin/mysqld: Can't open file: '<another database here.frm>'
091102 0:36:36 [ERROR] /usr/sbin/mysqld: Sort aborted
091102 0:36:52 [ERROR] /usr/sbin/mysqld: Sort aborted
091102 0:43:00 [ERROR] Error in accept: Too many open files
I've been having a heck of a time with this one cPanel server and the open files limit. At first using open_files_limit did not work, so I changed it to open-files-limit; that seemed to work, but now it rejects the value and sets it down to 65535.
The system open files limit is 500000. If I try to set it to any value above 65535 in my.cnf, here is the usual error:
090630 9:32:07 [Warning] option 'open_files_limit': unsigned value 120510 adjusted to 65535
090630 9:32:07 [Warning] option 'open_files_limit': unsigned value 120510 adjusted to 65535
When I run something like the tuning-primer it shows:
Current open_files_limit = 120510 files
The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
But I'm not sure if it is just reading my.cnf or something. I am still getting complaints from users about lost connections and I see the errors in the error log. I've looked everywhere and can't seem to find a solution to this.
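For reference, the value the running server is actually using (as opposed to whatever tuning-primer reads out of my.cnf) can be checked directly, and the setting usually has to live in both the [mysqld] and [mysqld_safe] sections. A rough sketch, assuming a stock CentOS/cPanel layout; the paths and numbers are illustrative, not recommendations:

# What mysqld is actually running with right now:
mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit';"

# /etc/my.cnf -- set it for both the server and the wrapper script:
# [mysqld]
# open_files_limit = 65535
#
# [mysqld_safe]
# open-files-limit = 65535

# Restart so the new value is picked up (service name varies per box):
/etc/init.d/mysql restart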
My IIS W3C log files are under C:\Inetpub\w3LogFiles\W3SVC1.
It looks like this, with one log file per day:
C:\Inetpub\w3LogFiles\W3SVC1\ex081022.log
C:\Inetpub\w3LogFiles\W3SVC1\ex081023.log
C:\Inetpub\w3LogFiles\W3SVC1\ex081024.log
All my web sites are under c:\Inetpub\VS:
c:\Inetpub\VS\WebSite1.com
c:\Inetpub\VS\WebSite2.com
c:\Inetpub\VS\WebSite3.com
There is a webalizer directory under each domain folder.
Does anyone know how to set up Webalizer so it can run weekly to process the W3C log files and put the results into the corresponding domain's webalizer folder?
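For reference, one way to wire this up is a small batch file run weekly from Windows Task Scheduler, doing one webalizer pass per domain into that domain's webalizer folder. This is only a sketch: the webalizer.exe path, the assumption that the build in use can read IIS W3C logs (configured together with "Incremental yes" in each per-domain webalizer.conf), and the mapping of all three sites onto the W3SVC1 log folder are all guesses that need adjusting:

@echo off
rem Weekly Webalizer run -- schedule via Task Scheduler.
rem Assumes each domain has its own webalizer.conf and a webalizer\ output
rem folder under C:\Inetpub\VS\<domain>.

for %%D in (WebSite1.com WebSite2.com WebSite3.com) do (
  for %%F in (C:\Inetpub\w3LogFiles\W3SVC1\ex*.log) do (
    "C:\Tools\webalizer\webalizer.exe" -c "C:\Inetpub\VS\%%D\webalizer.conf" -n %%D -o "C:\Inetpub\VS\%%D\webalizer" "%%F"
  )
)

With incremental mode enabled in each config, re-running over already-processed log files only adds the new data, so a weekly schedule stays cheap.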
We have three virtual hosts on our Apache 2.2 installation on Windows Server 2003. For some reason, I'm unable to open the log files (error.log and each virtual host's specific log), even though I have full administrator rights. (The log folder gives full access to admins.) Every time I try to open a file, or even copy it to another location, it just says "Access Denied." I temporarily solved the issue for one of the logs by adding BufferedLogs On.
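For reference, that workaround is a single global directive of mod_log_config in httpd.conf (still flagged experimental in Apache 2.2); another approach some people use on Windows is piping each vhost's log through rotatelogs so that yesterday's file is already closed and can be copied freely. Both shown as a sketch with placeholder paths:

# httpd.conf, global context:
BufferedLogs On

# Or, per vhost, rotate daily so the closed files can be opened/copied:
CustomLog "|bin/rotatelogs.exe logs/site1-access.%Y-%m-%d.log 86400" common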
How do I increase the open files descriptor limit for Apache? In earlier versions of cPanel we had the option "Raise FD Size Limit to 16384", but the option no longer appears while rebuilding Apache. What is the way to do it and make the change permanent?
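With that rebuild option gone, one generic way (not cPanel-specific, so treat it as a sketch) is to raise the limit in whatever script starts Apache, since the children inherit it and it survives an httpd rebuild; file names and the value below are assumptions:

# Near the top of /etc/init.d/httpd (or whatever script calls apachectl),
# before Apache is started -- the value is only an example:
ulimit -n 16384

# The kernel-wide ceiling should stay comfortably above the per-process one:
cat /proc/sys/fs/file-max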
I have made a VPS on my own dedicated server which I use to run TorrentFlux for personal use. I am facing a few problems and don't know where to ask for help.
When I start more than about 12 transfers, I get errors in SSH (if I log in), or Apache restarts, killing all the transfers.
I have 2 GB RAM and a dual-core CPU.
The error via SSH is: sh: pipe error: Too many open files in system
I have attached an error log from Apache.
I am a noob with servers, so I have the Lxadmin control panel installed and the log is generated by it.
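For reference, "Too many open files in system" is the kernel-wide file table running out, not a per-process limit, so the usual first check is the fs.file-max sysctl. A sketch for a CentOS-style VPS; the number is only an example, and on some virtualisation platforms the host node's own limits also apply:

# Allocated / free / maximum file handles system-wide:
cat /proc/sys/fs/file-nr

# Raise the ceiling now and persist it across reboots:
sysctl -w fs.file-max=200000
echo 'fs.file-max = 200000' >> /etc/sysctl.conf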
I'd like to run PHP per user using spawn-fcgi (I want each user to have their own PID file for the PHP process). Is it possible to make this work? I think the way is to add something in the VirtualHost file for each user, something like: spawn-fcgi -f /usr/local/php4/bin/php -P /tmp/$user.pid -s /tmp/$user.sock
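For reference, that general direction can work, but the spawn-fcgi call normally lives in a startup script run as root (spawn-fcgi can drop privileges per user) rather than literally inside the VirtualHost; the vhost then just points at that user's socket. A sketch with made-up usernames, paths and child counts:

#!/bin/sh
# One PHP FastCGI pool, PID file and socket per user (placeholders throughout).
SPAWNFCGI=/usr/bin/spawn-fcgi
PHP=/usr/local/php4/bin/php

for user in alice bob; do
    $SPAWNFCGI -f $PHP \
               -s /tmp/$user.sock \
               -P /var/run/php-$user.pid \
               -u $user -g $user \
               -C 4
done

The web server side then references that user's socket, e.g. mod_fastcgi's FastCgiExternalServer with -socket /tmp/alice.sock inside alice's VirtualHost, though the exact directive depends on which FastCGI module is loaded.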
I have PHP running as FastCGI for nginx and I'm trying to set up monit to monitor the PHP process and restart it when it crashes. The problem is that I can't seem to figure out which pidfile I should have monit look at.
Here's part of my script that starts the spawn-fcgi process:
## ABSOLUTE path to the spawn-fcgi binary
SPAWNFCGI="/usr/local/bin/spawn-fcgi"
## ABSOLUTE path to the PHP binary
FCGIPROGRAM="/etc/lighttpd/php/bin/php"
FCGIPID="/var/run/php-fcgi.pid"
## TCP port to which to bind on localhost
FCGIPORT="1026"
## number of PHP children to spawn
PHP_FCGI_CHILDREN=8
## maximum number of requests a single PHP process can serve before it is restarted
PHP_FCGI_MAX_REQUESTS=1500
I added the FCGIPID (saw it in an example), but it doesn't seem to do anything. I tried creating a pid file but the pid in it doesn't get updated when I start/stop the script.
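For reference, spawn-fcgi only writes a PID file when it is told to with -P, so $FCGIPID has to be passed on the spawn line itself; monit can then watch that file. A sketch, where the /etc/init.d/php-fcgi path and the port check are assumptions to adapt:

# In the start script -- actually hand the PID path to spawn-fcgi:
$SPAWNFCGI -f $FCGIPROGRAM -a 127.0.0.1 -p $FCGIPORT \
           -P $FCGIPID -C $PHP_FCGI_CHILDREN

# monit configuration (monitrc, or a file under its include directory):
check process php-fcgi with pidfile /var/run/php-fcgi.pid
    start program = "/etc/init.d/php-fcgi start"
    stop program  = "/etc/init.d/php-fcgi stop"
    if failed host 127.0.0.1 port 1026 then restart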
I have just moved my sites from shared host to a vps. After several initial problems (cpanel issue, config not set up correctly, memory spikes and sites down every morning due to backup and stats) I thought everything was going to be ok. hmm.
My server load starts off fine first thing (less than 1), then creeps up by nearly 1 per hour; i.e. it has been 3.5 hours now and the load is at 2.57. Sites are OK at the moment, but yesterday the load got up to nearly 7 and sites were extremely slow.
CPU usage is HIGH for one site and goes up very quickly throughout the day. Yesterday it reached well over 90%. First thing today it was already amber and showing 14%. It is now 70.54% and shows below it:
Top Process %CPU 80.2 spamd child
Top Process %CPU 79.8 spamd child
Top Process %CPU 79.4 spamd child
I have a ticket open and they (Liquid Web) are not sure what the deal is, but are apparently monitoring it to see if they can isolate the cause of the problem. But that was a couple of days ago and now the ticket is due for closure.
I telephoned them (expensive as I'm in the UK!) and raised my concerns, but was just told that everything looked ok on the vps, cpu usage was in fact not high and to ignore the warnings.
So, I am posting here to see if anybody can help me get to the root of this.
I understand that spamd child is something to do with email / SpamAssassin?
My problem is that this is the first VPS I have had, and I don't have a clue where to go or what to do now.
Does anybody understand spamd child well enough to explain it to a poor dumb blonde, and how to fix it?
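For reference, those "spamd child" processes are SpamAssassin's scanning children, forked by the spamd daemon whenever Exim hands a message over for scanning, so sustained high CPU there usually just means a lot of mail (or spam) being scanned. The number and lifetime of children can be capped with the flags spamd is started with; the file below is where they live on a plain CentOS box and is only a sketch, since cPanel manages its own spamd startup and keeps these settings elsewhere:

# /etc/sysconfig/spamassassin on plain CentOS (values are examples only):
SPAMDOPTIONS="-d -c -m 5 --max-conn-per-child=20"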
Recently Apache was recompiled with eAccelerator; after that, the errors below started appearing in the log, and Apache also crashes when it hits high traffic.
[notice] child pid 13013 exit signal Segmentation fault (11)
[notice] child pid 13054 exit signal Segmentation fault (11)
Because of this problem I ran /scripts/upcp --force and recompiled Apache with eAccelerator again.
After that the segmentation fault errors stopped, but instead the following error started appearing. I then recompiled again without eAccelerator, and the error below is still being generated.
Each one takes up about 4% of the available RAM, and when the RAM is gone the server dies (it doesn't have a swap file; half the time you can't even log in to it) and you have to reboot Apache.
I thought of limiting maxchilds, but would that break something else?
Should I just make a swap file? Will that defeat the point of creating child processes?
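For reference, with the prefork MPM that "maxchilds" knob is MaxClients, and capping it so that the number of children times roughly 4% of RAM stays under physical memory is the standard way to keep the box from dying; MaxRequestsPerChild recycles children so a slow leak can't grow forever. The trade-off is that requests queue once every slot is busy, which is normally far better than swapping or an unreachable server. A sketch for Apache 2.2 with illustrative numbers only:

<IfModule prefork.c>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    # at roughly 4% of RAM per child, keep the hard cap well under 25
    MaxClients           20
    # recycle children periodically so memory leaks stay bounded
    MaxRequestsPerChild 500
</IfModule>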
I have seen one VPS provider having very poor ping results on a few online ping test sites, and their cluster has very slow-loading pages as well.
One of my friends has a package with them, and the ping results are very poor for him as well. I just ran a traceroute and found he is on some node1.vpsprovider.com.
One more important similarity I noticed: the VPS provider's own emails weren't being delivered properly to Yahoo Mail, and my friend's emails sent from the server to Yahoo weren't delivered either.
So if the VPS provider has poor connectivity, maybe due to firewall or internal settings, would the systems under the node also be affected?
I have no problems with my host, so nothing to worry about for me, but I need to help him as he is just starting out with a cheaper VPS.
I'm running CentOS with Parallels Plesk and the bundled Parallels Premium Antivirus (Dr.Web). After the latest yum updates, DrWeb continuously seems to crash and be restarted by the Parallels watchdog. By default there were no logs for DrWeb, but when I enable logging to a file it gets spammed continuously with the following error:
Cannot create pipe for communication with scanning childs (Too many open files)
The DrWeb process also runs at 99% CPU for long periods. This totally fills the disk with logs, so I've now disabled logging again and DrWeb is back to continuously being restarted by the watchdog.
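Before touching any limits it may be worth seeing what the scanner is actually holding open, since "Too many open files" here can just as easily be a descriptor leak in the updated DrWeb build as a limit that is genuinely too low. A small diagnostic sketch; the process name drwebd is an assumption and may differ on a Plesk box:

# First PID of the scanner daemon:
PID=$(pidof drwebd | awk '{print $1}')

# How many descriptors it currently holds, and what the latest ones point at:
ls /proc/$PID/fd | wc -l
lsof -p $PID | tail -n 30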
I couldn't keep my mouth shut (technically, fingers). A customer wanted to upgrade servers and he needed a way to move the data across. Since I don't allow hard drives to be swapped, they have to do it manually all by themselves. I generally allow up to 4 days for them to transfer data and make DNS changes, etc. But this time, I offered help! I agreed to move the data (darn me) and it just came out of me, involuntarily.
God knows what just happened... but in a positive way, customer is extremely happy!
So...
Both servers are on cPanel - with root access (duh)
200-odd files which total 25 GB
1 database about 100 MB in size (no biggie)
I was planning on using one of my Windows 2003 servers (via remote desktop) to download the 25 GB and upload the 25 GB, but that sounds like a waste of resources and time.
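For reference, rather than round-tripping 25 GB through a Windows box over Remote Desktop, the usual shortcut is to push the data directly between the two cPanel servers over SSH: rsync for the files (it can resume if the link drops) and mysqldump piped straight into the new server for the database. A sketch with placeholder hostname, account name and database name; WHM's own "Copy an account from another server" transfer tool is the other obvious route, since it moves the whole account in one step:

# Run on the old server as root; names below are placeholders.

# 1) Files: push the account's home directory straight across (resumable).
rsync -avz -e ssh /home/account/ root@newserver:/home/account/

# 2) Database: dump and load in a single pass over SSH.
mysqldump dbname | ssh root@newserver 'mysql dbname'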
I'd like to put up a basic question here which I hope some of you will have the goodwill to answer, even though it might touch on some business secrets.
We have been a gameserver host for around ten years and have also been running vserver products for over two years now. Having rented a few racks in Europe for some time, we're a bit puzzled about how rootserver companies deal with the initial hardware costs for every new customer.
Rack space and, especially these days, power costs are huge cash eaters here in Europe. Dedicated rootservers consume a huge amount of space and power per customer. The initial hardware costs for every new rootserver customer might be covered after 4-6 months (if the machine has to be bought new); adding the bandwidth and power costs, it might take up to 8-9 months until any profit comes in.
Is this the norm in the rootserver market (waiting 9 months for any profit, or counting only on the profit from the second customer using the older hardware), or are there better ways to handle those "initial" costs or keep them affordably low?