On my server, I have one user who creates heavy load:
user 29508 22.0 0.0 0 0 ? Z 15:18 0:00 [php] <defunct>
That user has several sites added as addon domains in cPanel. How can I find which site is generating that high load? Also, sometimes all I see in the process list is "php index.php", and that doesn't help me very much.
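One thing I can think of (a rough sketch; "baduser" stands for the actual account name) is to check the working directory of each live PHP process, since that usually points at the addon domain's document root:

# print the working directory of every php process owned by the user
# (defunct/zombie entries have no readable cwd, but live ones do)
for pid in $(pgrep -u baduser php); do
    echo "$pid -> $(readlink /proc/$pid/cwd)"
done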
Recently I switched to suPHP with FCGI on my cPanel server. When I apply RLimitCPU to each vhost, I see that scripts which could potentially overload the server get killed. I think this is a good way to control load on the server.
But each time a PHP process is killed, Apache creates a core dump file under the user's directory, and those files are large enough to fill the user's disk quota.
How can I stop Apache from creating core dump files?
I have tried:
- set "ulimit -c 0" on users and root - set "/proc/sys/fs/suid_dumpable" to 0 - set /etc/security/limits.conf with 0 limit for core parameter - set CoreDumpDirectory to specific directory ...
There are two vBulletin forums hosted on my colocated Q6600 quad-core, 4GB RAM server, and this [php] <defunct> process keeps spawning constantly and hogging CPU resources.
Each process lasts a matter of seconds and dies, but new ones keep spawning constantly. Here's a screenshot:
[url]
I've tried restarting Apache, but it doesn't solve it.
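Since zombies are only reaped by their parent process, I guess the parent is the thing to inspect. Something like this (a sketch) should list the zombies and their parent PIDs:

# list zombie processes together with their parents
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'
# then check what a given parent actually is, e.g. for PPID 1234:
ps -p 1234 -o pid,user,cmd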
I've had a shared hosting account for several years and never had this problem before. Since yesterday I occasionally get 500 internal server errors on all my websites simultaneously due to a large number of processes on my account. When I log into cPanel and click on View Processes I only see 1 or 2 at a time, but support tells me that there are actually more than 25 processes and this is not allowed.
Apparently they are defunct PHP processes (zombies?) that are waiting on their 'parent' processes to clean them up and for some reason my account is accumulating a lot of these.
Support is not able to tell me which of my PHP scripts is causing this. All they can give me is something like this:
I have several websites on the account that run PHP I wrote myself. I make especially heavy use of MySQL functions, simplexml_load_file, and reading/writing cache files. I don't know where to start looking to find the PHP that is causing these defunct processes.
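One thing I could try (assuming the host allows cron jobs on the account) is to snapshot my own process table every minute, so I can at least see the parent PID and age of each defunct entry and correlate it with what my scripts were doing at the time:

# cron-able snapshot of my processes, appended to a log file
ps -u "$(id -un)" -o pid,ppid,stat,etime,args >> $HOME/proc-snapshots.log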
LCMlinux ~> uname -a
Linux LCMlinux 3.2.29-smp #2 SMP Mon Sep 17 13:16:43 CDT 2012 i686
LCMlinux ~> httpd -v
Server version: Apache/2.4.3 (Unix)
Server built:   Aug 23 2012 11:07:26
LCMlinux ~>
We are using this both for the Trac issue-tracking application and for a small, simple internal mirror web site. Trac is working perfectly; the web site works only if exact URLs are provided (as in <a href=...> links).
Is there a way to disable just the Generate Full Backup link in the Backup section of cPanel, without disabling the Backup section completely?
Meaning, I want to disable that one function for a client, but I don't want to stop them from being able to download their home directory, MySQL databases, etc.
I am using the Plesk firewall and trying to set up an SSH rule that allows connections only from my IP and denies everyone else. In previous versions this worked fine: I would add an IP, select "Allow from selected sources, deny from others", and the icon next to the rule would turn orange with the lines
allow incoming from xxx.xxx.xxx.xx
deny incoming from all others
However, this no longer works: the "deny from all others" part is not appearing and is not being generated in iptables by Plesk.
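For reference, what I expect Plesk to generate is essentially this pair of iptables rules (a sketch; xxx.xxx.xxx.xx stands for my address):

iptables -A INPUT -p tcp --dport 22 -s xxx.xxx.xxx.xx -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP

In the current version only the first rule shows up, so SSH stays open to everyone.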
So I've got a problem where a small percentage of incoming requests are resulting in "400 bad request" errors and I could really use some input. At first I thought they were just caused by malicious spiders, scrapers, etc. but they seem to be legitimate requests.
I'm running Apache 2.2.15 and mod_perl2.
The first thing I did was turn on mod_logio and interestingly enough, for every request where this happens the request headers are between 8000-9000 bytes, whereas with most requests it's under 1000. Hmm.
There are a lot of cookies being set, and it's happening across all browsers and operating systems, so I assumed it had to be related to bad or "corrupted" cookies somehow - but it's not.
I added "%{Cookie}i" to my LogFormat directive hoping that would provide some clues, but as it turns out half the time the 400 error is returned the client doesn't even have a cookie. Darn.
Next I fired up mod_log_forensic hoping to be able to see ALL the request headers, but as luck would have it nothing is logged when it happens. I guess Apache is returning the 400 error before the forensic module gets to do its logging?
By the way, when this happens I see this in the error log:
request failed: error reading the headers
To me this says Apache doesn't like something about the raw incoming request, rather than a problem with our rewriting, etc. Or am I misunderstanding the error?
I'm at a loss where to go from here. Is there some other way that I can easily see all the request headers? I feel like that's the only thing that will possibly provide a clue as to what's going on.
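One idea (a sketch, assuming plain HTTP on port 80 and tcpdump available on the server) is to capture the raw requests on the wire, before Apache has a chance to reject them:

# capture full packets on port 80 to a file...
tcpdump -i any -s 0 -w /tmp/http.pcap 'tcp port 80'
# ...then print the payloads as ASCII and look for the oversized headers
tcpdump -r /tmp/http.pcap -A | less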
Rapidly growing error logs showing the same message:
$ug-non-zts-20020429/ffmpeg.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20020429//usr/local/lib/php/extensions/no-debug-non-zts-20020429/ffmpeg.so: cannot open shared object file: No such file or directory in Unknown on line 0
root@server [~]# ls /usr/local/lib/php/extensions/no-debug-non-zts-20020429
./  ../  eaccelerator.so*
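To find where the stale reference comes from, a simple grep should do (a sketch; /usr/local/lib/php.ini is the stock cPanel location, adjust if yours differs):

# find the line still trying to load the missing extension
grep -n 'ffmpeg' /usr/local/lib/php.ini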
I manage a Linux Apache web server with a few WordPress blogs, and from time to time I see someone inject a malicious .php file into the wp-content/uploads/2014/10/ directory.
I think it's some bad plugin or theme, but more than one blog is affected, and I upgrade and update WP, yet it keeps happening.
How can I set up some monitor to tell me which PHP file (or even which line in a PHP file) injected that malicious .php? I have Linux root access, so I can set up anything.
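Since I have root, one option would be the kernel audit subsystem: watch the uploads tree for writes and record which process did them (a sketch; /home/site/public_html is a placeholder for the real docroot):

# audit every write/attribute change under the uploads directory
auditctl -w /home/site/public_html/wp-content/uploads -p wa -k wp-uploads
# later, show who/what created the rogue file, in human-readable form
ausearch -k wp-uploads -i

The audit record gives the writing process and a timestamp, which can then be lined up against the Apache access log to find the exact PHP entry point, and from there the plugin or theme responsible.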
A client wrote: "I'd like you to move my web sites that I have in a FTD file from where it is now to GoDaddy. I have the accounts already set up; all that has to happen is the move."
I must confess that I have no clue what he means by an FTD file. What am I missing?
BTW, I have Googled it with no results that make sense to me.
I have a web site backup file (let's call it 'filename.tar.tgz') that was generated from a home-grown web hosting panel and is ~1.6GB in size. It is resident on a WinXP computer, but I also have it copied to a *nix machine.
I have attempted to restore the backup using the normal restore process provided by the site admin panel, but it will never complete because of the size of the file. So, I need to retrieve the folders/files from within the 'filename.tar.tgz' file so that I can re-upload the files/folders through normal FTP.
I have had no success extracting the files/folders when using tar, gtar, gunzip, etc on the Linux box. 7Zip won't open it either. The Linux terminal reports a 'stdin: not in gzip format' error when trying to decompress/extract the files.
What I need is the exact syntax (with any switches) that I can use in my Linux Terminal Shell for extracting this archive so that I can access the files within.
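A sketch of the diagnose-first approach ('filename.tar.tgz' as above): the "not in gzip format" error usually means the file isn't really gzipped, either because it's a plain tar despite the name or because an ASCII-mode FTP transfer mangled it.

# check what the archive actually is -- the extension may lie
file filename.tar.tgz

# if it reports a plain POSIX tar archive, extract without -z
tar -xvf filename.tar.tgz

# if it really is gzip-compressed
tar -xzvf filename.tar.tgz

If 'file' reports only "data", the copy is likely corrupt and should be re-transferred in binary mode.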
If a hosting company offers both PayPal and card-on-file payment (card on file with auto-subscription/recurring billing), which payment method do you prefer to use as a customer?
I've recently been trying to move an account between servers, but the backup file is always incomplete. I was told it's possible there are too many files.
I decided to tar some of them and move them manually, but I cannot access the tar file. I have already changed all the permissions (644), the owner, and the group, but I still get a 403 Forbidden error. Is it possible that the file is too big (9 GB), and if so, how do I change the file size limit?
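If a size cap is the problem, one workaround (a sketch; 'backup.tar' stands for the real file name) is to split the archive into smaller pieces and reassemble it on the other side:

# split the archive into 1 GB pieces
split -b 1000m backup.tar backup.tar.part.
# after moving all the pieces, put them back together
cat backup.tar.part.* > backup.tar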