My server has been crashing quite a lot lately. It does have some high-traffic sites on there, but it has never really been this bad before. Today I noticed these in cPanel. What are they, and is there any way I can control them?
I was on a 100Mbps shared port with a dedicated server from FDC, and I use it only for downloads. The downloads took a little long to start, but once they did, they were as fast as they could be.
Thinking this was definitely a shared-bandwidth problem, I ordered a dedicated 25Mbps port from FDC to fix it, but it seems to have gotten worse.
The website uses around 10-15Mbps, but it takes forever for the server to respond. Even logging into cPanel takes around 40-50 seconds for the dialog box to appear, but everything is fast once I'm logged into my cPanel.
I also hired a sysadmin to look into the server, and he says everything is fine. I don't know what to do. I could increase my port bandwidth, but it would be disappointing if I did and it turned out to be just a waste of bandwidth (and money!)
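(Not from the original post, just a rough first-pass sketch of narrowing down that kind of delay, assuming shell access to the box; example.com and the resolver address are placeholders. Connections that are slow to start but fast once running often point at DNS or load rather than the port itself.)
Code:
# load averages and memory pressure
uptime
free -m
# a slow or broken resolver can stall logins and control-panel access
cat /etc/resolv.conf
time dig +short example.com
time dig +short example.com @8.8.8.8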
Suddenly some of the sites on my server are throwing session errors... then after a while everything is OK, and then the same problem happens again. The problem still continues. What might be the cause? A PHP update? A MySQL update? Has anyone experienced this?
I haven't made any changes. My server is Linux, running CentOS 4.7.
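(Not part of the post, just a hedged sketch of a common first check for intermittent session errors: whether PHP's session.save_path is full or unwritable. The paths below are common defaults, not confirmed for this server, and the CLI php may read a different ini than the web server.)
Code:
# where PHP stores its session files (often /tmp or /var/lib/php/session)
php -i | grep -i session.save_path
# is that partition full, and is it writable by the web server user?
df -h /tmp
ls -ld /tmp /var/lib/php/session 2>/dev/null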
For some reason, one of the servers can't connect to my mail server. Whenever a user tries to send email from that server to my server, the message won't go through and I see the following in the logs (/var/log/exim/mainlog):
2007-02-13 23:56:06 SMTP connection from (***.ca) [***.***.***.***] lost while reading message data (header)
This problem occurs only with this ***.ca mail server (as far as I know).
In fact, when I try the dnsreport.com tool on any of my server's domains, I get this error message:
"ERROR: I could not complete a connection to any of your mailservers!
******.com: Timed out [Last data sent: RCPT TO: ]
If this is a timeout problem, note that the DNS report only waits about 40 seconds for responses, so your mail *may* work fine in this case but you will need to use testing tools specifically designed for such situations to be certain.
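(Not from the original report; a manual SMTP session from an outside machine can show exactly where the dialogue stalls. mail.example.com and the addresses below are placeholders.)
Code:
# connect to the MX host on port 25 and walk through the SMTP dialogue by hand
telnet mail.example.com 25
# then type, one line at a time, and watch where it stalls or disconnects:
#   EHLO test.example.org
#   MAIL FROM:<test@example.org>
#   RCPT TO:<user@example.com>
#   DATA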
I am facing a slight problem with one of my VPSes. It has happened before as well, but it resolved itself automatically.
Please see this screenshot: [url]
I know that the server load is not high enough to cause this much swap usage. I think this is happening because processes are not getting killed.
UPDATE: here is a screenshot of my other server with the same provider, which is not really overloaded but I think is facing the same problem of processes not getting killed: [url]
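(Just a sketch, not from the post: sorting processes by resident memory and looking for defunct processes is a quick way to see what is actually sitting in RAM and swap.)
Code:
# biggest memory consumers by resident set size
ps aux --sort=-rss | head -n 15
# any defunct (zombie) processes hanging around?
ps aux | awk '$8 ~ /Z/'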
Having learned that some hosts seem to be tightening their shared hosting specs, I'm wondering what a 'simultaneous process' is, as in this clip: 'number of simultaneous processes should not exceed 5'.
Is each part - for example, graphics and includes - of an individual webpage a 'process'?
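(Hosts generally mean operating-system processes owned by the account - PHP/CGI children, cron jobs, shells - rather than the individual parts of a page. A hedged sketch of counting them, with USERNAME as a placeholder:)
Code:
# count processes currently owned by a given account
ps -u USERNAME --no-headers | wc -l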
I have a server whose load shows at 25-40 (once it was even 53!), running like that for hours. The server has 4 CPUs, and yet the sites on the server seem to run fine when I check them. What I'm wondering is: what exactly is load in this context, and how can it run that high for so long without the server crashing?
According to top, the load is caused by httpd processes running under user 'nobody', that often take up double digit CPU percentage.
Does Apache always run under 'nobody'?
Is there any way to trace an httpd process, i.e. which account it belongs to, or which physical script or URL is calling it?
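(Not an answer from the thread, but two generic ways people trace this: the Apache status page, which with ExtendedStatus on shows the vhost and URL each child is serving, and inspecting a busy PID directly. A rough sketch, assuming mod_status is available; the config lines and <PID> are illustrative only.)
Code:
# in httpd.conf (illustrative; restrict access to the status page appropriately):
#   ExtendedStatus On
#   <Location /server-status>
#       SetHandler server-status
#   </Location>
apachectl fullstatus        # or browse to http://yourserver/server-status
# for one busy child, see which files, sockets and scripts it has open
lsof -p <PID>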
And for top itself, the TIME field on one server of mine is in the format xx:xx (e.g. 3:25); on another it's TIME+ and in the format xx:xx.xx (e.g. 30:02.77). What exactly does this mean? I would assume it's minutes:seconds and minutes:seconds.hundredths, but while watching top it doesn't seem to correlate with that.
Each one takes up about 4% of the available RAM, and when the RAM is gone, the server dies (it doesn't have a swap file; half the time you can't even log in to it), and you have to restart Apache.
I thought of limiting maxchilds, but would that break something else?
Should I just make a swap file? Will that defeat the point of creating child processes?
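(Just a back-of-the-envelope sketch using the ~4% figure above: if each child uses roughly 4% of RAM, about 25 children exhaust it, so the usual approach is to cap the pool below that with some headroom and recycle children, rather than lean on swap. The values below are illustrative only, for the prefork MPM.)
Code:
# httpd.conf, prefork MPM (illustrative values)
<IfModule prefork.c>
    ServerLimit          20
    MaxClients           20    # about 20 x 4% of RAM, leaving headroom
    MaxRequestsPerChild  2000  # recycle children to limit memory growth
</IfModule>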
I update sources.list on server 1 to mirrors for the new Debian 4 (etch). I run apt-get update and apt-get dist-upgrade. A whole bunch of things get updated (it was a long time ago that I last did this anyway). After some trouble with /boot/grub/menu.lst the server boots OK, and everything is well. This server used to have loads of 15-25 at peak times, but after the update it's running very smoothly with loads of 2-3 at the same peak times. I don't know why exactly, as I noticed updates to the OS, the kernel (from 2.6.8x to 2.6.18), apache2, PHP (4.4.4-8+etch1), and I also needed to update eAccelerator from 0.94 to 0.95.
A few days later I update server 2. Everything seems to go the same, although the kernel version stays at 2.6.8-3-686. I don't think the kernel version at the start was exactly the same as on server 1. But the new PHP version is the same as on server 1, and everything else looks the same too.
But when peak times come up, this server starts to have trouble. It quickly rises to a total of 200-300 processes, while server 1 always stays stable at 60-70.
Server 2 also reacts slowly if I click somewhere on the site; it takes 5-10 seconds to show a new page. However, the load stays pretty low at 1-2. I see no big CPU usage and no big memory usage either. I have the impression that server 2 is somehow wasting a lot of Apache processes and making things hard for itself without a real reason.
When I check the separate MySQL database server, I also notice a lot of processes.
Around 200-250, whereas it used to be 40-60. Sometimes this adds up so badly that all web servers are blocked because MySQL has too many processes. When I check the MySQL connections, I see a few dozen entries like 'unauthen ip:port Connect login' just hanging. All of them have the IP of web server 2. Those extra Apache processes are somehow hanging on to the MySQL server without really doing anything.
I don't know what is happening, but this server is underperforming very badly now. I managed to limit the problem by drastically lowering ServerLimit and MaxClients on web server 2, but this is no real solution. The server is still slow; at least now it's not bringing down the others.
My question: what should I check now? I noticed a different structure in the conf files in Debian etch; maybe something new has a bad influence on my old conf files? Is there something wrong with the combination of kernel + PHP version? I have no idea, please point me in the right direction so I can learn from this.
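(Not a diagnosis, just two things often compared in this situation, as a hedged sketch: the effective prefork limits on both web servers, since etch reorganised the Apache config under /etc/apache2/, and whether the MySQL box does a reverse-DNS lookup for every new connection, which is one known way 'unauthenticated' connection entries pile up. skip-name-resolve is a real MySQL option, but enabling it means grants must use IP addresses.)
Code:
# compare the effective prefork limits on web server 1 and web server 2
grep -R -A6 "mpm_prefork" /etc/apache2/
# on the MySQL server: is reverse DNS disabled for incoming connections?
grep -i "skip-name-resolve" /etc/mysql/my.cnf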
I posted a topic a long time ago about my server load frequently being high.
I'm talking about something like this: Server Load 158.86, Memory Used 28.2%, Swap Used 99.57%.
[url]
The only way I've found to deal with this is to catch the high load early and kill all httpd processes. What I did was:
Code:
# killall -9 httpd
(repeated 30-40 times until no httpd processes are found and the server load is back to normal)
In a previous thread I tried updating MySQL and PHP, and that worked.
Right now I am experiencing high server load again...
I'm very sure it's caused by httpd, but I am still unable to find the real cause of the problem or which user account is the culprit causing this high load.
Can someone assist me by telling me where and how to begin?
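(Not from the post; a couple of generic first steps when httpd is eating the box: see which remote IPs hold the most connections and which domain's access log is largest. The domlogs path is the classic cPanel location and is an assumption here.)
Code:
# which remote IPs hold the most connections right now?
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
# which domain's access log is biggest? (cPanel-style path assumed)
ls -lS /usr/local/apache/domlogs | head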
I'm not sure I understand the server-status page enough to know if this is a problem or not, but I have several processes that seem to run forever, or until I restart Apache. e.g.
Code:
13-1 21045 0/697/4264 W 59.45 19641 0 0.0 43.28 274.97 66.249.66.133 www.example.com GET /wp/2005/01/ HTTP/1.1
19-1 408 1/834/1831 C 83.52 32463 0 14.8 149.66 263.48 66.249.66.133 www.example.com GET /wp/ HTTP/1.1
30-1 14416 0/430/431 W 35.19 13347 0 0.0 37.42 37.44 66.249.66.133 www.example.com GET /wp/category/issues/ HTTP/1.1
They are almost always on a single domain (there are about 100 on the server) that's a WordPress site. These processes are also almost always from a search engine.
On the rare occasion I see them running on other domains on the same server, those are always WordPress sites too.
The longer the processes run, the more processor/memory they use, the more they slow the server down.
It seems to have started just in the past few weeks; I've had the site there for a couple of years.
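(A hedged sketch only: since the long-running slots above are all crawler requests against one WordPress site, counting that crawler's hits in the site's access log shows how hard it is hitting, and a Crawl-delay in robots.txt slows down the bots that honour it. The log path is a placeholder; the IP is taken from the status lines above.)
Code:
# how many hits has this crawler made against the site today?
grep -c "66.249.66.133" /path/to/www.example.com-access_log
# robots.txt in the site's document root (Crawl-delay is ignored by some crawlers):
#   User-agent: *
#   Crawl-delay: 10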
Our server is running Plesk 11.0.9 and CentOS 5.7; it has a Q8200 CPU @ 2.33GHz and 2GB of RAM. There are just two websites on the server now, plus a couple of redirect/forwarding domains, although lots of domains are still on the server but turned off in Plesk. Both websites are osCommerce sites, and I just need to keep them going until the end of the year, when we will switch to our new Joomla-based website.
We have seen an increasing number of server crashes, and after various checks of the logs, fitting a new BIOS battery, a hardware check by EasySpace (who host the server), and installing ClamAV, LMD and RKHunter (which did find some trojans and suspect software), I have traced it down to some external HTTP activity that is taking all of my CPU time and RAM. Here is a screen capture of the htop listing; when I killed these processes, the CPU and RAM went back to normal. The problem is that I usually have to restart the httpd service, and sometimes things get so bad that the server crashes and I have to request a power cycle.
It takes up pretty much 90-95% of the CPU and memory at times if I do not kill the process. But even after I kill the process, it comes back and immediately hogs the CPU again, pushing the load to 8.00 or higher (I have 8 CPUs).
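(Not part of the original post: before killing the next runaway httpd, it can help to capture what it is actually serving, then match the timestamps against that vhost's access and error logs. <PID> is a placeholder.)
Code:
# which files, sockets and scripts does the runaway process have open?
lsof -p <PID>
# which system calls is it spinning in? (Ctrl-C to stop tracing)
strace -p <PID>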
I installed DirectAdmin on my 192MB RAM VPS, and right now my VPS is at 270MB (I'm going into burst). I found that if I stop named, it goes down to less than 70MB. Why is BIND taking up so much RAM?
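(Not from the post, but on small VPSes BIND's resolver cache is often capped, or recursion turned off if the box only serves its own zones. A hedged named.conf sketch; the values are illustrative, not a recommendation for this particular server.)
Code:
// options block in named.conf (illustrative)
options {
    max-cache-size 16M;   // cap the resolver cache
    recursion no;         // only if the server is authoritative-only
};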
I'm currently considering a host change, so I'm putting out feelers to potential candidates. As always, I'm putting on my difficult customer mask (turning down my rationality and patience module) to find out if the host can actually handle real-life customers (one of the things I find most important and that I don't want to find out once the server is already on fire). Most companies pass the test very well. Here's how LiquidWeb handles new customers:
Quote:
Originally Posted by yosmc
Hi guys,
I'm looking to switch hosts in the next couple of months. I'd probably wait until January, but since the recent experience has been a bit bumpy with our current host, I'd like to get some basic info now so we can move more quickly if circumstances force us to do so.
MY SITUATION: I'm a do-it-yourself webmaster who has been managing his own server for years. It's become a curse though because managing your own server means you have to be online virtually every day. I'm looking for a solution that will allow me to be offline for several weeks (a REAL vacation, something I haven't had in a decade), knowing that whatever major issue there is with my sites, someone will take action and make sure the service stays available.
- Last year I switched to my first managed solution, but as it turns out, they're not doing what I need. Yesterday, for example, I came home to find my sites offline. The site was unavailable for over 40 minutes, and after asking about it I learned that they didn't take action because the server wasn't quite dead yet, only really, really, really slow. To me, this is hairsplitting; the only thing that matters is whether or not my site is available to visitors.
- And once the service has been restored, I would also expect a managed host to figure out what caused the issue and to propose a solution (or just implement one, e.g. change the MySQL configuration) so that a similar issue won't happen again under the same circumstances.
- If my sites are unavailable due to a fatal error (e.g. a table needing repairs, or max users reached, "can't connect" or whatever else) I would also expect my managed host to catch it on their own, restore things to normality, and possibly think of ways to keep similar issues from happening in the future.
- If my site suffers a DOS attack, I would expect a managed host to think about how my site can be protected.
And so on.
- My largest database tables are 2.5 GB in size, but the /tmp disk my host configured has only 600 MB available, so every time I perform a major operation (even if it's about slimming a table down and running an OPTIMIZE afterwards) everything goes down the crapper (/tmp 100% full and the load average shooting up to 200). Seems like the fact that /tmp is 100% full doesn't even trigger any alarms with my host; they send the alert to me and expect me to contact them and ask for a fix.
- When I needed to run a business-critical script that kept failing due to the small /tmp, it was me who reconfigured MySQL so that it would temporarily use another partition for its temporary files. No suggested solution from the host whatsoever. Not good at all.
- I would also like to see a host being able to learn from past incidents. That would require the host to admit, though, when they made a mistake or gave the wrong advice. A host not admitting mistakes means that they will not learn, and will therefore keep making the same mistakes all over again (for the client, that's a horrible outlook).
- I also think it's embarrassing if a host tells the client that fixing a certain issue is beyond the scope of their support, if it turns out afterwards that the issue happened because of some update done by the host. If in doubt, the host should always provide assistance.
- And if an issue does go beyond what can be expected from managed hosting, it would be the icing on the cake if the host could offer to fix it anyway, possibly against a fee. Such a situation could occur if a major site error is due to a broken script that was provided by the client. ("Looks like your script blah.php is causing the fatal error, we can look into it but this will likely take X hours and cost you Y USD.") Again, the ultimate goal for me is to be able to be offline for several weeks at a time, knowing that any major interruptions to my sites can be resolved without me.
- I would also appreciate a system that will allow trusted site members to report issues - i.e. one where I can give users the ability to report problems without at the same time giving them the privilege to push any red buttons that may damage my site.
So in a nutshell, I'm trying to figure out if Liquid Web is the right hosting solution for me. Please let me know if your hosting philosophy meets my needs (and don't hesitate to let me know if it doesn't).
Thanks!
Quote:
Greetings,
Thank you for contacting us. Liquid Web offers Heroic Support which covers the hardware, OS, and installed components. We will also monitor your server, and if a service fails one of our reps will log into your box and restart the service. We do not provide support for your content (including backups). If you are having a problem we will help you to troubleshoot the problem, however if the fault is in your content or scripts we will not be able to assist you with that.
For more information on what your support covers please see our website at: [url]
If you have any further questions please let us know.
Quote:
Originally Posted by yosmc
Hi,
I didn't write such a long email because I'm bored, but because I wanted to know where Liquid Web stands on the issues mentioned ("what would have happened in these situations if I were hosting with Liquid Web"). You have basically answered the question about fixing script problems, and for the rest sent me to a page with unspecific promotional teasers. If that's all I can get as a reply, I guess that also answers my questions (I'm already Googling for alternatives), but then again, maybe you just want to give it another try?
Thank you.
Quote:
Originally Posted by LiquidWeb
Greetings,
We will take care of server administration issues, we do not take care of any content issues. From the email you sent it sounds as if you are looking for a web developer that can watch over your site, and make corrections and adjustments as needed. This is beyond the scope of what we offer.
If you have further questions please let us know.
Quote:
Originally Posted by yosmc
XY, right now I am just looking for someone to answer my questions. For what it's worth, I didn't draw the name "Liquid Web" out of a hat, and I had already been to your website prior to sending you my mail. Anyway, here's what I read from your responses:
THE BAD NEWS:
- Even if it's a one-time emergency, even if you are paid extra, and even if not providing help would ruin the client's business because the client is currently in a thunderstorm in the middle of the Atlantic, it is not possible to convince Liquid Web support to fix a fatal error that may have been triggered by a programming error in one of the client's scripts.
- Although Liquid Web's server monitoring is called "Sonar", it is in practice just as slow as the one I described in my initial mail (because if it were any better, you would have told me by now how LW would have handled the given example differently).
- Even if all my sites are down because your staff has misconfigured MySQL to break under heavier traffic, or because one of the tables crashed, Liquid Web's staff will do nothing until notified, because as long as the MySQL service itself is up, you don't see any reason to intervene (if this were something you'd care about and fix, I'm sure you would have let me in on it by now).
- EDIT: Or wait - you guys install MySQL but you don't configure/tweak it so it actually works for the client? Not sure; seems like I actually have to *guess* on that one.
- Liquid Web's ticket system cannot provide sub-accounts with lesser privileges (because if it could, you would have advertised it to me).
- When Liquid Web sets up new servers, /tmp is below 1 gigabyte as well, and when this causes issues, it is definitely not Liquid Web's fault (because if you handled this any differently, you would have pointed it out).
- Liquid Web has too many customers already, which is why even customers who know what they want aren't told what they can get, but instead receive links to canned information that doesn't answer their questions, along with the info that Liquid Web probably isn't for them anyway.
- Generally you're in a hurry and can't spend more than 5 minutes on the average ticket.
THE GOOD NEWS:
- LiquidWeb offers DoS protection (I had missed that, but see it clearly now).
Hope there was nothing I missed. So - thanks for all the extensive information you gave me (and sorry for using up so much of your precious time), I will make sure to honor it when I reach my decision.
No further replies.
Anyone know what's wrong with these people? Are they full, or do they only take on easy customers who need nothing?
I have a VPS where I have cPanel installed. I have noticed quite a number of times through my WHM CPU/Memory usage page that there are 3 instances of MRTG, and they seem to be taking up a lot of resources.
I did not install MRTG, and I don't even know how to go ahead and view them.
Can someone tell me how to remove them? And is it just me, or are there actually 3 instances of MRTG running for everyone?
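(Not a cPanel-specific answer, just a generic sketch of confirming what those MRTG processes are and where they get started from; the paths are common locations, not guaranteed for this box.)
Code:
# are the processes really mrtg, and which user owns them?
ps aux | grep -i "[m]rtg"
# is mrtg being launched from cron somewhere?
grep -ri mrtg /etc/cron* /var/spool/cron 2>/dev/null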
I have been battling this for a while. A user will set up a CMS like Joomla, e107, etc., and every time the CMS changes files, either through user interaction on the website or the admin changing things in their CMS admin page, Apache takes ownership of the files.
I have tried installing suPHP, FastCGI, and most recently suEXEC. I am not having any luck with this. I really don't know what I am doing with these recent additions; I'm mainly going on suggestions. Does anyone know of a walkthrough to fix this permission problem? Anyone with some good advice? Surely not everyone is having to write a script to chown each user's directory and run it as a cron job every 5 minutes.
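(Getting the PHP handler right is the proper fix, but since the post mentions the chown-in-a-cron-job workaround, here is a minimal sketch of that stopgap. It assumes a /home/<user>/public_html layout where the username matches the home directory name.)
Code:
#!/bin/bash
# stopgap: give ownership of each account's web files back to the account
for dir in /home/*/public_html; do
    user=$(basename "$(dirname "$dir")")
    chown -R "$user":"$user" "$dir"
done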
HostGator is the only one I know of taking your 404 traffic by default. I have never experienced this with any other host I have used.
Personally it doesn't bother me much because I know how simple it is to change. I'm a big fan of HostGator otherwise; they do provide a great service. I just find it weird that your 404 page is a HostGator ad with a coupon code.
Is this a popular thing I have just never run into? I know it is the norm with free hosting providers.
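(For anyone looking for the change mentioned above: overriding the default 404 page is a one-line Apache directive in the site's .htaccess. The path is a placeholder.)
Code:
# .htaccess in the site's document root
ErrorDocument 404 /errors/404.html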
It's been a while since I've made a backup in HyperVM, but in the past I've had no problems. I decided to make a new one after so long, but when I did I got the error "no permission to make back up []". I contacted my provider to fix it, but it's been almost 24 hours and they still haven't gotten anywhere, it seems. They even asked for my DA login information, which I don't think is needed to fix the problem, but I provided it anyway.
My question is, is this problem really that hard to fix? How would one fix it? Maybe I can just tell the provider how to fix it so I can get this done ASAP and then I can get back to using my VPS.
- I make changes to the PHP settings, but they don't seem to take effect. I even tried making the changes in the php.ini file, but some of the changes there don't take effect either. I have found similar posts, but no resolutions that work. I have restarted the IIS service after the changes, but this did not change the results I see in phpinfo().
Examples of Changes Not Taking Effect:
- I changed "error_log" in PHP Settings. phpinfo showed no value for error_log. I changed error_log in php.ini and the change took effect for both local and global. - memory_limit is set to 128M in php.ini. It shows as 32M for local and 128M for global with phpinfo(). No matter what I change this to (some value, "-1", default) in 'PHP Settings', the value does not change for local. - The same problem with 'memory_limit' also occurs for post_max_size. - PHP 5.2 and 5.4 are installed. If I change the version under the 'General' tab, it stays as 5.2.17 in phpinfo(). - I have changed the error_log setting in php.ini and 'PHP Settings', but still nothing is logged in the error_log file with safe_mode on or off (set to local directory). There is a note out there saying that with PHP 5.2, safe_mode on will not write to file. - I have performed IIS Restarts, but this did not make any settings take effect. - I also have tried changing PHP settings under the 'general' and "PHP Settings' tab, both under the website area and the advanced options->Website Scripting and Security. So the 'website' settings would be specific for the website and under 'Website Scripting and Security' would be for the webspace. Changing in either location does not make a difference.
Other note: I discovered this because a client was getting a 501 when performing a POST, which also sent an email. If he attached a file larger than 7MB to his form, the code would fail with a 501 error. After investigating, the "To" field was blank if a person attached a file larger than 7MB. It definitely seems to be a memory issue. But since there is no log file and my settings won't take effect, I have not been able to resolve this.
Plesk version: 11.0.9 Update #62 on Windows 2008; IIS is the web server.
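(Not a Plesk-specific fix, but when php.ini edits never show up, the first thing usually checked is which ini file, and which of the two installed PHP versions, the site is actually running. A hedged sketch; drop this into the affected site's webspace and load it in a browser.)
Code:
<?php
// which PHP binary and which php.ini is this site actually using?
echo 'PHP version:    ' . phpversion() . "\n";
echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
echo 'memory_limit:   ' . ini_get('memory_limit') . "\n";
echo 'post_max_size:  ' . ini_get('post_max_size') . "\n";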