I want to kill Apache/httpd and restart it automatically. I need this because sometimes we are not in front of the server to fix an overload issue immediately, which can hurt a server very badly. I believe many of us have already faced this kind of situation, and I hope there is some kind of script or way to do this.
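A minimal sketch of the usual approach: a cron job that checks the load average every minute and restarts Apache when it crosses a threshold. The script path, the log file, and the threshold of 10 are assumptions to adjust for your box, and this assumes a Red Hat-style init script at /etc/init.d/httpd:

    #!/bin/bash
    # check-load.sh -- run from cron every minute; restart Apache when overloaded
    THRESHOLD=10                                 # 1-minute load average that triggers a restart
    LOAD=$(awk '{print int($1)}' /proc/loadavg)  # integer part of the 1-minute load
    if [ "$LOAD" -ge "$THRESHOLD" ]; then
        /etc/init.d/httpd restart                # or "apachectl graceful" for a gentler reset
        echo "$(date): load $LOAD >= $THRESHOLD, restarted Apache" >> /var/log/load-restart.log
    fi

Install it with a crontab entry such as: * * * * * /root/check-load.sh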
I have heard of a lot of cases where customers used forbidden PHP scripts on shared servers and, as a result, their accounts were suspended due to server overload. I am just wondering which scripts it is desirable not to use within shared hosting packages?
Is there a way to protect an Apache server from overload? For example, Nginx has a module called SysGuard: when system load or memory use goes too high, all subsequent requests are redirected to the URL specified by the 'action' parameter.
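I'm not aware of a stock Apache module that does exactly this, but you can approximate SysGuard's behavior with a cron job that raises a flag file when the load is high, plus a mod_rewrite rule that sends traffic to a static page while the flag exists. The flag path, the threshold of 8, and the /busy.html page are all assumptions:

    # cron, every minute: set or clear the flag based on the 1-minute load
    * * * * * awk '{ if ($1 > 8) exit 0; else exit 1 }' /proc/loadavg && touch /var/run/overloaded || rm -f /var/run/overloaded

    # Apache config: while the flag file exists, redirect everything to /busy.html
    RewriteEngine On
    RewriteCond /var/run/overloaded -f
    RewriteCond %{REQUEST_URI} !=/busy.html
    RewriteRule .* /busy.html [R=302,L]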
I have a fairly busy server and received a High Load warning from my firewall monitoring software, showing a 5-minute load average alert of 13.89.
I'm presuming extra memory and a more powerful CPU would be required to sort this out?
Time: Thu Jul 3 12:22:06 2008
1 Min Load Avg: 42.90
5 Min Load Avg: 13.89
15 Min Load Avg: 5.82
Running/Total Processes: 51/359
Output from ps:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
...
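To see which processes are actually behind a spike like that, sort the process table by CPU and memory at the moment the alert fires; nothing here is specific to your setup:

    # top CPU consumers right now
    ps aux --sort=-pcpu | head -15
    # top memory consumers (a swapping box also shows a huge load)
    ps aux --sort=-rss | head -15
    # or watch interactively; press P to sort by CPU, M by memory
    top

That will usually tell you whether it's httpd, MySQL, a runaway CGI, or a backup job before you spend money on more RAM or CPU.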
I have been receiving these a couple of times a day lately, and I'm not sure what to do or how to go about checking what might be overloading the server. If this looks familiar to anyone, I'd appreciate some helpful tips. I'm still a novice, but I can muddle my way around the server if given enough guidance. Here is the email I've been getting:
"IMPORTANT: Do not ignore this email. This is cPanel stats runner on host.myserverhost.com! While processing the log files for user xxxxxxx, the cpu has been maxed out for more than a 6 hour period. The current load/uptime line on the server at the time of this email is 19:20:10 up 2 days, 7:06, 0 users, load average: 15.17, 13.24, 8.00 You should check the server to see why the load is so high and take steps to lower the load. If you want stats to continue to run even with a high load; Edit /var/cpanel/cpanel.config and change extracpus to a number larger then 0 (run /usr/local/cpanel/startup afterwards to pickup the changes)."
I guess my question is: how would I go about determining what is causing the excess load? It seems to happen even when not many folks are on my site.
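A first step is to find out whether the box is burning CPU or stuck waiting on disk while the stats run; these two standard commands give a snapshot, with no cPanel-specific tooling assumed:

    # one-shot snapshot of the busiest processes
    top -b -n 1 | head -20
    # five samples, five seconds apart: high "us"/"sy" means CPU-bound,
    # a high "wa" column means the disk is the bottleneck
    vmstat 5 5

If the load only climbs while cPanel's stats runner is processing that one user's logs, the log files themselves (a huge access log from a hammered site) are the usual suspect.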
I'm on a low-end dedicated server that I run two decent-sized blogs on. I'm getting several traffic spikes a day where the load goes through the roof, and I think I need my server optimized.
My server admin says I need a bigger server, and he has never steered me wrong, but this is ridiculous:
My blogs use WordPress as their blogging platform. I know it hogs server resources, and I've recently installed WP Super Cache, which seems to help.
I average about 5,000 pageviews a day, and I would think even a low-end box should handle that, but maybe I am wrong.
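For what it's worth, 5,000 pageviews a day is tiny; spikes usually hurt because a default Apache config spawns more PHP-heavy processes than the RAM can hold and the box starts swapping. A hedged prefork sketch for a low-memory box; every number below is a guess to be sized against your actual RAM and per-process footprint:

    <IfModule prefork.c>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           60    # roughly (free RAM) / (size of one httpd+PHP process)
        MaxRequestsPerChild 2000   # recycle children so PHP memory use can't creep
    </IfModule>
    KeepAlive On
    KeepAliveTimeout 2             # short timeout frees slots quickly during spikes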
Example: I have one server for hosting and one server as a dedicated backup server (please don't recommend an outsourced backup service, as I already have one).
I am looking for a solution that lets me do daily backups while keeping the server load low, and that lets me restore quickly when I need to. They are cPanel servers.
I understand that cPanel's own backup method will tar my files (but I have too many accounts), and it also takes a lot of my server's CPU, which slows down my hosting server.
I am now using rsync, which does incremental backups that really work, and I am happy with it, but I pay $55 per month per cPanel server because I outsourced the rsync installation and backup management. I have a few servers, so I don't find this cost effective.
Can anyone here suggest a different method I don't know about yet? I would also really appreciate it if someone wouldn't mind sharing an rsync installation procedure with me. I tried to Google for it and everything I found was very shallow. This is not something to learn by trial and error.
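There isn't much to install: rsync ships with CentOS (yum install rsync if it's missing), and the rest is a small script plus a cron entry. A minimal sketch of a nightly push to the backup box, assuming key-based SSH login to the placeholder host backup.example.com is already set up:

    #!/bin/bash
    # nightly-backup.sh -- incremental copy of cPanel home dirs + MySQL dumps
    DEST="root@backup.example.com:/backups/$(hostname)"
    # dump databases first; rsyncing live InnoDB files is not a safe backup
    mysqldump --all-databases | gzip > /root/all-databases.sql.gz
    # nice keeps the copy from starving the web server; only changed
    # files are transferred, which is what keeps the load low
    nice -n 19 rsync -az -e ssh --delete /home/ "$DEST/home/"
    nice -n 19 rsync -az -e ssh /root/all-databases.sql.gz "$DEST/"

Schedule it with something like 0 3 * * * /root/nightly-backup.sh, and restoring an account is just rsync in the opposite direction.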
My company requires a mirrored server setup. I hope someone can direct me to the right solution, one that guarantees the least downtime.
- We have 20+ PHP/MySQL websites.
- We need two dedicated servers hosted in two different datacentres.
- Users are directed to the first server.
- If the first server is down, users are automatically directed to the 2nd server at the 2nd datacentre.
- The software/hardware that redirects the users needs to be fail-proof, or needs an instant backup that takes over in case it goes down too.
- Data (databases and files) needs to be synced correctly to avoid it not being mirrored correctly.
I've done some reading, and there is a lot of mention of DNS round robin and load balancers.
However, it seems these two options are also not fail-proof.
I would appreciate it if someone could simply outline what system would be best for us for 100% uptime in case of server failure.
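Whatever component does the redirecting, the data-sync half of the requirement is usually solved with MySQL master-slave replication for the databases (plus rsync for the files). A minimal sketch of the replication side; the server IDs, hostname, and credentials below are placeholders:

    # my.cnf on the primary server
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # my.cnf on the standby server
    [mysqld]
    server-id = 2
    read-only = 1

    # run once on the standby to point it at the primary:
    #   CHANGE MASTER TO MASTER_HOST='primary.example.com',
    #       MASTER_USER='repl', MASTER_PASSWORD='secret';
    #   START SLAVE;

Note that replication keeps the copy current but does not by itself decide when to fail over; that is the load balancer's or DNS failover's job, and as you say, that component needs its own redundancy.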
I'm looking for a backup service for my dedicated server that can do the following:
Take backups of the server without my laptop being involved when the backup takes place.
(So I can take a vacation for two weeks with my laptop turned off, while backups are still taking place.)
Strangely, I have found it impossible to find such a service... All the services I have looked at require that my laptop be turned on and running some software while the backup of the server is taking place.
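If a second server (or any backup storage reachable over SSH) is an option, server-side cron gives you exactly this: the job below runs on the backup machine and pulls a copy every night with no workstation involved. Both hostnames and paths are placeholders:

    # crontab on the backup server: pull the dedi's home dirs at 04:00 daily
    0 4 * * * rsync -az -e ssh root@dedi.example.com:/home/ /backups/dedi/home/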
Can anyone recommend a good monitoring solution for a dedicated web server?
We are willing to pay if the free ones are not as good. What do you recommend? We also want something that would not impact performance or ask us to put their logo on the site.
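Since external checks run from another machine, they cost your server nothing and need no badge on the site. Even while you evaluate paid services, a sketch like this on any other box you control covers the basics (the URL and alert address are placeholders):

    #!/bin/bash
    # poor man's uptime monitor -- cron this every 5 minutes on a DIFFERENT machine
    URL="http://www.example.com/"     # page to probe
    ALERT="admin@example.com"         # where the alarm goes
    if ! curl -fsS --max-time 15 -o /dev/null "$URL"; then
        echo "$(date): $URL did not respond" | mail -s "Site DOWN" "$ALERT"
    fi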
I have a dedicated Windows 2003 server that acts as an SMTP relay (for legitimate purposes, not an open relay).
Large amounts of mail are relayed through the server, and I would like to install some third-party software that can scan the messages/attachments for viruses.
Ideally, if the software finds a virus, it would strip it from the message and notify the recipient and/or sender of the problem.
I run a site with about 1,000,000 unique visitors per month, and recent server failures made me decide to get a failover server to minimize downtime. My goal wasn't to reach 99.999% uptime, but to be able to get back on track after a failure in a "reasonable" amount of time. After evaluating several solutions, I decided to go with DNS failover. Here's how the setup works:
1) mydomain.com points to the main server with a very low TTL (time to live)
2) the failover server replicates data from the main server
3) when the main server goes down, mydomain.com is changed to point to the failover server
The drawback is the DNS propagation time, since some DNS servers don't honor the TTL and there is some caching on the user's machine and in the browser. I looked for empirical data to gauge the extent of the problem but couldn't find any, so I decided to set up my own experiment.
The Experiment
==============
I start with mydomain.com pointing to the main server with a TTL of 1800 seconds (half an hour). I then change it to point to the failover server, which simply port-forwards to the main server. On the main server, I periodically compute the percentage of requests coming through the failover server, which gives me the percentage of users for whom the DNS change has propagated.
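Since the failover box just port-forwards, its requests arrive at the main server with the failover machine's IP as the client address, so the measurement can be as simple as this (the IP is a placeholder):

    # on the main server: share of recent hits that arrived via the failover box
    FAILOVER_IP="203.0.113.7"
    tail -10000 /var/log/httpd/access_log | awk -v ip="$FAILOVER_IP" \
        '{total++; if ($1 == ip) via++} END {printf "%.1f%% propagated\n", 100*via/total}'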
I made the DNS change at exactly 16:04 on 06/21/06, and here are the percentages of propagated users:
So even after 18 hours there is still a certain percentage of users going to the old server, so DNS failover is obviously not a 99.999% uptime solution. However, since more than 90% of users are propagated within the first hour, the solution works well enough for me.
I'm working on launching an online store for a poster designer, and we're becoming more and more aware that we need a really robust and fast server. This site sees extremely high levels of activity whenever the designer posts a new poster; we're talking 1,700 people browsing the store (downloading medium-to-high resolution poster images) and 300 posters sold in 16 seconds.
So we need really robust hosting that supports PHP 5 and MySQL.
My previous go-to hosting provider was Lunarpages, but their customer service has gone down the crapper, and I've just about had it with them. My main questions are: should I be looking into getting a dedicated server, or are there hosting companies that can handle this kind of traffic on a shared server? I don't have experience administering a server, so if we got a dedicated one, I assume we would have to pay the host to do at least some of the setup/administration? Dedicated server or not, what's a hosting company with really good customer service, where we can be assured of reaching somebody knowledgeable without having to wait on hold for 20 (or even 10) minutes?
I'm looking for a server-wide anti-spam solution I can implement on a Linux server. The mail queues are constantly getting backlogged with thousands of messages, which brings the servers to a crawl. There are really two issues the solution needs to address:
1) Spam
2) Spam sent to other people with the From address forged to our clients' email addresses.
We have SpamAssassin installed for individuals to use, but there has got to be some sort of solution that can clear out the vast majority of the junk before it even gets to the queue.
It probably goes without saying, but the solution needs to be open source or have a very inexpensive license.
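The standard free answer to the first issue is to reject at SMTP time, so the junk is refused before it ever enters the queue. If the MTA happens to be Postfix, a sketch like this in main.cf does most of the heavy lifting (which RBLs to trust is a policy choice, not a given):

    # /etc/postfix/main.cf -- refuse obvious junk during the SMTP conversation
    smtpd_recipient_restrictions =
        permit_mynetworks,
        reject_unauth_destination,
        reject_non_fqdn_sender,
        reject_unknown_sender_domain,
        reject_rbl_client zen.spamhaus.org

For the second issue (forged From addresses), publishing SPF records for your clients' domains lets other servers reject the forgeries; it doesn't stop the forging itself, but it cuts down the backscatter you receive.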
I have a VPS with 512 MB of RAM from HostForWeb that works fine hosting 160 websites. But a VPS at swvps.com with 2 GB of RAM performs poorly: it's overloaded and its CPU usage is at red alert. The HostForWeb VPS with 512 MB of RAM works well, so I don't know why the swvps.com one is slow for me with 2 GB of RAM.
I am not sure if my dedicated server is being attacked or if it is seeing legitimate traffic. I need help figuring out the difference; if it is an attack, how do I prevent it, and if it is legitimate traffic, how do I configure the server to handle the load?
Software:
CentOS 5.3 (32-bit)
Apache 2
MySQL 5
PHP 5
When I run ps aux | grep httpd | wc -l, I get a count of 259 currently connected clients, which is always maxing out my MaxClients of 256. I increased it to 512 and it maxed out; I increased it to 1024 and it maxed out; finally I set it to 2048, which works but slows the entire server down.
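A quick way to tell an attack from real traffic is to look at how the connections are distributed across client IPs; a handful of IPs holding hundreds of connections each suggests an attack, while an even spread over many IPs suggests genuine load:

    # connections per remote IP, busiest last
    netstat -ntu | awk 'NR > 2 {split($5, a, ":"); print a[1]}' \
        | sort | uniq -c | sort -n | tail

If one IP dominates, block it (iptables -I INPUT -s <ip> -j DROP) and watch whether MaxClients stops maxing out. If it is genuinely many visitors, raising MaxClients only helps up to what your RAM can hold, which is most likely why 2048 slowed the box down: it pushed the server into swap.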
My web server keeps overloading, causing pages to lag and MySQL connections to get clogged. Each time it overloads, it's because there are too many MySQL connections, and I really just don't know why there are so many. I am not sure whether someone is sabotaging my server, whether there is a hole in my PHP scripts that opens an abundance of connections, or whether some very slow query is to blame.
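Two standard checks narrow this down: see what every connection is doing right now, and log the slow queries that hold connections open. Only stock MySQL facilities are used here (the option names are the MySQL 5.0-era ones):

    # what is every current connection doing?
    mysqladmin -u root -p processlist
    # then enable the slow query log in my.cnf under [mysqld] and restart MySQL:
    #   log-slow-queries = /var/log/mysql-slow.log
    #   long_query_time  = 2

Many connections all stuck on the same query points at a slow query or missing index; many connections from unexpected hosts points at abuse.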
I have a small VPS with a few websites, each with very low traffic: on average fewer than 100 visits per day.
CentOS 2.6.9
Plesk
PHP 5.1.6
Apache/2.2.3
A few days ago some forum spammers signed up to one of my forums. One of them: stopforumspam.com/ipcheck/212.178.2.3
Today I was away for about 5 hours, and after I came back I received a notice from my script that "SMF could not connect to the database".
I checked and noticed that almost all of my sites were not responding. MySQL was working: a script on a remote server that uses MySQL from my server loaded, but with a delay.
------------------ Next step ------------------
Logged in via SSH:
# uptime
12:XX:XX up XXX days, 5:06, X users, load average: 10.58, 8.86, 5.86
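With a load of 10 and a known spammer IP in hand, two quick checks tell you whether that client is hammering the box and buy you breathing room while you investigate (the log path varies by control panel; on Plesk each vhost keeps its own access log):

    # how many hits has the suspect IP made?
    grep -c '212.178.2.3' /var/log/httpd/access_log
    # drop its traffic at the firewall while you look deeper
    iptables -I INPUT -s 212.178.2.3 -j DROP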