I have a VPS with cPanel installed. I have noticed quite a number of times through WHM's CPU/Memory usage page that there are three instances of MRTG, and they seem to be taking up a lot of resources.
At times they take up 90-95% of the CPU and memory if I don't kill the process. But even after I kill the process it comes back and immediately hogs the CPU again, pushing the load to 8.00 or higher (I have 8 CPUs).
I did not install MRTG and I don't even know how to go about viewing it.
Can someone tell me how to remove it? And is it just me, or are there actually three instances of MRTG running for everyone?
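One rough way to see what keeps relaunching MRTG on a cPanel box is to check whether cron is the culprit; the paths below are common defaults and may differ on your system:

    # see which mrtg processes are running and what their parent is
    ps auxf | grep -i '[m]rtg'

    # look for cron entries that keep restarting it
    grep -ri mrtg /etc/cron.d /etc/crontab /var/spool/cron 2>/dev/null

If a cron entry turns up, commenting it out (or disabling the MRTG/bandwidth graphing feature in WHM, if your version exposes one) is usually cleaner than repeatedly killing the process.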
I installed DirectAdmin on my 192 MB RAM VPS, and right now the VPS is at 270 MB (I'm going into burst). I found that if I stop named, usage drops to less than 70 MB. Why is BIND taking up so much RAM?
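BIND's resolver cache will happily grow into whatever memory it can get. A hedged sketch of named.conf options that are often used to rein it in - the option names are standard BIND 9 settings, but the values here are only examples:

    options {
        // cap the resolver cache
        max-cache-size 16M;
        // fewer simultaneous recursive clients also trims memory
        recursive-clients 100;
    };

If the VPS only serves authoritative zones for its own domains, you may not need recursion at all, which reduces the footprint further.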
I'm currently considering a host change, so I'm putting out feelers to potential candidates. As always, I'm putting on my difficult customer mask (turning down my rationality and patience module) to find out if the host can actually handle real-life customers (one of the things I find most important and that I don't want to find out once the server is already on fire). Most companies pass the test very well. Here's how LiquidWeb handles new customers:
Quote:
Originally Posted by yosmc
Hi guys,
I'm looking to switch hosts in the next couple of months. I'd probably wait until January, but since the recent experience has been a bit bumpy with our current host, I'd like to get some basic info now so we can move more quickly if circumstances force us to do so.
MY SITUATION: I'm a do-it-yourself webmaster who has been managing his own server for years. It's become a curse though because managing your own server means you have to be online virtually every day. I'm looking for a solution that will allow me to be offline for several weeks (a REAL vacation, something I haven't had in a decade), knowing that whatever major issue there is with my sites, someone will take action and make sure the service stays available.
- Last year I switched to my first managed solution, but as it turns out, they're not doing what I need. Yesterday, for example, I came home to find my sites offline. The site was unavailable for over 40 minutes, and when I asked about it I learned that they didn't take action because the server wasn't quite dead yet, only really, really, really slow. To me, this is hair-splitting; the only thing that matters is whether or not my site is available to visitors.
- And once the service has been restored, I would also expect a managed host to figure out what caused the issue and to propose a solution (or just implement one, e.g. change the MySQL configuration) so that a similar issue won't happen again under the same circumstances.
- If my sites are unavailable due to a fatal error (e.g. a table needing repairs, or max users reached, "can't connect" or whatever else) I would also expect my managed host to catch it on their own, restore things to normality, and possibly think of ways to keep similar issues from happening in the future.
- If my site suffers a DOS attack, I would expect a managed host to think about how my site can be protected.
And so on.
- My largest database tables are 2.5 GB in size, but the /tmp disk my host configured has only 600 MB available, so every time I perform a major operation (even if it's about slimming a table down and running an OPTIMIZE afterwards) everything goes down the crapper (/tmp 100% full and the load average shooting up to 200). The fact that /tmp is 100% full doesn't even trigger any alarms with my host; they send the alert to me and expect me to contact them and ask for a fix.
- When I needed to run a business-critical script that kept failing due to the small /tmp, it was me who reconfigured MySQL so that it would temporarily use another partition for its temporary files (the change is sketched right after this quote) - no suggested solution from the host whatsoever. Not good at all.
- I would also like to see a host that is able to learn from past incidents. That, though, requires the host to admit when they made a mistake or gave the wrong advice. A host that doesn't admit mistakes won't learn, and will therefore keep making the same mistakes over and over again (for the client, that's a horrible outlook).
- I also think it's embarrassing if a host tells the client that fixing a certain issue is beyond the scope of their support, if it turns out afterwards that the issue happened because of some update done by the host. If in doubt, the host should always provide assistance.
- And if an issue does go beyond what can be expected from managed hosting, it would be the icing on the cake if the host could offer to fix it anyway, possibly for a fee. Such a situation could occur if a major site error is due to a broken script that was provided by the client. ("Looks like your script blah.php is causing the fatal error, we can look into it but this will likely take X hours and cost you Y USD.") Again, the ultimate goal for me is to be able to be offline for several weeks at a time, knowing that any major interruptions to my sites can be resolved without me.
- I would also appreciate a system that will allow trusted site members to report issues - i.e. one where I can give users the ability to report problems without at the same time giving them the privilege to push any red buttons that may damage my site.
So, in a nutshell, I'm trying to figure out if Liquid Web is the right hosting solution for me. Please let me know if your hosting philosophy meets my needs (and don't hesitate to let me know if it doesn't).
Thanks!
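For reference, the tmpdir reconfiguration mentioned in the email above amounts to something like the following in my.cnf - a sketch only, with the partition path as a placeholder:

    [mysqld]
    # send MySQL's on-disk temporary tables to a partition with enough free space
    tmpdir = /home/mysqltmp

The directory has to exist and be writable by the mysql user, and mysqld needs a restart to pick the change up.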
Quote:
Greetings,
Thank you for contacting us. Liquid Web offers Heroic Support which covers the hardware, OS, and installed components. We will also monitor your server, and if a service fails one of our reps will log into your box and restart the service. We do not provide support for your content (including backups). If you are having a problem we will help you to troubleshoot the problem, however if the fault is in your content or scripts we will not be able to assist you with that.
For more information on what your support covers please see our website at: [url]
If you have any further questions please let us know.
Quote:
Originally Posted by yosmc
Hi,
I didn't write such a long email because I'm bored, but because I wanted to know where Liquid Web stands on the issues mentioned ("what would have happened in these situations if I was hosting with Liquid Web"). You have basically answered the question about fixing script problems, and for the rest sent me to a page with nonspecific promotional teasers. If that's all I can get as a reply, I guess that also answers my questions (I'm already Googling for alternatives) - but then again, maybe you just want to give it another try?
Thank you.
Quote:
Originally Posted by LiquidWeb
Greetings,
We will take care of server administration issues; we do not take care of any content issues. From the email you sent, it sounds as if you are looking for a web developer who can watch over your site and make corrections and adjustments as needed. This is beyond the scope of what we offer.
If you have further questions please let us know.
Quote:
Originally Posted by yosmc
XY, right now I am just looking for someone to answer my questions. For what it's worth, I didn't draw the name "Liquid Web" out of a hat, and I had already been to your website prior to sending you my mail. Anyway, here's what I read from your responses:
THE BAD NEWS:
- Even if it's a one-time emergency, even if you're paid extra, and even if not providing help would ruin the client's business because the client is currently in a thunderstorm in the middle of the Atlantic, it is not possible to convince Liquid Web support to fix a fatal error that may have been triggered by a programming error in one of the client's scripts.
- Although Liquid Web's server monitoring is called "Sonar", it is in practice just as slow as the one I described in my initial mail (because if it were any better, you would have told me by now how LW would have handled the given example differently).
- Even if all my sites are down because your staff has misconfigured MySQL to break under heavier traffic, or because one of the tables crashed, Liquid Web's staff will do nothing until notified, because as long as the mysql service itself is up, you don't see any reason to intervene (if this were something you'd care about and fix, I'm sure you would have let me in on it by now).
- EDIT: Or wait - you guys install MySQL but you don't configure/tweak it so it actually works for the client? Not sure; it seems like I actually have to *guess* on that one.
- Liquid Web's ticket system cannot provide sub-accounts with lesser privileges (because if it could, you would have advertised it to me).
- When Liquid Web sets up new servers, /tmp is below 1 gigabyte as well, and when this causes issues, it is definitely not Liquid Web's fault (because if you handled this any differently, you would have pointed it out).
- Liquid Web has too many customers already, which is why even customers who know what they want aren't told what they can get, but instead receive links to canned information that doesn't answer their questions, along with the info that Liquid Web probably isn't for them anyway.
- Generally you're in a hurry and can't spend more than 5 minutes on the average ticket.
THE GOOD NEWS:
- LiquidWeb offers DoS protection (I had missed that, but see it clearly now).
Hope there was nothing I missed. So - thanks for all the extensive information you gave me (and sorry for using up so much of your precious time); I will make sure to honor it when I reach my decision.
No further replies.
Anyone know what's wrong with these people? Are they full, or do they only take on easy customers who need nothing?
My server has been crashing quite a lot lately. It does have some high-traffic sites on it, but it has never really been this bad before. Today I noticed these processes in cPanel - what are they, and is there any way I can control them?
I was on a 100 Mbps shared port with a dedicated server from FDC, and I use it only for downloads. The downloads took a little while to start, but once they did, they were as fast as they could be.
Thinking this was definitely a shared-bandwidth problem, I ordered a dedicated 25 Mbps port from FDC to fix it, but things seem to have gotten worse.
The website uses around 10-15 Mbps, but it takes forever for the server to respond. Even logging into cPanel takes around 40-50 seconds for the dialog box to appear, but everything is fast once I'm logged in.
I also hired a sysadmin to look into the server and he says everything is fine. I don't know what to do. I could increase my port bandwidth, but it would be disappointing to do that and find it was just a waste of bandwidth (and money!).
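One hedged way to see where the time actually goes is to break a request into phases with curl (the URL is a placeholder). If name lookup or time-to-first-byte is slow while the transfer itself is fast, the bottleneck is usually DNS, reverse-DNS lookups, or a busy web server rather than port speed:

    curl -o /dev/null -s -w 'dns: %{time_namelookup}  connect: %{time_connect}  first byte: %{time_starttransfer}  total: %{time_total}\n' http://example.com/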
I have been battling this for a while. A user will set up a CMS like Joomla, e107, etc., and every time the CMS changes files - either through user interaction on the website or the admin changing things in the CMS admin page - Apache takes ownership of the files.
I have tried installing suPHP, FastCGI, and most recently suEXEC. I am not having any luck with this. I really don't know what I am doing with these recent additions; I'm mainly going on suggestions. Does anyone know of a walkthrough to fix this permission problem? Anyone with some good advice? Surely not everyone is having to write a script to chown each user's directory and run it as a cron job every 5 minutes.
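For what it's worth, the stop-gap you describe - a chown script on a cron job - looks roughly like the sketch below (the username and path are placeholders). A correctly configured suPHP/suEXEC handler is the real fix, since it makes PHP create files as the account owner in the first place:

    #!/bin/sh
    # reclaim files created by Apache (running as 'apache' or 'nobody')
    # under one account's docroot; run from cron every few minutes
    USER=someuser
    find /home/$USER/public_html \( -user apache -o -user nobody \) -exec chown $USER:$USER {} +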
Suddenly some of the sites on my server are throwing session errors. Then after a while everything is OK, and then the same problem comes back; it still continues. What might be causing it? A PHP update? A MySQL update? Any experience with this?
I haven't made any changes. My server is Linux, running CentOS 4.7.
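Intermittent session errors are often a full or unwritable session.save_path (usually /tmp) rather than a PHP or MySQL change. A few hedged checks:

    # where does PHP keep session files?
    php -i | grep -i session.save_path

    # is that location full, missing, or missing the sticky bit?
    df -h /tmp
    ls -ld /tmp

    # how many session files have piled up?
    ls /tmp | grep -c '^sess_'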
HostGator is the only host I know of that takes your 404 traffic by default. I have never experienced this with any other host I have used.
Personally it does not bother me much, because it's simple to change. I'm a big fan of HostGator otherwise - they do provide a great service. I just find it weird that your 404 page is a HostGator ad with a coupon code.
Is this a common practice I have just never run into? I know it's the norm with free hosting providers.
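For anyone wondering how to change it: overriding the default 404 page per site is a single .htaccess line (the filename is a placeholder):

    # .htaccess in the site's document root
    ErrorDocument 404 /my-404.html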
It's been a while since I've made a backup in HyperVM, but in the past I've had no problems. I decided to make a new one after so long, but when I did I got the error "no permission to make back up []". I contacted my provider to fix it, but it's been almost 24 hours and it seems they still haven't gotten anywhere. They even asked for my DirectAdmin login information, which I don't think is needed to fix the problem, but I provided it anyway.
My question is: is this problem really that hard to fix? How would one fix it? Maybe I can just tell the provider how to fix it so I can get this done ASAP and get back to using my VPS.
For some reason, one of the servers can't connect to my mail server. Whenever a user tries to send email from that server to my server, the message won't go through and I see the following in the logs (/var/log/exim/mainlog):
2007-02-13 23:56:06 SMTP connection from (***.ca) [***.***.***.***] lost while reading message data (header)
This problem occurs only with this ***.ca mail server (as far as I know).
In fact, when running the dnsreport.com tool on any of my server's domains, I get the error message:
"ERROR: I could not complete a connection to any of your mailservers!
******.com: Timed out [Last data sent: RCPT TO: ]
If this is a timeout problem, note that the DNS report only waits about 40 seconds for responses, so your mail *may* work fine in this case but you will need to use testing tools specifically designed for such situations to be certain.
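"Lost while reading message data" combined with dnsreport timing out at RCPT TO usually points at something between the sender and your port 25 - a firewall, an SMTP inspection feature on a router, or an ACL/RBL check that stalls - rather than Exim itself. A hedged manual test from a machine outside your network (the hostname is a placeholder):

    telnet mail.example.com 25
    # then type, one line at a time:
    #   EHLO test.example.org
    #   MAIL FROM:<test@example.org>
    #   RCPT TO:<postmaster@yourdomain.com>
    #   DATA
    # if the session hangs or drops at RCPT TO or DATA, the connection is
    # being cut mid-transaction rather than refused outright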
I make changes to the PHP settings but they don't seem to take effect. I even tried making the changes in the php.ini file, but some of the changes there don't take effect either. I have found similar posts, but no resolutions that work. I have restarted the IIS service after the changes, but this did not change the results I see in phpinfo().
Examples of Changes Not Taking Effect:
- I changed "error_log" in PHP Settings. phpinfo showed no value for error_log. I changed error_log in php.ini and the change took effect for both local and global. - memory_limit is set to 128M in php.ini. It shows as 32M for local and 128M for global with phpinfo(). No matter what I change this to (some value, "-1", default) in 'PHP Settings', the value does not change for local. - The same problem with 'memory_limit' also occurs for post_max_size. - PHP 5.2 and 5.4 are installed. If I change the version under the 'General' tab, it stays as 5.2.17 in phpinfo(). - I have changed the error_log setting in php.ini and 'PHP Settings', but still nothing is logged in the error_log file with safe_mode on or off (set to local directory). There is a note out there saying that with PHP 5.2, safe_mode on will not write to file. - I have performed IIS Restarts, but this did not make any settings take effect. - I also have tried changing PHP settings under the 'general' and "PHP Settings' tab, both under the website area and the advanced options->Website Scripting and Security. So the 'website' settings would be specific for the website and under 'Website Scripting and Security' would be for the webspace. Changing in either location does not make a difference.
Other Note - I discovered this, because a client was getting a 501 when performing a post, which also sent an email. If he attached a file larger than 7MB to his form, the code would fail with a 501 error. After investigating, the "To" field was blank if a person attached a file larger than 7MB. Defnitely seems to be a memory issue. But since no log file, nor will my settings take effect, I have not been able to resolve this.
PLESK Version - 11.0.9 Update #62 on Windows 2008 and IIS is the web server.
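A hedged way to confirm which configuration the affected site is really loading is to drop a small probe script into its docroot (the filename is made up; remove it when done) and compare the output against the php.ini you are editing and against the local/master columns in phpinfo():

    <?php
    // probe.php - hypothetical name
    echo 'Loaded php.ini:        ' . php_ini_loaded_file() . "\n";
    echo 'Additional .ini files: ' . php_ini_scanned_files() . "\n";
    echo 'memory_limit:          ' . ini_get('memory_limit') . "\n";
    echo 'post_max_size:         ' . ini_get('post_max_size') . "\n";
    echo 'error_log:             ' . ini_get('error_log') . "\n";

If the values here match the "local" column in phpinfo() but not your php.ini, something closer to the site - a per-site ini file or a panel-level override - is winning.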
We are getting the following error message during our scheduled backup: Warning: mysql "wordpress_9" ... Not all the data was backed up into /mnt/backup/web03/domains/domain.com.au/databases/wordpress_9_1 successfully. Use of chdir('') or chdir(undef) as chdir() is deprecated at /opt/psa/PMM/agents/shared/Storage/Bundle.pm line 39.
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `backupdb_wp_commentmeta` at row: 717
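Error 2013 on a large table usually means the server closed the connection mid-dump (timeouts or packet limits) rather than anything being wrong with the backup destination. A hedged sketch of my.cnf settings that are commonly raised for this - the values are examples, not recommendations:

    [mysqld]
    # give long-running dump queries more headroom before the server gives up
    net_read_timeout   = 600
    net_write_timeout  = 600
    max_allowed_packet = 256M

Running the same dump by hand (e.g. mysqldump wordpress_9 backupdb_wp_commentmeta > /tmp/test.sql, with your usual credentials) also tells you whether the problem is MySQL itself or the Plesk backup agent.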
We have many web servers in our environment; a few of them serve static content. In one application we often change an Excel file - every day or two we modify the spreadsheet's content. The file name stays the same; only the content changes. We modify the content and upload it through FTP to the docroot. After we do this and access the application URL, it displays the old content. It takes time for the change to show up - sometimes within 3 hours, sometimes only after a day. We are not sure what the issue is. We cleared the browser cache and tried again, and it still shows the old content. We are using DNS, a network load balancer, and a proxy between the browser and the web server.
I tried accessing it through the FQDN and it showed the old content; then I accessed it through the LB IP and it showed the modified file. For testing I changed the content again and then accessed it; this time even the IP showed the old content. I tried the same with the instance 1 IP and instance 2 IP: the first time they showed it properly, but after I changed the file content and accessed it again, they showed the old file. I also tried accessing from a different PC we had never used before, and it also showed the old content, so I don't think the browser cache is the issue.
We are using source-IP (subnet mask) persistence on the load balancer. I am not sure where the old file is actually cached - the load balancer, the proxy, or somewhere on the web server? We just place the file in the docroot and access it by URL.
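A hedged way to find which hop is serving the stale copy: request the file from each layer directly and compare the caching headers (the hostname, IPs, and path below are placeholders):

    # via the public name (goes through proxy / load balancer)
    curl -sI http://app.example.com/files/report.xls

    # straight at each backend instance, keeping the same Host header
    curl -sI -H 'Host: app.example.com' http://10.0.0.11/files/report.xls
    curl -sI -H 'Host: app.example.com' http://10.0.0.12/files/report.xls

Compare Last-Modified, ETag, Age, Via and any X-Cache headers: the first hop whose Last-Modified no longer matches the fresh upload is the one caching it. It is also worth confirming the FTP upload actually landed in both instances' docroots - if only one backend has the new file, persistence will make the symptom look random.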
After speaking to a colleague about some major benefits of EC2 for on-demand hosting, I've been very interested in learning more.
I've spent the past two evenings trawling through Amazon's docs and blog posts and have a fair idea of how things work, but I'm at a stopping point.
There doesn't seem to be a dedicated EC2 area here on SitePoint, and the Amazon EC2 forums seem geared more towards advanced users.
Are there any reliable communities that are more for the beginner?
I've got a LAMP web server instance running on EC2, but I'm very unclear about how to log in and begin adding files and managing the data. I'm sure it's pretty simple, but the documentation pretty much loses me when it starts discussing security groups and public/private keys.
I'm not much of a server admin, but I have grown pretty comfortable on the FC4 dedicated boxes we currently host on.
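For the login part, the basic workflow is SSH with the key pair downloaded when the instance was launched, after opening port 22 to your IP in the instance's security group. The key name, login user and hostname below are placeholders - the user depends on the AMI (ec2-user, ubuntu, or root on older images):

    # ssh refuses keys that are world-readable
    chmod 400 mykey.pem

    # the public DNS name is shown in the AWS console
    ssh -i mykey.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

    # copying files up works the same way
    scp -i mykey.pem -r ./site/ ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/var/www/html/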
I am sending out an email blast to about 30,000 addresses when I leave. Since this is the down-time for our site, is it possible to temporarily give Exim more resources to help it process the queue? And would that even be beneficial?
If the answer to both questions is yes, please let me know where I should look for instructions on doing this.
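Exim's throughput during a blast is governed mostly by a handful of main-configuration options rather than raw CPU or RAM. A hedged sketch of the ones usually tuned for a one-off mailing - the values are examples only, and on a cPanel box WHM's Exim Configuration Editor is the safer place to change them:

    # exim.conf, main section
    queue_run_max          = 10   # simultaneous queue-runner processes
    remote_max_parallel    = 10   # parallel remote deliveries per message
    deliver_queue_load_max = 8    # skip queue runs above this load average
    queue_only_load        = 8    # above this load, queue instead of delivering immediately

Afterwards, exim -bpc shows how many messages are still queued, and exim -qff forces a delivery attempt for everything in the queue.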
I'm guessing it's a no, but if I start a VPS and it starts to eat RAM, or I think it needs more CPU, can I just increase it? I'm talking about OpenVZ.
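From the host node, yes - OpenVZ applies most limits on the fly with vzctl set ... --save, with no container reboot needed (as a customer you would ask your provider, or upgrade the plan). Which parameters apply depends on whether the node uses vSwap or the older user beancounters; the container ID and values below are placeholders:

    # vSwap-style nodes
    vzctl set 101 --ram 1G --swap 2G --save
    vzctl set 101 --cpus 4 --save

    # older UBC-style nodes (barrier:limit, in 4 KB pages)
    vzctl set 101 --privvmpages 262144:294912 --save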