Excessive Resource Usage
Oct 9, 2009
Resource: Virtual Memory Size
Exceeded: 149 > 100 (MB)
Executable: /usr/bin/php
I've been receiving e-mails about this. May I know how to fix this?
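These alerts come from lfd's process tracking in ConfigServer Security & Firewall (csf). A rough sketch of where the thresholds live, assuming the usual /etc/csf paths; verify the setting names against your own csf.conf before changing anything:
# Show the current process-tracking thresholds lfd alerts on
# (PT_USERMEM is in MB, PT_USERTIME in seconds; 0 disables the check).
grep -E '^PT_USERMEM|^PT_USERTIME' /etc/csf/csf.conf
# Either raise the memory threshold, or whitelist the executable in
# /etc/csf/csf.pignore with a line like: exe:/usr/bin/php
sed -i 's/^PT_USERMEM = .*/PT_USERMEM = "200"/' /etc/csf/csf.conf
csf -r
service lfd restart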
Excessive resource usage: dbus (2015)
I get the alarm below from lfd:
Quote:
Time: Sun Sep 28 12:16:06 2008 +0200
Account: dbus
Resource: Process Time
Exceeded: 134303 > 1800 (seconds)
Executable: /bin/dbus-daemon
The file system shows that this executable file that the process is running has been deleted. This typically happens if the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this executable file.
Command Line: dbus-daemon --system
PID: 2015
Killed: No
How can I find which process runs this executable file?
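The alert already names the PID (2015), so you can inspect that process directly. A short sketch using standard tools; on RHEL/CentOS the D-Bus init script is usually called messagebus:
# Confirm what PID 2015 is and what binary it maps to
ps -fp 2015
ls -l /proc/2015/exe      # the symlink shows "(deleted)" if the binary was replaced on disk
# Or list every process still holding a deleted executable/library
lsof +L1 2>/dev/null | grep -i deleted
# For this particular case, restarting the D-Bus service replaces the stale process
service messagebus restart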
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. Its most recent project is sure to spark interest, as it explores how and under what circumstances hard drives work - or fail.
There is a rule of thumb for replacing hard drives which has taught customers to move data from one drive to another at least every five years. But the mechanical nature of hard drives in particular makes these mass storage devices prone to error, and some drives may fail and die long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity being the most prominent ones.
A Google study presented at the currently held Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems which, in large numbers, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said that it collects "vital information" about all of its systems every few minutes and stores the data for further analysis. For example, this information includes environmental factors (such as temperatures), activity levels and SMART parameters (Self-Monitoring Analysis and Reporting Technology) that are commonly considered to be good indicators of disk drive health.
In general, Google's hard drive population saw a failure rate that was increasing with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in year 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. Significant scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without scan errors. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, a number that results from the remapping of faulty sectors to a new physical sector, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of the drives survive past eight months after the first reallocation.
Google discovered similar effects on hard drives in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. The researchers also pointed to a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows increased probabilities of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict failures of hard drives: "Powerful predictive models need to make use of signals beyond those provided by SMART."
I run a server and some game servers using a cPanel game plugin, and I've found that their CPU and RAM usage is very high.
What is a good CPU usage level for a normal server load?
How do I get log files for CPU and memory resource usage for a particular domain?
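cPanel's own Daily Process Log in WHM covers some of this, but a quick way to see which account is heavy right now is a ps snapshot summed per user. A minimal sketch you could run from cron and redirect to a file to build a simple usage log:
# Rough per-account snapshot: total %CPU and %MEM summed per system user
date
ps -eo user,pcpu,pmem --no-headers \
  | awk '{cpu[$1]+=$2; mem[$1]+=$3} END {for (u in cpu) printf "%-12s %6.1f%%CPU %6.1f%%MEM\n", u, cpu[u], mem[u]}' \
  | sort -k2 -rn | head -15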
A few times, my host has contacted me and said that my account is suspended due to high usage of server resources.
I have 3 sites using SMF (a forum script). I have around 8,500 visitors and 3 million page views per month in total.
Can this be a reason for high usage?
If it is, what's the meaning of a terabyte traffic limit which you can never use?
I asked my host if they could let me know what's causing the high server resource usage. Here is the answer:
We do not know which domain or script caused this high usage, it is your responsibility to investigate. I would suggest that you go through your raw access logs to see the most requested pages and then investigate those further. After you find the pages responsible, please optimize them so you are not using so many server resources.
So if a host simply wants to restrict traffic, they have a ready-made reason at hand: high server usage! Yes, but how will I know that? How can I trust that it's true?
When I ask why my sites are down, they can easily tell me that it's because of high server usage!
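The host's suggestion about raw access logs can at least be checked independently. A sketch assuming a combined-format Apache log; the path and file name will differ per account:
# Most requested URLs in a raw Apache access log (request path is field 7)
awk '{print $7}' /path/to/access_log | sort | uniq -c | sort -rn | head -20
# Same idea, grouped by client IP, only counting requests that returned 200
awk '$9 == 200 {print $1}' /path/to/access_log | sort | uniq -c | sort -rn | head -20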
How can I limit my dedicated server's resources? For example, here are one reseller provider's limits:
Quote:
Resellers may not use more than 2% CPU daily, 3% memory daily, run more than 10 simultaneous processes per user, allow any process to run for longer than 30 seconds CPU time, run any process that consumes more than 20% of available CPU at any time, or run any process that consumes more than 16 MB of memory. Databases are limited to 16 max user connections with a max query time of 8 seconds. Cron jobs must not execute more than once every 15 minutes and will be niced to 15 or greater.
Where or how can I configure these limits?
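There is no single switch that enforces all of those terms (the daily CPU/memory percentages are policy numbers the provider monitors for), but some of them map onto PAM limits. A rough sketch of /etc/security/limits.conf entries; "someuser" is a placeholder, and these only apply to sessions that go through pam_limits (SSH, cron):
cat >> /etc/security/limits.conf <<'EOF'
# max 10 simultaneous processes per user
someuser  hard  nproc   10
# max CPU time per process; the unit here is minutes, so 1 (= 60s) is the closest to 30s
someuser  hard  cpu     1
# max ~16 MB address space per process (value is in KB)
someuser  hard  as      16384
EOF
# MySQL's per-user connection cap lives in my.cnf instead:
#   max_user_connections = 16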
I keep getting emails from LFD saying this user is using too many resources, and it is because of their SHOUTcast. Is there a way to take care of this problem?
I have the impression that I am being affected by a kind of DDoS or email worm attack. Is there a way I can track the sources of the connections?
The control panel I am using is Plesk 8.2 on Linux CentOS 4.2
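Independent of Plesk, you can count established connections per remote IP from the shell; a minimal sketch (column positions can differ slightly between netstat versions):
# Current TCP/UDP connections per remote IP, busiest first
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -20
# For a suspected mail worm, watch connections to port 25 and which process owns them
netstat -ntp 2>/dev/null | awk '$5 ~ /:25$/ {print $5, $7}'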
I'm having a problem that I've never run across before, and was wondering if anyone might have any ideas as to what may be causing this.
Basically, on 3 of 5 new servers on a brand new private rack from The Planet, we're having what we've narrowed down to be a problem with PHP or Apache. Loading any PHP page with larger output (even something as simple as a 'phpinfo' call) results in, depending on the computer or browser in use:
- The page loading for a split second, then reverting to a DNS Server Not Found page (observed in IE)
- The page loading, but with the source code filled with vast amounts of extra blank spaces, making a simple phpinfo call download 5+ MB of HTML (observed in both IE and Firefox)
- The page loading part way, then hanging (observed in Firefox)
- Occasionally the page reloading over and over again by itself until it ultimately goes to a DNS error page (observed in IE)
Pages that don't use PHP, including very long .html and .shtml pages, load just fine.
Here's a link to a page calling a simple phpinfo string, and nothing else (as this is my first post, I can't link directly to URLs, sorry):
What was causing this?
I installed a URL shortener script, but the link that the script creates takes you to a server error page.
I viewed the logs and I get this error over and over again.
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
This is what is in my .htaccess file:
RewriteEngine On
# Skip requests that point at real directories, files or symlinks...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
# ...and send everything else to index.php as ?url=<original path>
RewriteRule ^(.+)$ index.php?url=$1 [QSA,L]
I noticed that there is a process that is putting a heavy load on the server, and I don't know why it does that.
Process name: md2_resync
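md2_resync is the kernel thread that resyncs or rebuilds the Linux software RAID array md2. You can check its progress and, if it is starving everything else, throttle it; a short sketch using the standard md sysctls:
# Is an array resyncing, and how far along is it?
cat /proc/mdstat
# Current resync speed limits (KB/s per device)
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max
# Temporarily slow the resync down so it doesn't hog the disks and CPU
sysctl -w dev.raid.speed_limit_max=5000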
I just received an email saying
We were forced to suspend your account due to excessive apache connections causing ftp to go down. Here is a snapshot of the activity:
242-0 22189 0/57/57 W 1.24 10 0 0.0 0.09 0.09 xxxxxx xxxxxx.com
243-0 22194 0/1/1 W 0.01 0 0 0.0 0.00 0.00 xxxxxxx xxxxxx.com GET
244-0 22214 0/4/4 W 0.15 4 0 0.0 0.03 0.03 xxxxxx xxxxx.com GET
And when I was on the phone with support, they told me my site had opened up 400+ Apache connections using the GET command and was causing FTP slowdowns.
It just so happens that a file I posted made its way to a popular forum, and all of a sudden everyone was downloading the file.
What is the best way to correct this problem? And GET? Isn't that an FTP command? But people were downloading via [url]
I am running a cPanel/WHM VPS and would like to know: what sort of resources does CSF take?
After speaking to a colleague about some major benefits of EC2 for on-demand hosting, I've become very interested in learning more.
I've spent the past 2 evenings trawling through Amazon docs and blog posts and have a fair understanding of how things work, but I'm at a stopping point.
There doesn't seem to be a dedicated EC2 area here on Sitepoint and the Amazon EC2 forums seem geared more towards 'advanced' users.
Are there any reliable communities that are more for the beginner?
I've got a LAMP webserver Instance running on EC2 but I'm very unclear about how to login and begin adding files and managing the data. I'm sure it's pretty simple but the documentation pretty much loses me when they start discussing Security Groups and public/private keys.
I'm not much of a server admin but have grown pretty comfortable on our FC4 dedicated boxes that we currently host on.
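At this point, logging in is just SSH with the key pair created when launching the instance, provided port 22 is open in the Security Group. A sketch with placeholder key and hostname; many stock AMIs of that era accept root, others use a different default user:
# If port 22 isn't open yet, the old EC2 API tools call is roughly: ec2-authorize default -p 22
chmod 600 my-keypair.pem
ssh -i my-keypair.pem root@ec2-67-202-xx-xx.compute-1.amazonaws.com
# Copy files up the same way
scp -i my-keypair.pem index.html root@ec2-67-202-xx-xx.compute-1.amazonaws.com:/var/www/html/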
I don't mind having NFS running, but how do I keep its resource usage as low as possible? It seems like it's hogging all my CPU...
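If the nfsd kernel threads themselves are what is eating CPU, one knob is simply running fewer server threads. A sketch for RHEL/CentOS-style systems, assuming the stock init scripts:
# Default is usually 8 nfsd threads; drop the count to something small
grep RPCNFSDCOUNT /etc/sysconfig/nfs || echo 'RPCNFSDCOUNT=4' >> /etc/sysconfig/nfs
# (edit the value in /etc/sysconfig/nfs if the variable already exists)
service nfs restart
# Check what the NFS server is actually doing
nfsstat -s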
I am sending out an email blast to about 30,000 addresses when I leave. Because this is the down-time for our site, is it possible to temporarily give Exim more resources to help it process the blast? Or would this even be beneficial?
If the answer to both questions is yes, please let me know where I should look for instructions on doing this.
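Exim's throughput mostly comes down to its queue runners. A rough sketch, assuming a cPanel-style /etc/exim.conf; the options named are standard Exim main-config options, but verify them against your build before relying on this:
# How big is the queue right now?
exim -bpc
# Kick off extra queue runner processes for the duration of the blast
/usr/sbin/exim -q &
/usr/sbin/exim -q &
# Main-config options worth reviewing for a one-off bulk send (then restart Exim):
#   queue_run_max          = 20    # more simultaneous queue runners
#   remote_max_parallel    = 10    # parallel deliveries to one host
#   deliver_queue_load_max = 8     # keep delivering until the load hits 8
grep -E 'queue_run_max|remote_max_parallel|deliver_queue_load_max' /etc/exim.conf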
I would like to configure my dedicated server to have the following restriction on CPU:
system resources: 10% @ 30 sec per cPanel Account
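There is no single stock knob that enforces "10% for 30 seconds" per cPanel account; providers usually enforce that kind of rule with their own monitoring. For capping an individual runaway process, though, the cpulimit utility works; a rough sketch assuming cpulimit is installed, with a hypothetical PID:
# Cap an already-running process (by PID) to roughly 10% of one CPU
cpulimit -p 12345 -l 10 &
# Or target a binary by name, e.g. a user's PHP processes
cpulimit -e php -l 10 &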
One of my servers keeps overloading due to an SQL process.
The process is:
/usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/zeus1.forcium.net.pid --skip-external-locking --socket=/var/lib/mysql/mysql.sock
It takes up pretty much 90-95% of the CPU and memory at times if I do not kill the process. But even after I kill the process, it comes back and immediately hogs the CPU again, pushing the load to 8.00 or higher (I have 8 CPUs).
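mysqld coming straight back after being killed is expected (it runs under a supervisor), so killing it only hides the symptom; the useful step is to see which queries are pinning it. A minimal sketch of the usual first checks, assuming you can log in as the MySQL root user:
# What is MySQL actually doing right now?
mysql -u root -p -e "SHOW FULL PROCESSLIST\G"
# Quick counters: slow queries and temp tables spilling to disk
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Slow_queries'; SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';"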
I'm guessing the answer is no, but if I start a VPS and it starts to eat RAM, or I think it needs more CPU, can I just increase it? I'm talking about using OpenVZ.
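For OpenVZ the answer is generally yes: most limits can be changed on a running container with vzctl from the hardware node. A sketch with a hypothetical container ID 101:
# Give container 101 more memory (privvmpages is in 4 KB pages: 131072 = 512 MB)
vzctl set 101 --privvmpages 131072:131072 --save
# Give it a bigger CPU share and a higher cap
vzctl set 101 --cpuunits 2000 --save
vzctl set 101 --cpulimit 50 --save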
We've just started to use a VPS, and so far no problems. I've been looking at the resources and they seem a little high considering it's pretty much out of the box, and I've only set up 4 sites which aren't even public yet. The only thing I've changed is the php.ini, to increase the memory limit to 32 MB. My main concern is that these sites don't suffer when they go live.
In the Plesk control panel the memory says:
3.8 GB of 3.8 GB used; 47.1 MB available
The 47.1mb is pretty much average, although I've seen it go as low as 115mb.
In Virtuozzo the system usage (resource: capacity) is usually around 60-75%
Both of these seem a little high, but I'm not sure if these readings are for the whole physical server, or just my portion of it.
Also, in the (Virtuozzo) QoS alerts I've had quite a few yellow zone, black zone and one red zone reports, at around 5am - quite possibly the quietest time on a server which isn't hosting any live sites yet. These have been on both the numproc and privvmpages resources (the red zone was on privvmpages). Is there anything I should be looking at, or is this fairly normal operation for a VPS? I have nightly backups scheduled for around 1am. These were originally set for 4am, but reports were showing that they were running out of memory, so I've now staggered the times to see if that helps. I haven't changed anything resource-wise other than the PHP setting, so I thought it would be good to go from the start, but maybe it needs some fine tuning.
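Inside the VPS you can check whether those QoS alerts correspond to real hits against your own container limits (rather than the whole physical server) by looking at the beancounters; a short sketch:
# Non-zero "failcnt" in the last column means that resource (e.g. privvmpages, numproc)
# actually ran out inside YOUR container, independent of the physical host.
cat /proc/user_beancounters
# Rough filter: only the rows that have ever failed
awk 'NR>2 && $NF > 0' /proc/user_beancounters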
Is there a way on a Virtuozzo server (via SSH) to check how much CPU or load/CPU resources each container is using? Or some other way?
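From the hardware node (not inside a container), vzlist can report per-container load and process counts, and vzstat, where installed, gives a top-like view. A sketch; the exact field names depend on the vzctl/vzlist version:
# One line per container: ID, hostname, load average, process count
vzlist -o ctid,hostname,laverage,numproc
# Continuous top-style view of CPU/memory per container (Virtuozzo's vzstat, if available)
vzstat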
We have a Dedicated Hosting Account running with the following configuration:
QTY Hardware Component
1 Supermicro Dual Xeon SATA X5DPA-TGM
2 Intel 2.4 GHz 533FSB P4 Xeon
4 Generic 512 MB DDR 266 ECC Reg
2 Western Digital 160GB:IDE:7200rpm 160GB
1 Unknown Onboard IDE
Installed Software
- Urchin Urchin 5
- Redhat Enterprise Linux - OS ES 4.0
- cPanel, Inc. cPanel STABLE
We are using Exim for the mail server.
The server load reaches up to 35-40.00 every now and then, and apparently Exim eats up most of the resources.
I am attaching a few screenshots with the output of command-line top and the WHM Process Manager.
You can see there are a lot of Exim processes/threads running.
How can I optimize the server for this?
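Before tuning anything, it's worth confirming whether the Exim load is legitimate traffic or a compromised account/script flooding the queue. A sketch of common checks against a cPanel-style exim_mainlog; the field positions assume the default log format:
# How many messages are sitting in the queue?
exim -bpc
# Top envelope senders in the main log (field 5 on "<=" arrival lines)
grep '<=' /var/log/exim_mainlog | awk '{print $5}' | sort | uniq -c | sort -rn | head -10
# Mail injected by web scripts often shows up under the Apache user, e.g. U=nobody
grep '<=' /var/log/exim_mainlog | grep -c 'U=nobody'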
I wanted to get a rough estimate of the resources needed for a download server with these specifications:
1- No cPanel needed
2- Unmanaged plan
3- VPS or dedicated
4- Space needed: not much, maybe 20 GB maximum
5- Direct links, with no upload/download software; everything is done through FTP and HTTP browsing
6- Bandwidth per month: more than 300 GB and less than 600 GB (can compromise)
7- Maximum simultaneous downloads will be 10-20 individuals at the same moment
8- Files on the server won't exceed 30 MB per file
Mainly, how much RAM/CPU might I need?
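As a rough sanity check on the bandwidth side (these numbers only restate the figures above): 600 GB/month averages out to under 2 Mbit/s, so the real constraint is the burst of 10-20 simultaneous downloads, and RAM/CPU needs for plain HTTP/FTP file serving are modest. The per-client rate below is an assumed figure for illustration:
# Average rate implied by 600 GB/month, in Mbit/s
echo "scale=2; 600 * 1000 * 8 / (30*24*3600)" | bc   # ~1.85 Mbit/s
# Worst-case burst if 20 clients each pull ~1 MB/s (~8 Mbit/s each)
echo "20 * 8" | bc                                   # 160 Mbit/s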
I am running cPanel/WHM as well as the WHMSonic plugin for a SHOUTcast service. Now the thing is that my RAM limit is 4 GB, but my RAM usage is always at around 3.5 GB and above.
I guess it's mainly due to WHMSonic, so is there any way I could lower this RAM usage? On multiple occasions the server has locked up and shut down, rebooted, or had to be rebooted.
On top of that, the server load is around 10.00 or above.
Is there a resource-controlling script which I could install?
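Before hunting for a resource-control script, it's worth checking how much of that 3.5 GB is real application memory versus Linux filesystem cache, and which processes actually hold it; a short sketch:
# The "-/+ buffers/cache" line is what applications really use;
# RAM used purely as cache is reclaimed automatically and is not a problem in itself.
free -m
# Top memory consumers right now (resident size in KB)
ps -eo user,pid,rss,comm --sort=-rss | head -15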
I have seen some requests for cheap Virtual Private Servers. By saying "cheap" I mean under $20/month... However, those who posted the requests meant under $10/month...
I don't think that a virtual machine or container would cost $10 or less, but I've seen some providers offer virtual servers with a very small amount of resources - a couple of gigs of space, not too much bandwidth, and 64 MB or 128 MB of RAM - and price them at around 10 bucks per month.
Although I'd never go this way, I'm curious to read what you think about such a marketing policy. Do you think that offering a VPS which cannot even run a control panel because it doesn't have enough resources is good practice? (I realize that there are different scenarios, and some people probably don't need hosting automation software but at the same time need a low-cost virtual machine...)
I would like to know which of these, or any others that you know of, are light on VPS server resources (light footprint).
Web Server:
- LightTPD, [url]
- sHTTPd, [url]
Email:
- maybe some kind of light daemon version.
SQL:
- SQLite
- maybe some kind of light daemon version.
PHP:
- maybe some kind of light daemon version.
I have a dedicated server running, administered by DirectAdmin, which I mainly use as a MySQL server. Now my question would be: what should I do to give all possible resources to MySQL? I don't want to take down DirectAdmin and set up MySQL only; I want to keep DirectAdmin but give almost all server resources to MySQL.
What I have done so far is adjust all tables, add indexes, and so on.
The background is that at certain times I face server loads of 40, caused by many external servers of mine querying the MySQL database on the server I am talking about.
So while the load is mostly below 0.1, it sometimes goes up to 40. I want to flatten this peak a little by giving all resources to MySQL. To say it beforehand: splitting the queries from the external servers is not an option - they all need to be done at the same time.
So I would really be interested in, and thankful for, your advice on how to optimize the MySQL service.
BTW, the system is running on Debian.
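A reasonable starting point, without touching DirectAdmin at all, is to capture the expensive queries during a spike and then size the MySQL buffers around them. A sketch with illustrative my.cnf values; treat every number as a placeholder to tune rather than a recommendation, and note that log_slow_queries is the pre-5.1 option name:
# Add to the [mysqld] section of /etc/mysql/my.cnf on Debian, then restart MySQL:
#   log_slow_queries  = /var/log/mysql/mysql-slow.log
#   long_query_time   = 2
#   max_connections   = 300    # headroom for the external servers (placeholder)
#   key_buffer        = 256M   # MyISAM index cache (placeholder)
#   query_cache_size  = 64M    # helps if the same SELECTs repeat (placeholder)
# Then review what actually happened during a spike
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW GLOBAL STATUS LIKE 'Max_used_connections';"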