Excessive Resource Usage: Dbus (2015)
Nov 20, 2008
I get below alarm from lfd
Quote:
Time: Sun Sep 28 12:16:06 2008 +0200
Account: dbus
Resource: Process Time
Exceeded: 134303 > 1800 (seconds)
Executable: /bin/dbus-daemon
The file system shows that the executable file that the process is running has been deleted. This typically happens if the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this executable file.
Command Line: dbus-daemon --system
PID: 2015
Killed: No
How can I find which process runs this executable file?
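For what it's worth, the PID in the alert already identifies the process (here 2015), so `ls -l /proc/2015/exe` shows the deleted binary directly. A minimal sketch for listing every process in this state, assuming a Linux /proc filesystem:

```shell
# find_deleted_exes: print PIDs whose on-disk binary has been deleted
# (exactly the condition lfd is warning about).
find_deleted_exes() {
  for pid in /proc/[0-9]*; do
    exe=$(readlink "$pid/exe" 2>/dev/null) || continue
    case "$exe" in
      *'(deleted)'*) echo "${pid#/proc/} $exe" ;;
    esac
  done
}

find_deleted_exes
```

Restarting the listed service (for dbus, the init script is typically `messagebus` on Red Hat-style distros) makes the warning go away.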
View 1 Replies
Oct 9, 2009
Resource: Virtual Memory Size
Exceeded: 149 > 100 (MB)
Executable: /usr/bin/php
I've been receiving e-mails about this. May I know how to fix this?
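One way to narrow this down, assuming shell access: rank the running php processes by virtual memory, which is the figure lfd compares against its 100 MB threshold.

```shell
# List php processes by virtual memory size (largest first) to spot the
# script behind the lfd alert; ps reports VSZ in KB.
ps -C php -o pid=,vsz=,args= --sort=-vsz |
  awk '{ printf "%6s %5d MB  %s\n", $1, $2 / 1024, $3 }'
```

If 149 MB is legitimate for your application, raising `PT_USERMEM` in /etc/csf/csf.conf stops the emails; otherwise lowering `memory_limit` in php.ini contains the offending script.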
View 6 Replies
View Related
Feb 17, 2007
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. Its most recent project is sure to spark interest, as it explores how, and under what circumstances, hard drives fail.
There is a rule of thumb that says to replace hard drives, or at least migrate the data to new drives, every five years. But the mechanical nature of hard drives makes these mass storage devices prone to error, and some drives fail and die long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity the most prominent ones.
A Google study presented at the recently held Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure prediction is much more complex than previously thought. What makes this study interesting is that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, in large part, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said it collects "vital information" about all of its systems every few minutes and stores the data for further analysis. This information includes, for example, environmental factors (such as temperature), activity levels and SMART parameters (Self-Monitoring, Analysis and Reporting Technology) that are commonly considered good indicators of disk drive health.
In general, Google's hard drive population saw a failure rate that increased with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the study shows an interesting result. Only drives six months old or younger show a decidedly higher probability of failure when put into a high-activity environment. Once a drive survives its first months, the probability of failure due to high usage decreases through years 1 to 4, then increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. A significant number of scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, which result from the remapping of faulty sectors to new physical sectors, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of such drives survive past eight months after the first reallocation.
Google discovered similar effects in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. The researchers also pointed to a trend they call an "infant mortality phase" - a time frame early in a hard drive's life that shows an increased probability of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is currently no promising approach that can predict hard drive failures: "Powerful predictive models need to make use of signals beyond those provided by SMART."
View 6 Replies
View Related
Oct 17, 2007
I am running a youtube clone on a VPS with 512mb ram at Lunarpages.
Whenever I log into Plesk, I find that my system usage is extremely high: 90%+, sometimes even 100%. However, my CPU usage is often less than 5%.
This problem occurs whenever there are slightly more visitors on my site. I am talking about only 30+ visitors, and when it happens my site slows to a crawl and I have to restart the VPS.
Is the problem the script or the server?
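A plausible first check, since Plesk's "system usage" figure generally reflects memory rather than CPU: see how much RAM is actually free before and during a traffic spike.

```shell
# Summarise memory headroom from /proc/meminfo; on a 512 MB VPS, Apache
# spawning a worker per visitor can exhaust this with a few dozen users.
awk '/^MemTotal|^MemFree|^SwapTotal|^SwapFree/ { printf "%-10s %6d MB\n", $1, $2 / 1024 }' /proc/meminfo
```

If free memory collapses as visitors arrive, lowering Apache's MaxClients (or adding caching in front of the heavy pages) is usually more effective than restarting the VPS.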
View 13 Replies
View Related
May 11, 2008
I am wondering whether simultaneous downloads could take up a lot of CPU/RAM. Could a Celeron server with 512MB handle simultaneous downloads, and how many users could it support at once? The server will be a pure download server: no database, no PHP, no CGI, nothing. And what is the highest Mbps this server could potentially reach?
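Pure static downloads are normally network-bound rather than CPU-bound, so rough arithmetic on the port speed answers more than the CPU model does. A back-of-the-envelope sketch, assuming a 100 Mbit/s port:

```shell
# 100 Mbit/s is about 12 MB/s of payload at best (ignoring protocol overhead):
echo $(( 100 / 8 ))                  # MB/s ceiling - prints 12
echo $(( 100 / 8 * 3600 / 1024 ))    # GB transferred per hour at line rate - prints 42
```

With a lightweight server (nginx, lighttpd or thttpd) each idle connection costs only tens of KB, so 512MB of RAM comfortably covers hundreds of concurrent downloads; Apache prefork at several MB per child is what to avoid.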
View 6 Replies
View Related
Oct 21, 2008
I've been running website for several years, however, there's one thing that I've never quite figured, most likely because I haven't gone over to dedicated/vps yet.
How much memory would serving a static 10 KB HTML page use, or, for that matter, a (static) PHP page?
I know it's quite a broad question, but I'm asking because I might start a project where this one page may receive many hits. Oh, and would the memory usage go up if I have embedded objects from an outside source (e.g. embedded YouTube videos)?
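As a rough model: RAM is consumed per concurrent worker, not per page, since a hot 10 KB file lives in the kernel's page cache anyway. A sketch with assumed (not measured) numbers:

```shell
# Assume ~8 MB resident per Apache prefork child (typical range 5-15 MB);
# the 10 KB of HTML is noise on top of that. With 256 MB to spare:
echo $(( 256 / 8 ))    # requests being served at the same instant - prints 32
```

Embedded YouTube objects don't change this: the visitor's browser fetches them from YouTube's servers, and your page only grows by the size of the embed markup.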
View 6 Replies
View Related
Apr 19, 2007
I have seen posts saying that some hosts suspend a user after so many seconds of high server resource usage... I was wondering how this is done, so that I can do it on my dedicated server.
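On cPanel servers this is commonly done with ConfigServer's lfd, the same daemon quoted elsewhere in this thread list, rather than by hand. A sketch of the relevant knobs in /etc/csf/csf.conf (the values shown are examples, not recommendations):

```
# /etc/csf/csf.conf -- lfd process tracking
PT_USERPROC = "10"     # alert when one user runs more than 10 processes
PT_USERMEM  = "100"    # alert when a user process exceeds 100 MB
PT_USERTIME = "1800"   # alert after 1800 s of accumulated CPU time
PT_USERKILL = "0"      # set to "1" to kill offenders instead of just reporting
```

lfd reports (or kills) offenders automatically; actual account suspension on top of that is usually a script triggered by these alerts.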
View 1 Replies
View Related
Nov 26, 2014
I recently migrated a load of domains from a Plesk 8.? install to a Plesk 12 one. As part of the migration a new reseller was created, but all the reseller's domains got "lost": they were all there and working, but not appearing in the interface.
I did some googling and fixed this problem, and can now see all the domains. When I look at the reseller, Plesk tells me it has 0 own customers, 0 own plans and 0 own subscriptions. However, the resources tab tells me that 7 domains out of unlimited are being used. I just wondered where Plesk gets that info from, and whether it is safe to delete that reseller without risking the domains getting deleted too.
View 1 Replies
View Related
Dec 27, 2008
I upgraded my hosting to DreamHost PS (http://www.dreamhost.com/hosting-vps.html) about 7 days ago, and after 3 days of evaluation I found no use for it and decided to downgrade. But they rejected my request, saying that my resource usage was too high; then came the long negotiation.
My question now is: would domains parked (not hosted) at DreamHost eat up memory of my PS?
Phase 1: Why didn't anyone warn me before the upgrade?
I wrote:
I found a few unreasonable facts about your statement of rejecting my downgrading request:
1. DreamHost tempted me to upgrade when I was doing fine with shared hosting;
2. DreamHost didn't warn me of any condition or term of any kind, including that downgrading would be rejected if memory usage reached a certain level;
3. From the facts stated above, this upgrading thing is a total scam.
##comment: The upgrade took only a click of the button.
And their answer:
I'm sorry to hear you feel this way, Ruiz. However, your account's resource usage is above what will work on shared hosting. If we were to move you back, one of two things would begin to happen:
1 - your domains would affect the stability of the web server, slowing down other customers sites, or taking them down completely
2 - your own sites would begin to fail due to the high resource usage.
As such, we cannot move you back until you've brought that usage down. However, I can understand why you would feel the way you do. As such, I'm willing to add a $15/mo subsidy to your PS service. What this means is that the basic 150MB/MHz you have to pay for will be free; all you will need to pay for would be any resources you allocate beyond the first 150MB. If this is alright, let me know and I'll go ahead and add the subsidy to your account.
Phase 2: Why did my hosting suddenly eat up so much memory?
I said:
I haven't received any email mentioning that my domains were using too much resource BEFORE THE UPGRADE for almost ONE WHOLE YEAR; how come suddenly they are? Is it some kind of malfunction or misconfiguration of your PS that causes the problem? If it is, why should I pay for it?
They answered:
It isn't a "sudden" issue. Your sites have been using whatever resources they need for as long as you've been on the PS. It appears you've had spikes as high as 600MB of memory in your usage, and if you look at your graph over the last month: ...
View 14 Replies
View Related
Mar 17, 2014
I'm on PPA 11.5 MU#2 (I should upgrade to MU#3 soon). My problem is that Usage of Disk Space is not calculated for any of my customers' subscriptions. I've run the daily maintenance script (which actually already runs periodically), but the displayed figures do not update.
View 3 Replies
View Related
Jan 23, 2015
When a reseller creates a service plan themselves... is there any way to limit the CPU and memory settings for application pool recycling on their own created plans?
View 2 Replies
View Related
May 31, 2008
I keep getting emails from LFD saying this user is using too many resources, and it is because of their SHOUTcast server. Is there a way to take care of this problem?
View 0 Replies
View Related
Jul 12, 2007
I have the impression that I am being affected by some kind of DDoS or email worm attack. Is there a way I can track the sources of the connections?
The control panel I am using is Plesk 8.2 on Linux CentOS 4.2
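A quick triage that works on that setup: count concurrent connections per remote IP from netstat output; a flood or worm usually shows up as a handful of addresses holding hundreds of connections. A sketch (the helper name is mine):

```shell
# top_talkers: read `netstat -ntu` output on stdin and print connection
# counts per remote IP, busiest first (`ss -tn` output parses the same way).
top_talkers() {
  awk 'NR > 2 { split($5, a, ":"); if (a[1] ~ /^[0-9]/) print a[1] }' |
    sort | uniq -c | sort -rn | head
}
# On the server: netstat -ntu | top_talkers
```

Offending addresses can then be dropped with iptables; for outbound mail worms, Plesk's mail log (typically /usr/local/psa/var/log/maillog) shows which domain is generating the traffic.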
View 2 Replies
View Related
Apr 5, 2007
I'm having a problem that I've never run across before, and was wondering if anyone might have any ideas as to what may be causing this.
Basically, on 3 of 5 new servers on a brand new private rack from The Planet, we're having what we've narrowed down to be a problem with PHP or Apache. Loading any sort of PHP page with a larger output (even such as a simple 'phpinfo' call) results in, depending on the computer or browser in use:
- The page loading for a split second then reverting to a DNS Server Not Found page (observed in IE)
- The page loading, but with the source code filled with vast amounts of extra blank spaces, making a simple phpinfo call download 5+ MB of HTML (observed in both IE and Firefox)
- The page loading part way, then hanging (observed in Firefox)
- Occasionally the page will reload over and over again all by itself until it ultimately goes to a DNS error page (observed in IE)
Pages not including PHP, including very long .HTML and .SHTML pages, load just fine.
Here's a link to a page calling a simple phpinfo string, and nothing else (as this is my first post, I can't link directly to URLs, sorry):
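One guess worth ruling out first, since padded or truncated PHP output on a fresh server is often a compression/buffering conflict between PHP and Apache (these settings are an assumption to test, not a confirmed diagnosis):

```
; php.ini -- candidate settings to toggle while testing
zlib.output_compression = Off   ; avoid double-compressing with mod_deflate/mod_gzip
output_buffering = 4096         ; buffer output so headers and body stay consistent
```

If toggling these changes the symptom, the culprit is the output chain rather than the PHP code itself.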
View 1 Replies
View Related
Nov 4, 2013
What was causing this:
I installed a URL shortener script, but the links that the script creates take you to a server error page.
I viewed the logs and I get this error over and over again.
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
this is what is in my htaccess file
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule ^(.+)$ index.php?url=$1 [QSA,L]
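The error means requests keep re-entering the catch-all rule. A common fix, sketched under the assumption that the script's entry point really is index.php (uncomment RewriteBase if the script lives in a subdirectory; the path shown is hypothetical):

```apache
RewriteEngine On
# RewriteBase /shortener/
RewriteRule ^index\.php$ - [L]       # stop rewriting once the handler is reached
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule ^(.+)$ index.php?url=$1 [QSA,L]
```

Raising LimitInternalRecursion, as the log suggests, only hides the loop; the explicit index.php exception removes it.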
View 1 Replies
View Related
Jun 21, 2014
I found a process that puts a heavy load on the server, and I can't figure out why it is running.
Process name: md2_resync
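md2_resync is not an application: it is the kernel thread rebuilding or verifying the software-RAID array md2, and it exits by itself when the resync finishes. A way to check on it:

```shell
# Show RAID status; during a resync, /proc/mdstat contains a progress line
# such as "resync = 12.3% ... finish=90min".
if [ -r /proc/mdstat ]; then
  cat /proc/mdstat
else
  echo "no md arrays on this machine"
fi
```

If the resync is starving real workloads, it can be throttled as root, e.g. `echo 10000 > /proc/sys/dev/raid/speed_limit_max` caps it at roughly 10 MB/s.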
View 2 Replies
View Related
May 22, 2008
We are trying suPHP on a cPanel server, but it seems to use a lot of resources: on 2x quad-core servers we can't add more than 300 domains per server. Which configuration do you use? Is there any alternative solution?
View 7 Replies
View Related
Dec 4, 2008
I know an admin can limit the RAM and disk resources for each VPS account, but what about the CPU? Can the admin limit the percentage or MHz for each VPS account?
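On OpenVZ, the common VPS platform of the time, the answer is yes on both counts: the host admin can set a relative CPU weight and a hard percentage cap per container. A sketch of the container configuration (container ID 101 is hypothetical), equivalent to running `vzctl set 101 --cpuunits 1000 --cpulimit 25 --save`:

```
# /etc/vz/conf/101.conf (OpenVZ container configuration)
CPUUNITS="1000"   # relative share when the host CPU is contended
CPULIMIT="25"     # hard ceiling, as a percentage of one core
```

CPUUNITS only matters under contention; CPULIMIT is the setting that caps a container even on an idle host.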
View 4 Replies
View Related
Nov 27, 2008
How do we know if our blog uses a lot of resources on the server (shared hosting)? Can we monitor it, so that if I knew I was using a lot of resources I could move to another web host (maybe a VPS) before they suspend my blog?
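If the shared host allows SSH, you can watch your own footprint before the host does. A minimal sketch that totals your processes' CPU share and resident memory:

```shell
# Sum CPU% and resident memory (RSS) over all of this user's processes.
ps -u "$(id -un)" -o pcpu=,rss= |
  awk '{ cpu += $1; rss += $2 }
       END { printf "CPU %.1f%%  RSS %.1f MB\n", cpu, rss / 1024 }'
```

Run it at busy times of day; a steadily growing RSS or CPU share is the early warning that it's time to move to a VPS.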
View 10 Replies
View Related
May 10, 2006
I have a VPS account and during the recent days it seems to have slowed down a lot, when I check the process I can find loads of
1727 0.2 0.0 /usr/local/apache/bin/httpd -DSSL 0 24 7152 S 00:00:02 99
My system usage is at 98.5%, and 392 of the 400 allowed numproc are in use. It wasn't like this before; I have used only 38% of the allotted space, and CPU load is also at just 19%.
Could anyone explain what is actually behind the high system usage?
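With 392 of 400 numproc slots used, the "system usage" figure is counting processes, not CPU, which is consistent with the low CPU load; hung or runaway Apache children are the usual suspects. A quick check:

```shell
# Count live Apache workers and compare against MaxClients in httpd.conf.
ps -C httpd -o pid= | wc -l
```

If this number tracks MaxClients, lower MaxClients (prefork) and KeepAliveTimeout so the VPS stays under its numproc ceiling instead of holding idle workers open.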
View 2 Replies
View Related
Apr 14, 2009
There are small hosting companies and there are big hosting companies.
And then there are huge hosting companies.
Who consumes the most resources?
Your views on it?
View 9 Replies
View Related
Mar 24, 2009
I own a dedicated server with 3 cPanel accounts in my WHM, one for each of my 3 sites. I was wondering how much resource each cPanel account uses.
The reason I'm asking is that I have a couple of other sites I'd like to add to this server, but I'm not sure whether I should simply add them as addon domains in one of my current cPanel accounts, or whether it's OK to create another account for each site without using up my server's resources.
Hope that makes sense.
View 5 Replies
View Related
Apr 14, 2008
I own a couple of Xeon server setups. All use cPanel/WHM.
A client yesterday asked if we could have ffmpeg-php installed on the server so that they could run phpfox.
I have heard ffmpeg is resource intensive. Will it make a big difference on a dual-core server? Should I install ffmpeg-php, or risk losing a client?
If I should install ffmpeg-php: I have been having trouble doing so; can anyone help me with it?
View 4 Replies
View Related
Jun 19, 2007
I have one question regarding MX records that I can't solve myself. Why is an MX record not allowed to contain an IP address or point to a CNAME record? As far as I know, the record always has to point to an A record, which contains the IP. Can anyone explain why that's the case? Is there any RFC document where this is explained?
View 9 Replies
View Related
Oct 22, 2006
I would like to know how resource intensive the ClamAV scanner is. Should I allow it for my VPS clients/resellers or not?
Can I set it to run as root? How?
View 0 Replies
View Related
May 2, 2008
I just received an email saying
We were forced to suspend your account due to excessive apache connections causing ftp to go down. Here is a snapshot of the activity:
242-0 22189 0/57/57 W 1.24 10 0 0.0 0.09 0.09 xxxxxx xxxxxx.com
243-0 22194 0/1/1 W 0.01 0 0 0.0 0.00 0.00 xxxxxxx xxxxxx.com GET
244-0 22214 0/4/4 W 0.15 4 0 0.0 0.03 0.03 xxxxxx xxxxx.com GET
And when I was on the phone with support, they told me my site had opened up 400+ Apache connections using the GET command and was causing FTP slowdowns.
It just so happens that a file I posted made its way to a popular forum, and all of a sudden everyone was downloading the file.
What is the best way to correct this problem? And GET? Isn't that an FTP command? But people were downloading via [url]
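GET here is the HTTP request method, not an FTP command: the snapshot is Apache's server-status scoreboard, and each line is a worker busy sending your file to a web visitor. Counting the workers in the "W" (sending reply) state from such a snapshot, as a sketch:

```shell
# count_writing: scoreboard lines on stdin -> number of workers in state "W"
# (the 4th column in the host's snapshot).
count_writing() { awk '$4 == "W"' | wc -l; }

printf '242-0 22189 0/57/57 W 1.24\n243-0 22194 0/1/1 W 0.01\n' | count_writing   # prints 2
```

The usual fixes are rate-limiting the hot file (e.g. Apache's mod_bw or a per-IP connection limit) or moving it to a separate download host, so FTP and the rest of the site stay responsive.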
View 2 Replies
View Related