LFD Excessive Resource Usage
May 31, 2008
I keep getting emails from LFD saying this user is using too many resources, and it is because of their SHOUTcast. Is there a way to take care of this problem?
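If the resource usage is legitimate (a busy SHOUTcast server will naturally keep running and using CPU), one option is simply to tell lfd to stop reporting that process. A minimal sketch, assuming csf/lfd is the firewall in use and the SHOUTcast binary lives at /usr/local/shoutcast/sc_serv (an example path, adjust it to your install):
# Tell lfd's process tracking to ignore the SHOUTcast binary
echo "exe:/usr/local/shoutcast/sc_serv" >> /etc/csf/csf.pignore
# Restart lfd so it re-reads csf.pignore
/etc/init.d/lfd restart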
Resource: Virtual Memory Size
Exceeded: 149 > 100 (MB)
Executable: /usr/bin/php
I've been receiving e-mails about this. May I know how to fix this?
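If 149 MB is actually normal for the PHP scripts on the box, the lfd threshold itself can be raised instead. A rough sketch, assuming this alert comes from csf/lfd's process tracking, where PT_USERMEM is the per-process memory trigger in MB:
# Check the current per-process memory trigger
grep PT_USERMEM /etc/csf/csf.conf
# Raise it, for example from 100 to 200 MB, then restart csf and lfd
sed -i 's/^PT_USERMEM = .*/PT_USERMEM = "200"/' /etc/csf/csf.conf
csf -r
/etc/init.d/lfd restart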
I have the impression that I am being affected by some kind of DDoS or email worm attack. Is there a way I can track the sources of the connections?
The control panel I am using is Plesk 8.2 on CentOS 4.2 Linux.
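To get a first idea of where the connections come from, one rough starting point is to count established connections per remote IP and to keep an eye on the mail queue (on a Plesk box the MTA is usually qmail), for example:
# Count current connections per remote IP, highest counts last
netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n
# A rapidly growing qmail queue can point to an email worm or spam abuse
/var/qmail/bin/qmail-qstat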
I'm having a problem that I've never run across before, and was wondering if anyone might have any ideas as to what may be causing this.
Basically, on 3 of 5 new servers on a brand new private rack from The Planet, we're having a problem that we've narrowed down to PHP or Apache. Loading any PHP page with larger output (even something as simple as a phpinfo() call) results in, depending on the computer or browser in use:
- The page loading for a split second then reverting to a DNS Server Not Found page (observed in IE)
- The page loading, but filling the source code with vast amounts of extra blank space, making a simple phpinfo call download 5+ MB of HTML (observed in both IE and Firefox)
- The page loading part way, then hanging (observed in Firefox)
- Occasionally the page will reload over and over again all by itself until it ultimately goes to a DNS error page (observed in IE)
Pages not including PHP, including very long .HTML and .SHTML pages, load just fine.
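Before digging further, it may help to take the browsers out of the picture and fetch a minimal test page from the shell, so the raw response size and headers can be compared. A rough sketch (the docroot, hostname and file name below are placeholders):
# Create a minimal PHP test page in the docroot
echo '<?php phpinfo(); ?>' > /var/www/html/test.php
# Fetch it, save the response headers, and measure the body size in bytes
curl -s -D headers.txt http://example.com/test.php | wc -c
cat headers.txt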
Here's a link to a page calling a simple phpinfo(), and nothing else (as this is my first post, I can't link directly to URLs, sorry):
Excessive resource usage: dbus (2015)
I got the alarm below from lfd:
Quote:
Time: Sun Sep 28 12:16:06 2008 +0200
Account: dbus
Resource: Process Time
Exceeded: 134303 > 1800 (seconds)
Executable: /bin/dbus-daemon
The file system shows that this executable file that the process is running has been deleted. This typically happens if the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this executable file.
Command Line: dbus-daemon --system
PID: 2015
Killed: No
How can I find which process runs this executable file?
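The alert already names the PID (2015), so one way to confirm the situation and fix it, assuming a CentOS-style init layout where the D-Bus service is called messagebus, is roughly:
# Confirm that PID 2015 is running the deleted binary
ls -l /proc/2015/exe          # typically shows "/bin/dbus-daemon (deleted)"
ps -p 2015 -o pid,user,cmd
# Restart the D-Bus system daemon so it runs the updated binary
/etc/init.d/messagebus restart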
What was causing this:
I installed a URL shortener script, but the link that the script creates takes you to a server error page.
I viewed the logs and I see this error over and over again:
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
This is what is in my .htaccess file:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule ^(.+)$ index.php?url=$1 [QSA,L]
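One way to see exactly what the rewrite engine is doing, as the error message itself suggests, is to raise the log level and then request one of the generated short links from the shell while checking the error log. A rough sketch (the log path and the URL are placeholders for this particular setup):
# Request a short link and print only the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" "http://example.com/abc123"
# Look at the most recent redirect/rewrite messages in the Apache error log
tail -n 50 /usr/local/apache/logs/error_log | grep -i redirect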
I found that there is a process putting a heavy load on the server, and I do not understand why it does.
Process name: md2_resync
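md2_resync is the kernel thread that resynchronises the /dev/md2 software RAID array, so the load should disappear once the resync finishes. A quick sketch to check progress and, if needed, cap the resync speed (the value below is only an example):
# Show resync progress for all md arrays
cat /proc/mdstat
# Optionally cap the resync speed (in KB/s per device) so it loads the server less
echo 10000 > /proc/sys/dev/raid/speed_limit_max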
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. The most recent one is sure to spark interest, as it explores how and under what circumstances hard drives work - or don't.
There is a rule of thumb for replacing hard drives that has taught customers to move data from one drive to another at least every five years. But the mechanical nature of hard drives makes these mass storage devices prone to error, and some drives fail and die long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity being the most prominent.
A Google study presented at the currently held Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, in large part, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said that it collects "vital information" about all of its systems every few minutes and stores the data for further analysis. This information includes, for example, environmental factors (such as temperature), activity levels and SMART (Self-Monitoring, Analysis and Reporting Technology) parameters, which are commonly considered good indicators of disk drive health.
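The same SMART counters the study relies on can be read on an ordinary server with smartmontools, assuming it is installed and the drive exposes SMART data (the device name below is an example):
# Dump the SMART attribute table, including reallocated sector and error counters
smartctl -A /dev/sda
# Run a short self-test, then check the overall health verdict
smartctl -t short /dev/sda
smartctl -H /dev/sda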
In general, Google's hard drive population saw a failure rate that increased with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in year 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. Significant scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without scan errors. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, a number that results from the remapping of faulty sectors to new physical sectors, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of the drives survive past eight months after the first reallocation.
Google discovered similar effects on hard drives in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any one of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. Also, the researchers pointed to a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows an increased probability of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict failures of hard drives: "Powerful predictive models need to make use of signals beyond those provided by SMART."
I just received an email saying
We were forced to suspend your account due to excessive apache connections causing ftp to go down. Here is a snapshot of the activity:
242-0 22189 0/57/57 W 1.24 10 0 0.0 0.09 0.09 xxxxxx xxxxxx.com
243-0 22194 0/1/1 W 0.01 0 0 0.0 0.00 0.00 xxxxxxx xxxxxx.com GET
244-0 22214 0/4/4 W 0.15 4 0 0.0 0.03 0.03 xxxxxx xxxxx.com GET
And when I was on the phone with support, they told me my site had opened up 400+ Apache connections using the GET command and was causing FTP slowdowns.
It just so happens that a file I posted made its way to a popular forum, and all of a sudden everyone was downloading the file.
What is the best way to correct this problem? And GET? Isn't that an FTP command? But people were downloading via [url]
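For what it's worth, the GET in that snapshot is the HTTP request method Apache logs for ordinary downloads, not the FTP command. To see whether one file is again pulling hundreds of simultaneous downloads, a rough check from the shell is to count requests per URL in the access log and established connections to port 80 per remote IP (the log path below is a placeholder):
# Which URLs are being requested the most?
awk '{print $7}' /usr/local/apache/logs/access_log | sort | uniq -c | sort -rn | head
# How many established HTTP connections per remote IP right now?
netstat -an | grep ':80 ' | grep ESTABLISHED | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head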