I'm having a problem that I've never run across before, and was wondering if anyone might have any ideas as to what may be causing this.
Basically, on 3 of 5 new servers on a brand new private rack from The Planet, we're having what we've narrowed down to be a problem with PHP or Apache. Loading any sort of PHP page with a larger output (even such as a simple 'phpinfo' call) results in, depending on the computer or browser in use:
- The page loading for a split second then reverting to a DNS Server Not Found page (observed in IE)
- The page loading, but filling the source code with vast amounts of extra blank spaces, making a simple phpinfo call download 5+ MB of HTML (observed in both IE and Firefox)
- The page loading part way, then hanging (observed in Firefox)
- Occasionally the page will reload over and over again all by itself until it ultimately goes to a DNS error page (observed in IE)
Pages that don't include PHP, including very long .HTML and .SHTML pages, load just fine.
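A quick way to narrow this down is to fetch the page raw, bypassing the browsers entirely, and check whether the padding is already in the server's response (the URL below is a placeholder for the affected page):
Code:
curl -s -o /dev/null -w '%{size_download}\n' http://yourserver.example/phpinfo.php
curl -s http://yourserver.example/phpinfo.php | tail -c 300
If the download size is in the megabytes here too, the whitespace is being generated server-side (e.g. by output buffering or compression in PHP/Apache) rather than by the browsers.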
Here's a link to a page calling a simple phpinfo string, and nothing else (as this is my first post, I can't link directly to URLs, sorry):
I installed a URL shortener script, but the link that the script creates takes you to a server error page.
I viewed the logs and I get this error over and over again.
Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
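Raising the cap only masks the loop; the usual cause is a front-controller rule that matches its own rewrite target. A minimal sketch of both fixes, assuming a typical shortener-style rule (the exact rule shape here is an assumption, not the script's actual code):
Code:
# band-aid: raise the recursion cap (httpd.conf or .htaccess)
LimitInternalRecursion 20

# likely fix: don't rewrite requests that map to real files or directories
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?url=$1 [QSA,L]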
I'm migrating some websites from an old server with Virtualmin; some websites have files with special characters such as à, ö, ç, etc.
On the old server the files (images, for example) are served fine, but on the new server with Plesk 11.5 a 404 error appears. (The Nginx reverse proxy is activated.)
This works on my site, but for some reason I still get the occasional IPs through.
I looked at my Lighttpd server-status and I have 600 connections from 3 different IPs that come from China.
I typically use ./route add -host 222.221.81.3 reject to block them, but the addresses change from time to time. The Chinese hosts are using 90 Mbps of bandwidth, and I want it to stop, as they must be directly hotlinking my content.
How can I null-route large blocks from China? Please note I want to keep Hong Kong, Macau and Taiwan.
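One approach is to null-route whole allocations instead of single hosts. Per-country zone files (for example those published at ipdeny.com) list CIDR blocks by country code, and since Hong Kong (hk), Macau (mo) and Taiwan (tw) have their own codes, the cn list leaves them untouched. A sketch, with the URL and example range to be verified before use:
Code:
# null-route every CIDR block in the cn zone file
for net in $(wget -qO- http://www.ipdeny.com/ipblocks/data/countries/cn.zone); do
    ip route add blackhole "$net"
done

# single-block equivalent of the route command above (example range)
ip route add blackhole 222.216.0.0/13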
My server has been kernel panicking on and off, irregularly, every 2 days or so since I upgraded to cPanel 11. I'm not entirely sure whether this was the cause; however, I've isolated the fact that it is a kernel issue. I am running CentOS 4.5 and was running kernel 2.6.9-42.0.8.ELsmp, but I switched back to the kernel I was running before (2.6.9-42.0.3.ELsmp) and the same issue is still cropping up.
I thought that it might be a memory addressing issue or the like; however, from looking up the issue, several C programming forums seem to suggest dodgy applications as the cause. The memory usage on the server always seems fairly low and loads are generally healthy, so this theory would seem to be supported.
The logs before the server kernel panics are as follows:
Quote:
Jun 16 17:40:01 buzz kernel: Unable to handle kernel NULL pointer dereference at virtual address 00000010
Jun 16 17:40:01 buzz kernel: printing eip:
Jun 16 17:40:01 buzz kernel: c016563c
Jun 16 17:40:01 buzz kernel: *pde = 177b1001
Jun 16 17:40:01 buzz kernel: Oops: 0000 [#1]
Jun 16 17:40:01 buzz kernel: SMP
Jun 16 17:40:01 buzz kernel: Modules linked in: iptable_filter ip_tables md5 ipv6 autofs4 dm_mirror dm_mod button battery ac joydev ohci_hcd ehci_hcd snd_intel8x0 snd_ac97_codec snd_pcm_oss snd_mixer_oss snd_pcm snd_timer snd_page_alloc snd_mpu401_uart snd_rawmidi snd_seq_device snd soundcore 8139too mii ext3 jbd
Jun 16 17:40:01 buzz kernel: CPU: 0
Jun 16 17:40:01 buzz kernel: EIP: 0060:[<c016563c>] Not tainted VLI
Jun 16 17:40:01 buzz kernel: EFLAGS: 00010246 (2.6.9-42.0.3.ELsmp)
Jun 16 17:40:01 buzz kernel: EIP is at pipe_readv+0x28a/0x29e
Jun 16 17:40:01 buzz kernel: eax: 00000000 ebx: deaeb2e0 ecx: 00020002 edx: 0000001d
Jun 16 17:40:01 buzz kernel: esi: c9274f80 edi: bff0ed50 ebp: 0000006d esp: c9274f44
Jun 16 17:40:02 buzz kernel: ds: 007b es: 007b ss: 0068
Jun 16 17:40:02 buzz kernel: Process sim (pid: 31509, threadinfo=c9274000 task=dda210b0)
Jun 16 17:40:02 buzz kernel: Stack: 00000000 00000000 cd7ee06d 00000013 0000006d 00000001 deaeb2e0 c9274f80
Jun 16 17:40:02 buzz kernel: c30f2a80 c032dba0 c30f2a80 00000080 c9274fac c016566c c9274fac bff0edbd
Jun 16 17:40:02 buzz kernel: 00000013 c015af11 c9274fac bff0ed50 c30f2a80 fffffff7 bff0ed50 c9274000
Jun 16 17:40:02 buzz kernel: Call Trace:
Jun 16 17:40:02 buzz kernel: [<c016566c>] pipe_read+0x1c/0x20
Jun 16 17:40:02 buzz kernel: [<c015af11>] vfs_read+0xb6/0xe2
Jun 16 17:40:02 buzz kernel: [<c015b124>] sys_read+0x3c/0x62
Jun 16 17:40:02 buzz kernel: [<c02d47cb>] syscall_call+0x7/0xb
Jun 16 17:40:02 buzz kernel: Code: 20 01 00 00 b9 02 00 02 00 ba 1d 00 00 00 83 c0 34 e8 a3 53 00 00 58 83 7c 24 10 00 7e 15 8b 44 24 20 f6 40 1a 04 75 0b 8b 40 08 <8b> 40 10 e8 17 c5 00 00 8b 44 24 10 83 c4 24 5b 5e 5f 5d c3 83
Jun 16 17:40:02 buzz kernel: <0>Fatal exception: panic in 5 seconds
I keep getting emails from LFD saying this user is using too many resources, and it is because of their SHOUTcast. Is there a way to take care of this problem?
Time: Sun Sep 28 12:16:06 2008 +0200
Account: dbus
Resource: Process Time
Exceeded: 134303 > 1800 (seconds)
Executable: /bin/dbus-daemon
The file system shows that this executable file that the process is running has been deleted. This typically happens if the original file has been replaced by a new file when the application is updated. To prevent this being reported again, restart the process that runs this executable file.
Command Line: dbus-daemon --system
PID: 2015
Killed: No
How can I find which process runs this executable file?
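The LFD report already names the PID (2015), so the process can be inspected directly through /proc:
Code:
ls -l /proc/2015/exe                      # symlink points at the (deleted) binary
tr '\0' ' ' < /proc/2015/cmdline; echo    # full command line
lsof | grep -i deleted                    # every process still holding deleted files
In this case the fix LFD asks for is simply restarting the service; on CentOS/RHEL the dbus init script is typically /etc/init.d/messagebus.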
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. The most recent project, one that is sure to spark interest, explores how and under what circumstances hard drives work - or don't.
There is a rule of thumb for replacing hard drives that advises customers to move data from one drive to another at least every five years. But it is especially the mechanical nature of hard drives that makes these mass storage devices prone to error, and some drives may fail and die long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity being the most prominent ones.
A Google study presented at the currently held Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, in large part, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 or 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said that it collects "vital information" about all of its systems every few minutes and stores the data for further analysis. For example, this information includes environmental factors (such as temperatures), activity levels and SMART (Self-Monitoring, Analysis and Reporting Technology) parameters that are commonly considered to be good indicators of disk drive health.
In general, Google's hard drive population saw a failure rate that was increasing with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high-activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in years 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. A significant number of scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without them. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, a number that results from the remapping of faulty sectors to a new physical sector, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of the drives survived past eight months after the first reallocation.
Google discovered similar effects on hard drives in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. Also, the researchers pointed toward a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows increased probabilities of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict failures of hard drives: "Powerful predictive models need to make use of signals beyond those provided by SMART."
I have an oddball problem here that I haven't seen.
When you visit domain.com, characters are displayed as if the files are not being read properly. However, when you visit domain.com/index.php everything works fine. I thought this would be an .htaccess issue and tried simply removing it, but that doesn't fix the issue. I checked httpd.conf and all looks fine there too.
Recently we have moved our Invision Power Board (version 2.3.3) from InvisionPower hosting to a dedicated server. On our new server we have: Apache 2.2.6, PHP 5.2.4, MySQL 5.0.24.
Things seem to be about 95% OK; however, there are occasional problems with posting: several members tried to post Hungarian and French characters, like Á Í Ő Ö Ő Ű à â ç é è ê ë î ï ô û ù ü ÿ
These are not getting through, and they get an error:
Bad Request
Your browser sent a request that this server could not understand. Apache/2.2.0 (Fedora) Server at Port 80
Members have been asked to try with Explorer, Firefox and Opera, and all get the same results. This is strange, as most accented Croatian and Serbian characters, like č, š, ć and ž, go through just fine, and the Cyrillic alphabet is OK as well.
Additionally, one member reported that 3-4 times he got an error while posting (though it is usually OK to post), and he writes in Serbian Cyrillic, which generally works fine. But there is an odd problem and error message:
Method Not Implemented
POST to /forums/index.php not supported. Apache/2.2.0 (Fedora) Server at Port 80
We asked Invision Power Board tech support; however, they say that the errors are on the server end, in the Apache configuration, and not in IPB. This seems logical, as nothing like this used to happen before the move to the new server (I don't think the old server ran Apache).
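One guess worth checking on the new server: mod_security 1.x, common on dedicated Apache builds of that era, rejects POST bodies containing bytes outside its allowed range, and a triggered filter on a POST can surface as exactly this kind of 501 Method Not Implemented error. A test sketch using mod_security 1.x directive names (verify against the installed version):
Code:
# allow the full byte range instead of printable ASCII only
SecFilterForceByteRange 0 255

# or, to confirm mod_security is the culprit, disable it temporarily
SecFilterEngine Off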
I have a cPanel 11 server with PHP 4.4.6 installed. My site uses PHP scripts, and one day, even though the file was not edited or touched at all, I got errors like:
Parse error: syntax error, unexpected ']' in /home/xxx/public_html/wp-includes/post.php on line 37
I checked and found a lot of illegal characters in my PHP file. See below: post_status became post_statuó and edit_date became edit_date<8d>. If you read through the code, you can see a lot of illegal characters. This is why I get parse errors. I had to replace the file from a backup, and that fixed the issue. But this problem continues to occur for more files and I can't find a reason for it. Again, I am the only one with access, I use BBEdit to edit PHP files when needed on Mac OS X, and I believe I know what is being edited. And again, the files that get errors never need to be edited at all, not even to modify WordPress.
After putting up a very simple email program and having it email me a set of text, it looks like it is not a software problem, but something to do with the IIS email server. Has anyone run into this?
And there is a problem, because I do not get what I need. The result is: [URL] .....
The last / sign does not even matter, because if I write the URL without the ending /, the three dots are still removed.
It looks like everywhere in the URL, the (in regexp terms) .+/ pattern is replaced with a simple / sign.
The RewriteRule is very simple, and I cannot imagine it has anything to do with this, but it looks like this:
RewriteRule ^(.*)$ index.php?p=$1 [QSA]
I started to log the rewrites, and it looks as if the specific parts of the URL are replaced before the rewrite gets them.
These are the first few rows of the rewrite log:
add path info postfix: E:/web/service/szerz....odes -> E:/web/service/szerz....odes/action/axgetszerzodesar/ugyfelid/46402/termekid/46032/szerzodesszam/2012.01.01/
strip per-dir prefix: E:/web/service/szerz....odes/action/axgetszerzodesar/ugyfelid/46402/termekid/46032/szerzodesszam/2012.01.01/ -> szerz....odes/action/axgetszerzodesar/ugyfelid/46402/termekid/46032/szerzodesszam/2012.01.01/
applying pattern '^(.*)$' to uri 'szerz....odes/action/axgetszerzodesar/ugyfelid/46402/termekid/46032/szerzodesszam/2012.01.01/'
Webserver: Apache/2.2.22 (Win32) PHP/5.2.17 and Apache/2.2.9 (Win32) PHP/5.2.17 (I refreshed it today because of this problem). OS: Windows 7 Home Premium SP1.
I tested it on a Linux OS too, and there were no such problems there.
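A possible explanation, offered as an assumption rather than a confirmed diagnosis: in per-directory (.htaccess) context, mod_rewrite works on a URL that Apache has already mapped to a Windows filename, and Windows path normalization strips trailing dots from path components - which would match the 'add path info postfix' lines above. Moving the rule into the vhost/server config makes it run before filename mapping:
Code:
# vhost or server config instead of .htaccess (note the leading slash)
RewriteEngine On
RewriteRule ^/(.*)$ /index.php?p=$1 [QSA,L]
Alternatively, ignore the captured $1 entirely and read the untouched $_SERVER['REQUEST_URI'] inside index.php.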
I am unable to create a user in MySQL with a 20-character-long name. I am getting the annoying error message about the 16-character limitation on username length. I have tried to increase the username length limit to 32 characters using the following commands:
mysql -uroot -p
use mysql;
alter table `user` modify `User` CHAR(32);
FLUSH PRIVILEGES;
quit
service mysqld restart
But after all of this was done, I am still unable to connect to MySQL at all, with or without a password.
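The privilege tables have fixed definitions that the server expects: in MySQL 5.0/5.1 mysql.user.User is CHAR(16), and widening the column does not lift the limit (the 16-character cap is enforced in the server code) - it just breaks authentication. A recovery sketch, assuming a 5.0/5.1 server with the stock column definition:
Code:
service mysqld stop
mysqld_safe --skip-grant-tables &
mysql -e "ALTER TABLE mysql.user MODIFY User CHAR(16) COLLATE utf8_bin NOT NULL DEFAULT '';"
# or let MySQL restore its own system table definitions:
mysql_fix_privilege_tables
mysqladmin shutdown
service mysqld start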
Question 1: I had a script create a backup of every file on my site using the format "filename.php.bac". I want to delete these files now, and I tried to use "rm *.bac", but that only deleted the files in the current directory. How can I delete ALL those files in EVERY directory and sub-directory starting at the public_html directory?
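For Question 1, find handles the recursion; a sketch assuming the backups live under ~/public_html (preview with -print before deleting):
Code:
find ~/public_html -type f -name '*.bac' -print     # dry run: list matches
find ~/public_html -type f -name '*.bac' -delete    # then delete them
# portable variant for find versions without -delete:
find ~/public_html -type f -name '*.bac' -exec rm {} \;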
Question 2: How can I escape semi-colons (;) in a Perl script? I'm trying to run a search-and-replace script to update some Analytics code, and I have a ton of files to update, but for some reason, if there is a semi-colon in the find variable, it assumes that it has reached the end of the contents of that variable.
Here is the code. Take a look at the $find variable and you will see extra semi-colons. How do I tell the script not to treat those semi-colons as the end of the variable? .........
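For Question 2: a semi-colon never terminates a quoted Perl string, so the truncation almost certainly comes from how the script is quoted by the shell, or from regex metacharacters in the search text. A sketch with hypothetical old/new Analytics snippets (the function names are illustrative, not taken from the actual $find variable); single shell quotes keep the semi-colons intact, and \Q...\E makes Perl treat the search text as literal characters:
Code:
find ~/public_html -name '*.php' -exec perl -pi -e \
    's/\QurchinTracker();\E/pageTracker._trackPageview();/g' {} \;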
Is it possible to configure it so that only mail written in Latin and Cyrillic can be received? No Chinese, Japanese, etc. characters, I mean.
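There is no standard MTA switch for "Latin and Cyrillic only", but if SpamAssassin sits in the delivery chain, its ok_locales setting comes close: mail in charsets outside the listed locales triggers the CHARSET_FARAWAY rules and can be scored (and then filtered) accordingly. A sketch, assuming SpamAssassin with a local.cf:
Code:
# /etc/mail/spamassassin/local.cf
ok_locales en ru            # Western (Latin) and Cyrillic charsets are expected
score CHARSET_FARAWAY 5.0   # weight mail in other charsets heavily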