Strange URLs In 404 Logs
Aug 15, 2008
I just found hundreds of rubbish URLs in awstats for a particular domain. Is this referrer spam or something more serious, and can I do something about it?
I have attached a screenshot.
Lately we have been getting log entries similar to the following from different IPs all over the US:
74.249.4.234 - - [03/Jun/2008:18:12:36 -0500] "GET / HTTP/1.1" 200 6205 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;1813)"
74.249.4.234 - - [03/Jun/2008:18:12:37 -0500] "GET /scripts/javascript.js HTTP/1.1" 200 9153 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;1813)"
74.249.4.234 - - [03/Jun/2008:18:12:37 -0500] "GET /scripts/overlib.js HTTP/1.1" 200 50733 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;1813)"
That is all there is to each hit.
Obviously, the default index.php file is being loaded and is calling the javascript files, but what we can't understand is why the CSS files and images are not being downloaded as well.
Any ideas on why this would be occurring?
Caching and text based browsing are unlikely scenarios due to the quantity and varied locations of the IPs.
Is it possible to clear these entries from the server's access_log and error_log files?
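One common approach, assuming a typical Apache layout (the log paths below are examples): truncate the log files rather than deleting them, since Apache keeps the files open and deleting a live log leaves the server writing to a removed inode until restart.

```shell
# Truncate (don't delete) the live logs; Apache holds them open,
# and truncation preserves the open file handles.
# Paths are examples; adjust to your server's layout.
: > /usr/local/apache/logs/access_log
: > /usr/local/apache/logs/error_log
```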
View 1 Replies View Relatedhow can i provide temporary urls for users on my server like [url]until the actual domain resolves? ive seen this done in with cpanel but i dont know how its done exactly. my current server does not have cpanel.
View 11 Replies View RelatedAre there any scripts out there that can protect URLs? For an example I am trying to protect a megaupload.com URL with a masking URL and making sure that the masking URL is only access by a referral site. Can this be done?
Let's say you want to protect against hacking by simply blocking URLs from loading. Suppose someone hacked your index.html and changed links to point to his domain.com. Is it possible to block what gets loaded on the site (to prevent possible future hacking intrusions)?
I'm testing scripts on a new server now, and the server has two problems.
1. I cannot pass a domain name as a GET parameter. For example, if I request a URL like domain.com/file.php?url=[url], it does not work. If I request a URL like domain.com/file.php?url=[url] (please note it has an INVALID extension for a TLD), it works!
2. fsockopen and file_get_contents do not work. I added these settings to php.ini:
allow_url_fopen = On
allow_url_include = On
...and nothing works. I just get blank pages when using these functions.
The server is running cPanel + Apache 2.2 + PHP 5 + the APF firewall.
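A blank page with no output usually means PHP hit a fatal error with display_errors off, or the outbound connection was silently blocked (APF's egress policy is a common culprit on cPanel boxes). A first diagnostic step, sketched as php.ini settings, is to surface the errors (the log path is an example):

```ini
; Surface PHP errors instead of blank pages (diagnostic settings).
display_errors = On
error_reporting = E_ALL
; Also log them; the path is an example.
log_errors = On
error_log = /var/log/php_errors.log
```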
1. Find the .htaccess file in the root folder
2. Open the .htaccess file
3. Delete all content
4. Type in this code (using your own domain):
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteBase /
RewriteCond %{HTTP_HOST} !^www.example.co.uk$ [NC]
RewriteRule ^(.*)$ http://www.example.co.uk/$1 [L,R=301]
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
</IfModule>
We're running Apache 2.4.3 on Slackware 14:
LCMlinux ~> uname -a
Linux LCMlinux 3.2.29-smp #2 SMP Mon Sep 17 13:16:43 CDT 2012 i686
LCMlinux ~> httpd -v
Server version: Apache/2.4.3 (Unix)
Server built: Aug 23 2012 11:07:26
LCMlinux ~>
We are using this both for the Trac issue-tracking application and for a small, simple internal mirror web site. Trac is working perfectly; the web site works if exact URLs are provided (as in <a href=...>
One of the sites I have is placed on a non-Apache server (the others are). Phpinfo() gives this:
Server API: CGI
I'd like to make search-engine-friendly URLs for all my sites. All of them will do fine with mod_rewrite, but that doesn't seem possible on this server. Does anybody here know how I can do this for this particular server?
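When mod_rewrite is unavailable and PHP runs as CGI, one widely used fallback (sketched here; router.php is a hypothetical script name) is to route unmatched URLs through a 404 handler that acts as a front controller and reads $_SERVER['REQUEST_URI']:

```apache
# Send every URL that doesn't map to a real file to a front controller.
# router.php is a hypothetical name; it inspects $_SERVER['REQUEST_URI']
# and serves the appropriate page with a 200 status.
ErrorDocument 404 /router.php
```

This depends on the host honoring ErrorDocument in .htaccess, so it is worth testing before relying on it.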
I need a .htaccess file for one of my sites. I've searched everywhere and can't find a solution to my problem.
System Info:
Easy Apache v3.18.1
CentOS 6.3 (x86_64)
So I'm trying to redirect all URLs in a certain query range to a new website, i.e.:
mydomain.com/?p=35000
mydomain.com/?p=35001
mydomain.com/?p=35002
mydomain.com/?p=35003
etc.
all need to 301 redirect to myotherdomain.com
I don't need to append each query to the new domain. All the URLs simply need to redirect to the naked domain of the other site. So
mydomain.com/?p=35000 should not redirect to myotherdomain.com/?p=35000;
it should simply redirect to myotherdomain.com.
Also, I need to redirect only the specific series (35000-35999) to the new domain, and I want to redirect both the www and non-www URLs.
I've attempted to write the code myself, but when I upload the htaccess file, the URLs are not redirecting. I'm doing something wrong.
Here's what I've tried:
Code:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteCond %{QUERY_STRING} ^p=([35000-35999]*)$
RewriteRule ^(.*)$ "http://myotherdomain.com/" [R=301,L]
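For what it's worth, a likely problem in the attempt above is that [35000-35999] is a regex character class (it matches single characters, not a numeric range); 35[0-9]{3} matches exactly 35000 through 35999. A hedged corrected sketch (domains as in the post; the trailing ? on the substitution discards the original query string):

```apache
RewriteEngine On
# Match both www and non-www forms of the old domain.
RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
# 35[0-9]{3} matches p=35000 through p=35999.
RewriteCond %{QUERY_STRING} ^p=35[0-9]{3}$
# The trailing "?" drops the query string from the redirect target.
RewriteRule ^(.*)$ http://myotherdomain.com/? [R=301,L]
```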
This redirect sends all subpages of a domain to another domain, pointing each page to the same path on the other domain rather than to index.php:
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^cz.hq-scenes.com$ [NC]
RewriteRule ^(.*)$ http://cz.hq-scenes.com/$1 [R=301,L]
I want only pages like /viewtopic.php... to be redirected (where "..." means any further characters).
How can I modify the rule to redirect only the viewtopic.php pages?
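A hedged sketch of one way to narrow the rule (same domains as above; query strings such as ?t=123 are carried over to the target automatically when the substitution has none of its own):

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} !^cz\.hq-scenes\.com$ [NC]
# Only viewtopic.php is redirected; the query string follows along.
RewriteRule ^viewtopic\.php$ http://cz.hq-scenes.com/viewtopic.php [R=301,L]
```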
I need to accomplish the following:
1. User hits my new 2.4 reverse proxy at [URL] ....
2. I proxy the request through to my "real" app server at [URL] ....
3. I also use a re-write rule to add a querystring to the URL: ?Parameter=Foo
4. So, the client's request arrives at my app server as [URL] .....
5. When my app server responds, it is including the Parameter=Foo key/value combination. I don't want this.
6. I want my reverse proxy (somebox.com) to strip "Parameter=Foo" from the string which gets returned to the client.
I have steps 1 & 2 working nicely, but it looks like I can't handle the last bit with mod_rewrite. I found mod_filter and mod_substitute, but it appears this stuff is used for rewriting strings IN the document. Can these libs be used to modify (I'm guessing here) the headers so that the "?Parameter=Foo" string can't be seen on the client if they're running something like Fiddler?
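If the parameter shows up inside the HTML the backend returns (links, forms), mod_substitute can indeed strip it on the way out; if it only appears in a Location header on redirects, mod_headers' Header edit is the analogous tool. A sketch of both, assuming the literal string ?Parameter=Foo is what leaks:

```apache
# Strip "?Parameter=Foo" from HTML bodies passing through the proxy.
<Location "/">
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|\?Parameter=Foo||"
</Location>
# If it leaks via redirects instead, rewrite the Location header.
Header edit Location "\?Parameter=Foo" ""
```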
I've taken over a site that caters for client access. The clients all access their own folder, and in that folder the files have an include with a relative path, as below.
/core - contains all the actual files
/client/file.php -
<? include "../core/file.php";?>
but with the growing number of clients I want to go a level deeper and separate them better...
/uk/client/file.php -
<? include "../../core/file.php";?>
This is fine, but when the files are included, they too have their own relative includes, and this is where it breaks.
There are so many files that I can't easily go through them all to change the include paths, so I'd like to use a rewrite to fake the path. I've tried this...
RewriteRule ^uk/$ /
But that doesn't work.
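mod_rewrite operates on URLs, not on the filesystem paths PHP resolves includes against, so a rewrite can't repair the relative includes. One hedged workaround, sketched with example paths: symlink the core directory into the new level so the old one-level-deep relative paths keep resolving.

```shell
# /var/www/site is an example document root. The symlink makes
# "../core/..." resolve correctly from /uk/client/ as well,
# without touching any include statements.
ln -s /var/www/site/core /var/www/site/uk/core
```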
I use a little vServer with Ubuntu (TurnKey) and logwatch to be informed by email about any errors. I'm confused about the following errors from Apache:
--------------------- httpd Begin ------------------------
Requests with error response codes
404 Not Found
http://translate.google.com/gen204: 1 Time(s)
http://www.teddybrinkofski.com/ip_json.php: 1 Time(s)
503 Service Unavailable
http://www.google.com/: 1 Time(s)
---------------------- httpd End -------------------------
These errors are definitely not from my own code. I have checked that mod_proxy is disabled, and I also disabled CONNECT as described here: [URL] ....
What do these errors mean, and how can I disable this?
My website is on 1and1 shared hosting, and to enable PHP I needed to put the following in my .htaccess:
AddType x-mapp-php5 .php .html .htm .shtml
I am attempting to add a header to a number of requested URLs, i.e. domain/feeds/chicago, domain/feeds/*. I understand that this can be achieved by:
•using mod_rewrite to set an environment variable
•using mod_headers to add a header based on the existence of an environment variable.
So far I can add a new header thus: Header add RSS_FEED_URL "Akamai-Edge-Control"
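Putting the two bullet points together, a sketch (the /feeds/ pattern and the IS_FEED variable name are assumptions; the header name and value are taken from the post):

```apache
RewriteEngine On
# Flag feed URLs with an environment variable...
RewriteRule ^feeds/ - [E=IS_FEED:1]
# ...and add the header only when the flag is set.
Header add RSS_FEED_URL "Akamai-Edge-Control" env=IS_FEED
```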
I have a WP online shop using the WP e-Commerce plugin 3.8.9 together with the Yoast SEO plugin. My problem is that when exploring the product URLs ending with / in Google Webmaster Tools, it displays 404, but the same URL without / is found and OK. I must say that both URLs show up correctly in browsers, and the non-/ version is redirected to the one ending in /. Here is my .htaccess:
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
I could solve this problem by setting up a redirect to the non-/ URLs instead.
I'm trying to use nginx with php-fpm on my forum, and to set up friendly URLs I am told to add this to the nginx config:
Code:
location / {
try_files $uri $uri/ /index.php?$uri&$args;
index index.php index.html;
[Code] ....
I try adding this using the Additional Nginx Directives, and get this error:
Code:
Invalid nginx configuration: nginx: [emerg] duplicate location "/" in /var/www/vhosts/system/domain.com/conf/vhost_nginx.conf:1 nginx: configuration file /etc/nginx/nginx.conf test failed
I think if my domain was in a folder, this wouldn't be a problem. I really don't want to change that, though.
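The collision is because Plesk's generated vhost already declares location /, and nginx forbids a duplicate. One workaround people use in the additional-directives box, sketched here, avoids declaring any new location at all (the original query string is appended to the rewritten URL automatically):

```nginx
# Route requests for non-existent files/directories through index.php
# without opening a second "location /" block.
if (!-e $request_filename) {
    rewrite ^(.*)$ /index.php?$1 last;
}
```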
I'm migrating some websites from an old server with Virtualmin; some websites have files with special characters such as à, ö, ç, etc.
On the old server the files (images, for example) are served fine, but on the new server with Plesk 11.5 a 404 error appears. (The nginx reverse proxy is activated.)
My Linux (CentOS) server with Plesk 12 is giving HTTP 414 errors ("URL too long") in response to URLs which are over 256 characters in length. They happen to include a GET variable in the query string which accounts for most of this length, and if I shorten it manually, it works. But I can't change the script to submit a shorter URL or send it by POST, because it comes from an external payment processing server which I don't control.
Adding the following lines to my /etc/httpd/conf/httpd.conf file and restarting Apache does not work:
LimitRequestLine 8190
LimitRequestFieldSize 8190
The URLs I'm trying to use are well short of 8190 bytes; they are around 800 characters long.
Is this something that Plesk affects / can control? Is there a way to see what the current maximum setting for URL length is, and to change it?
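With Plesk's nginx reverse proxy enabled, the 414 may be generated by nginx before the request ever reaches Apache, which would explain why LimitRequestLine has no effect. The nginx-side knob, sketched with example sizes, can go in the domain's additional nginx directives:

```nginx
# Allow longer request lines/headers; 4 buffers of 16k are example values.
large_client_header_buffers 4 16k;
```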
I don't quite know what to make of this, but I am getting hits to my search pages with the following:
/advanced_search_result.php?keywords=Hello%21%20Perfect%20and%20
/advanced_search_result.php?keywords=Hi%21%20Good%20site%20respe
There are multiple occurrences of this at any one time, and the interesting thing is that it appears to be spoofing the source IP addresses - most are all different, with few exceptions.
Has anyone else seen this and know of a solution? Normally I would simply use IP deny, but given that the addresses appear to be spoofed and are too numerous, it would be futile. I thought programming OSC to quit if it matched the keywords might be a decent solution, but so far I haven't had any luck.
I searched Google and this forum to see if I could find out anything, with no luck at all, so I'm guessing this is fairly new.
I found a strange PHP file in a strange folder on a VPS I am using to host a few sites. I've looked through the logs but can't figure out how it got there, and I've looked at the code and can't make any sense of it. Can somebody take a look at the code and tell me what they think of it? .....
This month I purchased a dedicated server; the specs are AMD X2 with 1GB RAM.
Over SSH, the memory result is:
root@server1 [~]# free -m
total used free shared buffers cached
Mem: 883 836 47 0 163 397
-/+ buffers/cache: 275 608
Swap: 2047 0 2047
My questions:
1. Why is the total RAM only 883MB? I thought it should be 1024MB.
2. The server is still empty, so why is the total used memory 836MB?
I only have experience with cPanel VPSes; when my server was empty it only used around 200MB of RAM, and around 400MB when the VPS was loaded with 30+ accounts.
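The two numbers are actually consistent with a healthy, mostly idle box: Linux uses spare RAM for disk buffers and cache and releases it on demand, and the kernel plus hardware (onboard video, for example) typically reserve the difference between 1024MB and the 883MB reported. The arithmetic from the free -m output above:

```shell
# Values in MB, copied from the free -m output above.
used=836; free=47; buffers=163; cached=397
# Memory actually held by applications
# (roughly the 275 reported on the "-/+ buffers/cache" line).
echo "apps hold:  $(( used - buffers - cached )) MB"
# Memory effectively available
# (roughly the 608 reported on the "-/+ buffers/cache" line).
echo "available:  $(( free + buffers + cached )) MB"
```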
A couple of days ago I came across www.just-ping.com (it's a simple ping test site).
I tested my site avensen.com (IP: 72.232.147.154) with it and got bad results like these:
[url]
Santa Clara, U.S.A. Packets lost (20%) 50.6 51.9 52.8
Florida, U.S.A. Packets lost (80%) 45.6 45.6 45.7
Vancouver, Canada Packets lost (80%) 56.5 56.6 56.7
New York, U.S.A. Packets lost (20%) 50.7 57.2 61.5
Austin, U.S.A. Packets lost (60%) 9.5 9.6 9.9
Austin, U.S.A. Packets lost (90%) 9.4 9.4 9.4
Amsterdam, Netherlands Packets lost (60%) 121.6 122.4 123.3
Amsterdam1, Netherlands Packets lost (60%) 121.5 123.6 125.6
London, United Kingdom Packets lost (90%) 111.4 111.4 111.4
Sydney, Australia Packets lost (90%) 200.2 200.2 200.2
Stockholm, Sweden Packets lost (20%) 144.7 147.7 148.3
Cologne, Germany Packets lost (80%) 133.3 135.6 137.8
Madrid, Spain Packets lost (70%) 150.7 150.8 151.0
Paris, France Packets lost (60%) 128.4 132.5 135.5
Hong Kong, China Packets lost (30%) 196.1 196.4 196.8
Munchen, Germany Packets lost (60%) 131.7 131.8 132.0
Kraków, Poland Packets lost (70%) 196.3 198.5 200.2
Cagliari, Italy Packets lost (40%) 154.9 155.3 156.3
Melbourne, Australia Packets lost (50%) 199.6 205.5 208.2
Singapore, Singapore Packets lost (70%) 257.4 260.3 262.5
I'm trying to figure out whether this is a network problem or a problem with my server. I don't get it, because there are no lost ICMP packets when I ping other hosts from my server, or when I ping my server from my home PC.
And here is what server4sale support wrote:
Quote:
This is what we received from the data center; we will update you when they get back to us.
"We apologize for the delay in responding to you. We are aware of an issue that involves our upstream provider, and we have opened a ticket with them to get the issue resolved ASAP. We have asked them to investigate this issue and attempt to isolate the cause. Once we have more information from them, we will update you here in this ticket.
In the meantime, if you note any changes (good or bad), please provide traceroutes BOTH "TO" your server, and "FROM" your server, as well as a 300 count ping summary. This request has been made by our upstream provider, as we will forward any additional pings and traceroutes we receive directly to them. Without the traceroutes both to and from the servers, the information will not be useful for their investigation.
We will provide you with updates through this ticket as we receive information from our provider. If you have any additional questions, or need further assistance, feel free to contact us. We appreciate your patience while we work to resolve this issue."
Second message from support:
Quote:
The data center has informed us that they have not yet received an update from their upstream provider, who usually informs them after performing changes.
However, for better investigation and to provide the results more precisely to their upstream provider, they have asked you to supply the latest:
Quote:
1) 300 ping results from your PC to server
2) Traceroute from your PC to Server and
3) Traceroute from Server to your PC
I'd really appreciate it if you could help me get these results and isolate the problem.
IP of my server: 72.232.147.154
What's even stranger is that when I run a just-ping.com test against the IP 72.232.147.174 (a machine in the same SAVVIS data center, I guess), I get all "Okay" results:
[url]
Santa Clara, U.S.A. Okay 50.9 52.3 55.3
Florida, U.S.A. Okay 46.1 51.5 54.6
Vancouver, Canada Okay 56.1 56.7 57.1
Austin, U.S.A. Okay 9.7 9.9 10.2
New York, U.S.A. Okay 49.8 51.9 54.7
Austin, U.S.A. Okay 9.7 10.0 10.3
Amsterdam1, Netherlands Okay 122.0 123.6 127.8
Amsterdam, Netherlands Okay 121.2 123.4 127.5
Sydney, Australia Okay 204.1 205.2 208.6
Hong Kong, China Okay 203.6 204.5 206.1
Stockholm, Sweden Okay 144.8 147.7 149.7
Cologne, Germany Okay 133.0 135.0 137.5
London, United Kingdom Okay 118.5 121.7 124.7
Munchen, Germany Okay 136.7 139.0 140.5
Kraków, Poland Okay 192.3 195.9 205.8
Cagliari, Italy Okay 156.3 156.8 157.4
Paris, France Okay 123.0 124.5 127.7
Madrid, Spain Okay 158.8 161.5 164.5
Amsterdam3, Netherlands Okay 125.7 130.9 134.8
Singapore, Singapore Okay 255.3 256.7 259.2
Melbourne, Australia Okay 229.8 230.3 231.0
I'm running RHEL 3, Apache and cPanel. When I ran "netstat -an" I found this in the results:
tcp 0 0 11.11.111.229:49158 11.11.111.229:80 ESTABLISHED
tcp 0 0 11.11.111.229:49578 11.11.111.229:80 ESTABLISHED
If I'm reading this right, these two unprivileged ports are open and talking to my privileged HTTP port 80. Does this seem right? Why would these two ports on my machine have a connection? All this attention was sparked by abnormal spikes in load. Now I'm getting paranoid that something may be off, even though I come up clean when scanning for rootkits, etc.
I'm very new to dedicated hosting, but not to server admin in general, and have come across what seems to me to be a problem.
I'm based in the UK, and the dedi I went with is in the US. I have several VPSes in the US, and I can download to them pretty consistently from a UK-based server at around 5MB/s... this is on a VPS.
The dedi I signed up to lease has a 100Mb card and a fairly well-known provider, and yet the connection I get to the UK is terrible. It fluctuates wildly between 200KB/s and 5MB/s, seemingly at random. For example, downloading a 100MB file, I'll start at 500KB/s initially, and within a few seconds it might be 3.5MB/s; this could then go either way, but I'll usually end up with an average of about 800KB/s, which really seems awfully slow.
The traceroutes appear fine; there's around 110ms of ping, and that is consistent and similar to the figure the VPSes get.
I've been in touch with their support, and after trying the usual suspects - including swapping the NIC - they lost interest. I was actually very impressed with them up until this point, so I feel pretty let down.
Is this normal? I've honestly never seen a download vary so wildly in speed. Unfortunately I'm tied into a 3-month contract, otherwise I'd drop them in a heartbeat right now.
All accounts on my dedicated server have started to show a very strange error_log with the following entries:
====
[04-Nov-2009 21:28:51] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20060613/php_interbase.dll' - /usr/local/lib/php/extensions/no-debug-non-zts-20060613/php_interbase.dll: cannot open shared object file: No such file or directory in Unknown on line 0 .....
====
Every time a PHP script is accessed, a new entry with this error is created.
I don't understand, because the PHP scripts have no relation to InterBase or PostgreSQL, and my server doesn't have these databases installed.
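php_interbase.dll is a Windows DLL name and can never load on a Linux server, so the warning most likely points at a stray extension line left in php.ini (or an included .ini file) rather than anything the scripts do. Removing or commenting the line, sketched below, should silence it:

```ini
; This line (or one like it) in php.ini is the likely cause; comment it
; out or delete it, then restart Apache.
; extension=php_interbase.dll
```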
I recently got multiple log entries showing this weird browser user agent.
Browser Agent:
XXX<? echo "w0000t"; ?>XXX
Does anyone have information regarding this?
I've got a few machines where Apache acts really strangely, and I'm curious whether anyone has suggestions. I'd love to figure this out so it can actually be deployed to a larger number of machines and not just test instances.
- Basic Information
Apache 2.2.8 (Tried a few 2.2 versions)
PHP 5.2.6
suPHP based
Prefork Based
- Once a day, at a random time, Apache fails a request from remote monitoring. It comes back within a minute, but it is inaccessible for that time. It sometimes gets picked up by the 5-minute monitoring on the machine itself, which obviously restarts the service.
- PHP scripts sometimes fail to be killed, resulting in memory staying in use. They need to be killed manually in order to go away.
Worker Based
- Apache can stay up forever; it does not fail any requests.
- PHP scripts fail to get killed even more often than under prefork. You need to `kill -9 pid` in order to get rid of the PHP processes.
I've read about very few issues with 2.2, so I'm quite confused by this.