I have a guy who can't get to any of the 100 or so virtual hosts on my RHEL3 server.
It's running the latest Apache RPM from RedHat. I also have mod_evasive and mod_security running.
Here's what I know. The guy *CAN* connect via SSH and FTP. The guy *CAN* see the default web page when he hits the IP in his web browser (e.g. he types [url] into the address bar in IE). But when he uses any of the hostnames on the server he *CANNOT* see anything. He gets timeout errors.
His IP is NOT in ANY error logs; it's not in mod_evasive or mod_security, it's not in IPTABLES, it's not anywhere I can see.
I must be missing something. Anyone have any ideas?
What would be in front of Apache blocking his requests?
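One test I'm going to have him run, to separate name resolution from Apache itself: request the bare IP but send a Host header for one of the virtual hosts (the hostname below is a placeholder):

Code:
curl -H "Host: www.one-of-my-vhosts.com" http://xx.xx.xx.xx/

If that returns the vhost's content, Apache is serving the name-based vhosts fine and the problem is DNS or a proxy on his end; if it times out the same way, something on the server really is dropping those requests.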
I maintain Java servlet applications on my hosting platform [hosting-q.com] and access the applications from another site [wiredpages.qisoftware.com]. Today there was a load problem, and the hosting server responded by blocking requests coming from the other domain.
Do you know if there is an .htaccess directive that can perform this blocking or some sort of system trigger?
The thing is, only the servlets requested from the external domain were blocked; requests from the originating [hosting] domain went through fine.
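I was wondering whether it could be something as simple as this in an .htaccess file (just a guess at what the host might have put in place; standard mod_setenvif plus mod_authz_host syntax, with my external domain as the pattern):

Code:
# flag requests that arrive with the external site as the referrer
SetEnvIfNoCase Referer "^https?://(www\.)?wiredpages\.qisoftware\.com" external_ref
Order Allow,Deny
Allow from all
Deny from env=external_ref

That would block only requests coming from the external domain while leaving requests originating on the hosting domain untouched, which matches what I saw.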
I have been trying for the last 2 weeks to solve a big problem with one of our servers.
The client using our system (web based, with Apache and PHP) is a contact-center firm. They have about 120 operators, and all of them connect to our web server from the same IP.
We have been suffering DoS attacks from some of these operators. These are simple browser attacks: 5 or 10 operators will just hold down the F5 key and bombard the server with requests when they shouldn't.
We did manage to produce a PHP protection that recognizes the multiple requests and blacklists the user, but it's "too late" because the requests have already been sent and processed by the web server.
We use the user ID in the system to control who should be blacklisted, so this is all dependent on our own authentication.
Ideally, we need something EXACTLY like mod_evasive, but rejecting single requests instead of blocking the IP. For example: if a user calls the same URL 5 times in a 3-second span, we reject every further request for 30 seconds, but only the requests by that user.
If the webserver can make any use of it, the user id is stored in a cookie.
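To illustrate the idea, here is a minimal sketch of what we have in mind, assuming APCu is available and the cookie is named "userid" (both are assumptions, not part of our current setup); it would run before any heavy work, e.g. via auto_prepend_file:

Code:
<?php
// throttle.php - reject a user's requests early, before real processing.
// Thresholds from the example above: 5 hits on the same URL within
// 3 seconds => reject that user for 30 seconds.
$uid = isset($_COOKIE['userid']) ? $_COOKIE['userid'] : null;
if ($uid !== null) {
    if (apcu_fetch('block:' . $uid)) {          // user already blacklisted
        header('HTTP/1.0 503 Service Unavailable');
        header('Retry-After: 30');
        exit;
    }
    $key = 'rate:' . $uid . ':' . $_SERVER['REQUEST_URI'];
    if (!apcu_add($key, 1, 3)) {                // first hit opens a 3s window
        if (apcu_inc($key) > 5) {               // too many hits in the window
            apcu_store('block:' . $uid, 1, 30); // blacklist for 30 seconds
            header('HTTP/1.0 503 Service Unavailable');
            header('Retry-After: 30');
            exit;
        }
    }
}

The point is that the rejection happens in a few memory operations per request, instead of after our application has already done its work.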
I set up a cron job to run every minute, and it calls a PHP script like this:
wget http://example.com/some_script.php
Now, is each cron run a separate HTTP request, or what? Say my script takes more than 1 minute to execute completely, and it gets called again before it finishes. Will that affect the PHP script still running from the previous HTTP request, or will cron just create a new HTTP request and let the previous one finish its work? Technically, it shouldn't block/affect the previous request, but I'm not sure!
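My understanding (please correct me) is that each wget call is an independent HTTP request served by its own Apache/PHP process, so runs overlap rather than block each other. To be safe I'm thinking of adding a lock so an overlapping run just exits; a minimal sketch (the lock-file path is just an example):

Code:
<?php
// some_script.php - skip this run if the previous one is still going
$fp = fopen('/tmp/some_script.lock', 'c');   // 'c' creates the file if missing
if (!$fp || !flock($fp, LOCK_EX | LOCK_NB)) {
    exit("previous run still in progress\n");
}
// ... the long-running work goes here ...
flock($fp, LOCK_UN);
fclose($fp);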
There seems to be some problem with my server: none of the websites hosted on it are accessible. The HTTP requests either return a blank page or a page with a red square in the upper left-hand corner.
I am not sure if this is some kind of infection, a DNS problem, or a problem with the memory Apache is taking up: I have thousands of VirtualHost entries accumulated over the years, of which I am presently serving only a few hundred websites, but I never deleted the VirtualHost blocks for the non-existent sites.
At times the websites open, but most of the time they do not. And when they do not open, my HTTP requests are not logged in the Apache access log.
Even the customers have reported the same problem.
Also, just four days back I had a strange issue where all HTTP requests to my server would take me to [url].
I can SSH to server, and everything else is working fine.
When I try to open any website hosted on my server (around 50 of them) I am taken to the following malware website:
[url] [url] This is a problem with my Linux server running Apache and not a virus on my local computer, as customers from all over are reporting the same issue.
As soon as I restart Apache everything returns to normal, with no such redirects.
I think my server is being attacked, causing HTTP requests to get redirected to some malicious website.
This issue resurfaces almost every hour and does not go away till I restart Apache.
So far my datacenter techs have not been able to identify the cause of this.
My Linux server's HTTP daemon (Apache) stops serving websites every so often; as soon as Apache is restarted the error fixes itself, only to resurface within a few hours.
The Apache process is still running, i.e. Apache does not die, but no websites hosted on my server are accessible from a browser. And when this happens the Apache logs do not record any HTTP requests.
Instead, when this happens, all HTTP requests to my server are redirected to some weird Trojan website and my Norton Antivirus shows an alert/warning, for example: "Browser exploit at www.xxx.xxx was blocked" Risk Name: MSIE WebViewFolderIcon ActiveX Control BO
or another error like "Auto-Protect has detected Trojan.Fakeavalert".
At first I thought the problem could be with my laptop/ISP, so I logged on to the server via SSH and tried to open a website from the command line with "lynx mywebsite.com"; it shows the following error: "Alert!: HTTP/1.0 503 Service Unavailable".
Now, if I assume my laptop were infected, then why, as soon as I restart Apache and visit mywebsite.com, does everything return to normal with no such warnings? Why do I see those Norton error messages only when Apache is down with a 503? And when Apache is down with a 503, how come the HTTP requests always get redirected to some suspicious website while nothing gets logged in the Apache error log?
I think my server is being attacked, causing HTTP to become unresponsive, after which HTTP requests to my server are redirected to some malicious website. Is this correct?
Also, I suspect this is a PHP script exploit, as some customers have reported that Google has blocked their website for security reasons; I found <iframe> tags inserted in some PHP pages, which I fixed.
Also, another thing I noticed: when Apache responds with the 503, it references PHP 5.1.4 in the response headers:
Code:
[root@]# curl -I xxx.xxx.xxx.xxx   (my server IP)
HTTP/1.0 503 Service Unavailable
Server: Apache
X-Powered-By: PHP/5.1.4
Retry-After: 20
I am running PHP 4.3.9, so why does Apache respond with PHP 5.1.4 when this 503 error surfaces?
Also, since my Apache was down with the 503 error, a customer mailed in today saying: "It seems that my site www.xxxx.com is regularly down, and the winlogon virus is involved."
I suspect this is again because the HTTP requests start getting redirected?
I have a problem where, when I view my website over HTTP using Firefox, the page never stops loading. If I use IE, then I get a "page cannot be displayed" error.
If I use https then everything works fine.
I have noticed that if I delete lines from the files, I don't have this problem.
If I try viewing images directly (so I know it can't be an HTML or PHP problem), some look fine (the very small files). But larger files, around 168 KB, load halfway and the second half is distorted (green and purple chunks and other random colors and lines).
If I view the image through https, the image is perfectly fine!
If I put my site through w3.org's validator (just to see what it would report) it says "500 Line too long (limit is 4096)".
My website used to work, so I know there isn't any code in the pages that would cause this to happen.
Is there an Apache setting I should check? Perhaps it is sending a really long header that I cannot see? I am not really sure what to do. I have made my site force HTTPS, but it's slow and the certificate isn't signed.
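One thing I plan to try is dumping the raw response headers over both protocols to see whether anything abnormally long shows up (www.example.com stands in for my domain):

Code:
curl -s -D - -o /dev/null http://www.example.com/
curl -s -D - -o /dev/null https://www.example.com/

If the HTTP response contains a huge header line that the HTTPS response doesn't, that would match the validator's "Line too long" complaint and point at something mangling plain-HTTP traffic rather than at the page code.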
My server is still effed up from the MPack attack that I received.
I just received the following email; does anyone know what this means or how it could be done? The client IP is mine, so somehow my server is sending that request?
I was able to successfully delete all the files, but how do I now get rid of the directories themselves? When I do rm -fr "/arcade/images/. /" and then run locate ". ", I still get the same ". " directories listed.
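For what it's worth, what I'm going to try next is find, since it doesn't depend on getting the shell quoting exactly right (path from above; I'm assuming the leftover directories are literally named ". "):

Code:
# list the leftover directories first (-depth handles nested ones)
find /arcade/images -depth -type d -name ". " -print
# the files are already gone, so rmdir should remove them safely
find /arcade/images -depth -type d -name ". " -exec rmdir {} \;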
I've just been having a look through my logwatch e-mail, and have seen the following that I've not seen before:
Code:
A total of 3 unidentified 'other' records logged
GET http:/ /74.52.21.101/index.php2?goto=[url] HTTP/1.0 with response code(s) 2 404 responses
GET http:/ /74.52.21.100/index.php2?goto=[url] HTTP/1.0 with response code(s) 2 404 responses
GET http:/ /74.52.21.102/index.php2?goto=[url] HTTP/1.0 with response code(s) 2 404 responses
NB. I've added a space in the URLs to break the links.
What is happening here? This looks to be something dodgy.
I have a dedicated box with SoftLayer, and I have noticed at varying times over the past few months that with sites we host, the connection sometimes times out (I'll try to access 5 or 6 sites within 30 seconds or so and they all drop; then a minute later they load fine).
I opened a support ticket and they said it usually has to do with the # of requests Apache can handle, and that this can be modified. They stated they could: "tweak the apache configuration file in this server that can make it possible to handle more requests."
So my question is: what should the number of requests be set to? (I'm not sure what it is now, but I assume it's whatever the default is.)
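From what I've read, the directive they most likely mean is MaxClients (prefork MPM, Apache 2.2), and the right ceiling depends on memory, roughly (RAM you can spare for Apache) / (average size of one Apache child). A sketch with illustrative numbers only, not a recommendation:

Code:
# httpd.conf - example values only; size MaxClients to your RAM
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    ServerLimit         150
    MaxClients          150
    MaxRequestsPerChild 4000
</IfModule>

For instance, if each child uses ~25 MB and 2 GB is free for Apache, around 80 would be the safer ceiling; setting it far above what RAM supports just trades timeouts for swapping.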
I am getting a lot of GET requests from different IPs to four nonexistent PHP files on my server. Is there any way to block these requests to avoid the Apache resource usage they are generating?
I have installed mod_security, but I'm not sure what the block rule should look like.
The requests are going to images/log.php, images/log2.php, images/log3.php and images/logi.php of one of the sites hosted on the server. Is there any way to block these requests for a specific domain or path?
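Since the files don't exist anyway, I'm guessing either of these would work, placed inside that site's VirtualHost (the regex just matches the four paths above; the rule ID is arbitrary):

Code:
# plain Apache (2.2 syntax) - requests are rejected before PHP is involved
<LocationMatch "^/images/log(2|3|i)?\.php$">
    Order Allow,Deny
    Deny from all
</LocationMatch>

# or the mod_security equivalent
# (the id action is required on mod_security 2.7+; drop it on older versions)
SecRule REQUEST_URI "^/images/log(2|3|i)?\.php$" "phase:1,deny,status:403,nolog,id:1000100"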
We're trying to optimize the speed of our website. It's hosted on its own box.
We're looking for software that will monitor/aggregate the time it takes to serve certain requests -- for example, we would like to see which files take the longest to serve.
Is there server-side software that will take care of this?
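One low-tech option we're considering: Apache itself can log per-request service time via the %D format specifier (microseconds from request received to response completed), which can then be aggregated offline. A sketch:

Code:
# httpd.conf - append service time (%D, microseconds) to each log line
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog logs/timed_log timed
# afterwards, list the slowest URLs, e.g.:
#   awk '{print $NF, $7}' logs/timed_log | sort -rn | head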
I just went with Steadcom's VPS and they are great. I am setting things up and it's going pretty well, I have to dust off my linux/server knowledge that I haven't used in a couple of years.
Anyway, I'm creating a virtual host. I will have about 10 in the end, but right now I only have one domain IP-pointing to my new server. My registrar is NamesDirect.
When I create the virtual host, I can no longer access subdirectories directly. My virtual host directory is, say, /var/www/html/newdir.
If I try to reach http://www.domainname.com, which has been configured as a virtual host, it comes up correctly from the directory /var/www/html/newdir and works fine.
But if I try to reach http://myipaddress/newdir I get a 404 page-not-found error. Looking at the log, it's trying to reach /var/www/html/newdir/newdir, so it's applying the virtual host's document root even when I just hit the subdirectory directly.
Is this normal? Do I have something configured wrong? I have another domain that I have changed to IP-point to the VPS, but until it propagates I won't be able to test having 2 virtual hosts.
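From what I've read since posting, this may be normal: once any name-based <VirtualHost> is defined, the first one listed becomes the default and catches requests that arrive by bare IP. So I'm thinking of declaring an explicit first vhost that points back at the old document root, something like this (paths from my setup; the first ServerName is a placeholder):

Code:
NameVirtualHost *:80

# first-listed vhost = default, catches requests made by bare IP
<VirtualHost *:80>
    ServerName default.placeholder
    DocumentRoot /var/www/html
</VirtualHost>

<VirtualHost *:80>
    ServerName www.domainname.com
    DocumentRoot /var/www/html/newdir
</VirtualHost>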
Also, I have not set up DNS on my VPS. I don't really understand it, and IP pointing has always worked for me when I ran my own server from my home, so I was just going to do that. But I wonder if this could be one of the problems.
I have Apache 2.4.2 with OpenSSL 1.0.1c on Windows Web Server 2008 R2 (64-bit).
After 12 hours of heavier load, the SSL requests stopped working/being answered. However, if you requested the same page via http instead of https, it worked fine. Restarting the Apache server fixes this for a while; after a few more hours of traffic, the https requests stop working again. I checked the logs and found nothing notable; the mod_ssl entries just...
The site is called only by a client developed with Delphi 2007 (CodeGear user-agent). The Delphi client uses THTTPRIO to send HTTPS requests to the SOAP service.
So I just upgraded Apache 2.2.22 to Apache 2.4.3 and made sure to go through all the options that had changed and update the conf file accordingly. This included adding the cache module for SSL and changing the SSLMutex option over to Mutex default ssl-cache. We also turned off SSLCompression due to the CRIME attack vulnerability.
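For reference, the relevant part of the updated conf looks roughly like this (abbreviated; the cache size and path are the stock example values, not tuned):

Code:
# Apache 2.4 replacements for the old 2.2 SSL settings
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
SSLSessionCache shmcb:logs/ssl_scache(512000)
Mutex default ssl-cache
SSLCompression off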
We use apache strictly as a loadbalancer to 2 tomcat servers via mod_jk. Apache serves no static content at this time.
After being deployed, everything worked fine until later in the day. After 3 hours of heavier load (our site only takes significant traffic during business hours), the SSL requests stopped working/being answered. However if you requested the same page via http instead of https, it worked fine.
Restarting the Apache server fixes this, for a while. Again after a few hours of traffic, the https requests stopped working again. This time I turned the loglevel up to debug and restarted the Apache server.
As traffic slowed down, it took another 6 or 7 hours before SSL requests stopped working again. I checked the logs, and nothing notable; the mod_ssl entries just... stopped. (I don't know for sure that it's amount-of-traffic related; it just seems that way.)
I have tried reproducing this in a lab, but have not been able to get it to happen on the lab server.
OS: Windows Server 2008 R2
Apache: 2.4.3 vc9 build with OpenSSL 0.9.8, downloaded from apachelounge.org
mod_jk: 1.2.37 vc9, also downloaded from Apache Lounge
I've spent the last several months working on a huge upgrade of a couple dozen websites. The upgrades include modifying Apache so that visitors who arrive at links pointing to mysite/World/New_York are redirected to mysite/world/new-york. In other words, all my links now default to lower case, and underscores are replaced with dashes.
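The rules are roughly along these lines (a simplified sketch, not my exact conf):

Code:
# simplified sketch of the mapping (server/vhost context;
# RewriteMap is not allowed in .htaccess)
RewriteEngine On
RewriteMap lc int:tolower

# redirect any URL containing uppercase letters to its lowercase form
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^/(.*)$ /${lc:$1} [R=301,L]

# replace underscores with dashes, one per redirect round-trip
RewriteRule ^/(.*)_(.*)$ /$1-$2 [R=301,L]

One thing I'm now checking is whether a chain of these 301s can loop back on itself, since every extra hop is another request against the server.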
Unfortunately, publishing it has been an endless series of disasters. My websites have all crashed, and the server is unbelievably slow. Pages take forever to load (if they load at all), and I can scarcely publish files online. So the following notice sent to me by my webhost got my attention:
"It appears your own server IP is making GET requests to Apache, causing excessive load and causing service failures. On today's date, your IP made almost 6,000 connections to Apache:"
I have a little problem (on my Raspberry Pi) with the maximum number of concurrent connections. When I open multiple tabs of a webpage that keeps persistent connections, Apache is unable to serve more requests. Here is the (shortened) mod_info output (which also takes some time until there is a process kind enough to serve the request):
Code:
Server Version: Apache/2.4.10 (Raspbian) OpenSSL/1.0.1k
Server MPM: prefork
5 requests currently being processed, 9 idle workers
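From the MPM docs I gather that with prefork, each persistent (keep-alive) connection pins one worker until KeepAliveTimeout expires, so a handful of tabs can exhaust a small pool. What I'm considering (values are guesses for a Pi, untested):

Code:
# mpm_prefork.conf - illustrative values for a small box
<IfModule mpm_prefork_module>
    StartServers             4
    MinSpareServers          4
    MaxSpareServers         10
    MaxRequestWorkers       30
    MaxConnectionsPerChild 500
</IfModule>

# shorten how long an idle keep-alive connection holds a worker
KeepAliveTimeout 2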
I host my DNS with DNSmadeeasy.com. I noticed that I get more than 350,000 DNS requests per day for my main domain. The domain gets about 80,000 uniques/day, so it is strange that there are 350,000 DNS requests/day. It seems I'll go over my quota because of this.
The TTL for all domains is set to 86400.
Is there a way to discover how this is possible? And is there a way to lower this number of DNS requests?
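The first thing I plan to verify is that the 86400 TTL is actually what the nameservers hand out (example.com stands in for my domain, and the nameserver is whichever one DNSmadeeasy assigned):

Code:
dig +noall +answer example.com @<your dnsmadeeasy nameserver>
# the second column of the answer is the TTL in seconds, e.g.:
# example.com.  86400  IN  A  203.0.113.10

Also worth remembering, as I understand it: every resolver on the internet caches independently, and lookups for other record types (MX, AAAA, NS) count too, so raw query counts can run several times higher than visitor counts.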