I have a 6GB backup file created with another Plesk Backup Manager. Now I am trying to upload this backup file to my Plesk Backup Manager, but after the upload reaches 3% I get a "413 Request Entity Too Large" error. I tried disabling NGINX, but I still get the error.
How can I resolve this error, or is there any other way to upload my file to Backup Manager?
I see that Backup Manager has a file size restriction of 2GB; how can I increase this?
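As I understand it, the 413 usually comes from whichever layer handles the upload first: in Plesk that is normally nginx (its client_max_body_size directive) in front of Apache, and Plesk's own upload limit may be separate again. For the Apache side, this is the directive I believe is involved (shown here only as a test setting):

# httpd.conf or vhost config: LimitRequestBody caps the request body in bytes;
# setting it to 0 (which is also the compiled-in default) removes the cap,
# so a 6GB upload is not rejected at this layer.
LimitRequestBody 0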
When I check the Apache status, I see one domain sending many requests to the server, for example:

domain.com 10.20.30.40
domain.com 10.20.30.40
domain.com 10.20.30.40
domain.com 10.20.30.40
domain.com 10.20.30.40

How can I prevent this problem? It is wearing out both me and my server, because it keeps Apache working non-stop. RAM usage is at 65%!
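If it turns out to be a single abusive client, the simplest stopgap I know of is to deny that address outright (Apache 2.2 syntax; the directory path is a placeholder for the real docroot), with something like mod_evasive as a more general rate-limiting option:

<Directory "/var/www/vhosts/domain.com/httpdocs">
    # Refuse everything from the offending address:
    Order allow,deny
    Allow from all
    Deny from 10.20.30.40
</Directory>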
I am working with XAMPP 5.6.8 (Apache 2.4.4, MySQL 5.5.32 and PHP 5.6.8) on a 64-bit Windows 7 Ultimate (Service Pack 1) operating system.
I am working with an Arduino UNO and a WiFi Shield connected to the Apache server.
I am sure the Arduino is connected to the WiFi network and to the server, and it does send the GET request to the server.
Apparently everything is OK, because I can see the 200 OK message from the server in the Arduino serial monitor, but I find no trace of that request in the server log, although all the requests made from the browser (by typing the server address into the browser address bar and pressing Enter) appear in the server log.
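In case it helps anyone reproduce this, I am adding a temporary catch-all log that records the client address and the Host header, to check whether the shield's GET is being answered by a different virtual host or port than the one whose log I am reading (the log path is a placeholder):

# Log every request with client IP, Host header, request line and status:
LogFormat "%h %{Host}i \"%r\" %>s %b" probe
CustomLog "logs/probe.log" probe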
Every request is getting processed 3 times. In other words, if I point my browser to the URL of an image hosted on this server, it generates 3 lines in the access log each time I refresh the page.
If I point it to a script which logs something to a file, it logs it 3 times, showing it's run all 3 times.
I haven't touched the httpd.conf or any other configuration. Any idea what could cause this?
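One guess I am checking: every CustomLog directive in effect for a request writes its own line, so overlapping log definitions (say, one in httpd.conf and one pulled in by an include) can multiply log entries. A script actually executing three times, though, means Apache really handled it three times, which an enriched log format should help confirm:

# Add Referer and User-Agent to see whether the repeats are genuinely
# separate requests from the same client (e.g. a browser retry or extension):
LogFormat "%h %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" debug
CustomLog "logs/debug.log" debug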
Is there any tool out there (I'd prefer command line) made especially for analyzing Apache error log files? I need something that can summarize the information in the log and report it back to me.
On my web site I have several index pages in different languages, in the following format
[URL] ....
Two days ago I noticed greatly increased Googlebot activity on my site, and when I checked my log file I found that all the pages crawled were wrong web addresses: existing files from my site were being appended to the index pages above, like
/folder1/folder2/file.html
So, the requested strings looked like
[URL] ....
And surprisingly, they all returned code "200".
My question is: is there any way to rewrite such requests to the first ".html" found in the string?
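Something along these lines is what I am imagining, if it is possible - cut everything after the first ".html" and redirect to the real page (I gather AcceptPathInfo Off would alternatively make such URLs return 404 instead of 200):

RewriteEngine On
# The non-greedy ".+?" captures up to the first ".html";
# anything trailing after it is dropped by the 301 redirect.
RewriteRule ^(.+?\.html)/.+$ /$1 [R=301,L]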
I have a question about Apache on CentOS. I installed Apache and I want to know which MPM is used by default; two MPMs are defined in Apache, but which MPM does Apache actually use to serve requests?
I've just joined the group and am new to Apache/PHP. I have just assembled a website in Joomla/VirtueMart called petslovezone.com.au. I want to redirect all requests such as
1. http://xyz.com to https://xyz.com
2. http://www.xyz.com to https://xyz.com
3. xyz.com to https://xyz.com
4. www.xyz.com to https://xyz.com
I now know I have to change the "RewriteEngine On" section of .htaccess. What would be the best code to do all of the above?
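From the examples I've found so far, I gather it would be something like the following (with xyz.com standing in for petslovezone.com.au), though I'd appreciate corrections:

RewriteEngine On
# Anything that is not already https, or that carries www.,
# goes to https://xyz.com in a single 301 hop:
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^www\. [NC]
RewriteRule ^ https://xyz.com%{REQUEST_URI} [L,R=301]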
As we are planning to implement mobile support for our platform, we want to distinguish in Apache between requests coming from mobile and from the web. We will be using Apache as a reverse proxy, and we want it to differentiate the request source and forward it to the required destination. Is this possible?
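If it is possible, I imagine it would be a User-Agent test in front of the proxy rules, something like this sketch (the backend hostnames and the mobile pattern are placeholders, and mod_proxy/mod_proxy_http must be loaded for the [P] flag):

RewriteEngine On
# Requests whose User-Agent looks mobile are proxied to the mobile backend:
RewriteCond %{HTTP_USER_AGENT} "android|iphone|mobile" [NC]
RewriteRule ^/(.*)$ http://mobile-backend.internal/$1 [P,L]
# Everything else goes to the web backend:
RewriteRule ^/(.*)$ http://web-backend.internal/$1 [P,L]
ProxyPassReverse / http://web-backend.internal/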
I've been having trouble the past few days with someone who's been "attacking" my site, so to speak, by continuously downloading very large files with as many connections as he can open. I operate a large downloads site for computer games, and this person has selected the largest files (400-500MB). I'm not sure of the real intent, other than to clog up my bandwidth capacity. He also appears to be using proxies, since as soon as I ban one, another shows up, seemingly from China.
Anyway, I have mod_bw and I've limited the number of connections in the downloads area to 2. While that works OK, his tool uses threads like a download manager would, and he's using up 30-40 child threads for his 2 file downloads.
So, two questions:
Is there any way to not only limit file downloads to 2, but also limit the number of connections per request? Many of my visitors do use download managers and I'd like them to continue using them, but with a reasonable number of threads like 6 or 8, not 30.
Also, is there a way to restrict access to someone using a proxy?
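For the thread limit, mod_limitipconn is the closest thing I've found - it caps simultaneous connections per client IP for a given area - and a Via/X-Forwarded-For test can at least reject proxies that announce themselves (many don't). A sketch of both, with the /downloads path as a placeholder:

<IfModule mod_limitipconn.c>
    <Location /downloads>
        # No more than 8 simultaneous connections per client IP here:
        MaxConnPerIP 8
    </Location>
</IfModule>

# Refuse downloads that arrive through a self-announcing proxy:
RewriteCond %{HTTP:Via} . [OR]
RewriteCond %{HTTP:X-Forwarded-For} .
RewriteRule ^/downloads - [F]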
I have an Xitami server and am migrating to Apache httpd. The regular HTTP server is working fine. I tried configuring SSL, but no requests are coming through. I know 443 is open on the router because it works fine under Xitami. I checked the logs and Apache is starting fine. I am attaching my httpd.conf and the startup log. If I try to access the website using https, it just times out and nothing appears in the log file. I have replaced my real domain with domain.com. I have tried many different examples, but cannot get it to work and am not sure what to do.
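For comparison, this is the minimal shape I understand an SSL vhost needs (paths and names are placeholders for what is in my attached config); since a timeout with an empty log suggests the connection never reaches Apache at all, the Listen 443 line is the first thing I would double-check:

Listen 443
<VirtualHost *:443>
    ServerName domain.com
    DocumentRoot "C:/www/domain.com"
    SSLEngine on
    SSLCertificateFile "conf/ssl/domain.com.crt"
    SSLCertificateKeyFile "conf/ssl/domain.com.key"
</VirtualHost>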
So I've set everything up manually a few times before now, but I got so bored of configuring everything for a manual install that I just said screw it and used XAMPP this time - so my circumstances are not completely ideal.
Basically, what I am looking to find out is how to improve loading speeds for Apache, PHP and MySQL on my dedicated server.
The server I have has the following spec:

- Intel Xeon CPU E5-1650 v2 (3.50GHz, 6 cores / 12 threads)
- 64 GB DDR3 ECC
- 2 x 2TB SATA3 (RAID 0/1)
I use Windows Web Server 2008 R2, so only 32GB of the RAM is usable.
With all the above aside, here is the important part: while people are browsing the websites I have configured, they are randomly hit with a blank white page saying "Your request has timed out. Please retry the request." I get about 100 unique hits daily and a lot of people report the same problem; I have even had it myself.
It feels as if the server has much more power than Apache and co. are trying to utilize - what can I do?
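In case it matters, these are the knobs I understand apply on Windows, where mpm_winnt runs a single multi-threaded child process (the values are starting points to experiment with, not recommendations):

<IfModule mpm_winnt_module>
    # One child serves everything on Windows; raise its thread pool
    # if requests are queueing behind the default of 64 threads:
    ThreadsPerChild 256
</IfModule>

# Short keep-alives stop idle browsers from pinning threads:
KeepAlive On
KeepAliveTimeout 3
MaxKeepAliveRequests 200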
I am using 2.2.29 on Windows. I am trying to remove one cookie from the request header before passing the request to the application, but I'm having trouble. The cookie is in the middle of the request header.
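What I have been attempting is along these lines with mod_headers (UNWANTED stands in for the real cookie name) - posting it in case the regex is where I am going wrong:

# Strip the cookie wherever it appears in the header,
# then tidy up any dangling separator left at the end:
RequestHeader edit Cookie "UNWANTED=[^;]*(;[ ]*)?" ""
RequestHeader edit Cookie ";[ ]*$" ""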
I want to redirect the main domain's http:// and www requests to https://.
I added this code:
# First rewrite to HTTPS:
# Don't put www. here. If it is already there it will be included, if not
# the subsequent rule will catch it.
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# Now, rewrite any request to the wrong domain to use www.
RewriteCond %{HTTP_HOST} !^www\.
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
I want to delete and reset the domlogs files on my Apache server, as they have become too large, but I want to keep a record of the past few years of website stats.
I could download the domlogs files (they look like MS-DOS applications in Windows, but they are just plain text), but once I have them I am not sure how to read them. Is there some off-server application that could open these files? I currently view stats in AWStats from cPanel, but the size of the files is now causing problems with updating.
The only thing I can think of doing is taking screenshots of the relevant stats that I want to keep before deleting the files, but I was thinking there must be a better solution than that.
I am using the latest version of Apache on a Windows XP machine.
When my web service is down for maintenance, since Apache will still be up and running, I would like Apache to serve an XML file as the response for the appropriate request. I have three operations available: makePayment, calculateFee, and voidPayment.
Is it possible to have Apache determine what type of request is being made? For example, if I have an XML error page for each operation, how will Apache know which XML file to serve based on the operation requested by the client?
To make it clearer: what is the best practice for configuring Apache to know which request is being made, in order to serve the appropriate XML file?
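What I picture is a set of rewrites enabled only during maintenance, one per operation - something like this sketch (the /services/ prefix and the XML paths are placeholders for my real endpoints):

RewriteEngine On
# While in maintenance, answer each operation with its own static XML body:
RewriteRule ^/services/makePayment  /maintenance/makePayment.xml  [L]
RewriteRule ^/services/calculateFee /maintenance/calculateFee.xml [L]
RewriteRule ^/services/voidPayment  /maintenance/voidPayment.xml  [L]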
So I've got a problem where a small percentage of incoming requests result in "400 Bad Request" errors, and I could really use some input. At first I thought they were just caused by malicious spiders, scrapers, etc., but they seem to be legitimate requests.
I'm running Apache 2.2.15 and mod_perl2.
The first thing I did was turn on mod_logio, and interestingly enough, for every request where this happens the request headers are between 8000-9000 bytes, whereas for most requests they're under 1000. Hmm.
There are a lot of cookies being set, and it's happening across all browsers and operating systems, so I assumed it had to be related to bad or "corrupted" cookies somehow - but it's not.
I added "%{Cookie}i" to my LogFormat directive hoping that would provide some clues, but as it turns out half the time the 400 error is returned the client doesn't even have a cookie. Darn.
Next I fired up mod_log_forensic hoping to be able to see ALL the request headers, but as luck would have it nothing is logged when it happens. I guess Apache is returning the 400 error before the forensic module gets to do its logging?
By the way, when this happens I see this in the error log:
request failed: error reading the headers
To me this says Apache doesn't like something about the raw incoming request, rather than a problem with our rewriting, etc. Or am I misunderstanding the error?
I'm at a loss where to go from here. Is there some other way that I can easily see all the request headers? I feel like that's the only thing that will possibly provide a clue as to what's going on.
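One thing I do plan to rule out, given the 8000-9000 byte header sizes: Apache's default cap on a single request header line is 8190 bytes (LimitRequestFieldSize), which is suspiciously close. Raising the limits temporarily should show whether an oversized header line is the trigger:

# Defaults are 8190 bytes per header line and 100 header fields in total:
LimitRequestFieldSize 16380
LimitRequestFields 128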
My customer has an external facing Apache server that is acting as a reverse proxy to two internal applications. They have:
- external addresses for each app resolve to different IP addresses, so app1.their_domain.com and app2.their_domain.com resolve to 77.3.170.10 and 77.3.170.11 respectively
- the Apache server has two network interfaces with IP addresses 192.168.10.10 and 192.168.10.11
- the external IP addresses resolve to the above internal addresses
- the firewall between the Apache server and the internal app servers is configured to allow traffic from 192.168.10.10 to reach app_server1, and traffic from 192.168.10.11 to reach app_server2, both using port 7777
I have configured a virtual host in httpd.conf for each IP, i.e.
This works fine in that the external addresses are routed to the correct application; however, the firewall is blocking requests to the second app, as the requests appear to come from the Apache server's 'primary' IP address, 192.168.10.10, instead of 192.168.10.11.
Is it possible to send requests using the ip address from the relevant VirtualHost?
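For context, the vhost layout is essentially this (the backend names are placeholders for the real app servers). As far as I can tell, stock mod_proxy leaves the choice of outgoing source address to the operating system's routing, so I suspect the fix may sit at the OS level rather than in httpd.conf:

<VirtualHost 192.168.10.10:80>
    ServerName app1.their_domain.com
    ProxyPass        / http://app_server1:7777/
    ProxyPassReverse / http://app_server1:7777/
</VirtualHost>

<VirtualHost 192.168.10.11:80>
    ServerName app2.their_domain.com
    ProxyPass        / http://app_server2:7777/
    ProxyPassReverse / http://app_server2:7777/
</VirtualHost>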
I'm curious. I have read some stuff recently about Amazon, Mosso and other clouds. I know Mosso has switched over to a request-based pricing model, and I realized that I am not sure exactly what a 'request' is.
I think that a hit, as tallied by AWStats, Webalizer, etc., would be the same thing as a request, but I wasn't 100% sure that was the case with Mosso. I actually contacted Mosso support (someone I know is considering using them) to ask for clarification on what a request is. They stated that a page with two images would be three requests: one for each image and one for the page itself. I asked if that was the same as 'hits' and they said no, it isn't. This didn't sound right to me, because my understanding of 'hits' matches how they described 'requests'.
So, I figured I would just ask the experts on WHT.
I have a script that needs to make a port 80 request to itself, and it seems that something is blocking that request. Where should I look to correct this problem?
I couldn't find my past forum post regarding a good service from LA to Australia. Mainly, I'm looking for an LA service that will provide me with excellent times to Melbourne, Australia. I need a company that is small and can provide me with fast support and great speeds.
Can I have some of you ping 24fans.net and let me know what your results are? It would also be good to know where you're located.
My ISP is being crappy and my DSL line is basically down every few minutes. They're supposed to fix it tomorrow.
My current results are:
Reply from 70.84.145.91: bytes=32 time=66ms TTL=51
Reply from 70.84.145.91: bytes=32 time=59ms TTL=51
Reply from 70.84.145.91: bytes=32 time=65ms TTL=51
Reply from 70.84.145.91: bytes=32 time=58ms TTL=51
This doesn't really have anything to do with my DSL being crappy; rather, I'm just curious what kind of responses people are getting around the world.