Plesk 12.x / Linux :: Content-length Limit When Uploading Large Files
Jun 18, 2015
The domain's PHP settings in Plesk are set to 2G, yet I get this error when uploading a 48 MB file through WordPress. I assume I need to modify something manually in a conf file somewhere to allow large uploads?
Requested content-length of 48443338 is larger than the configured limit of 10240000..
mod_fcgid: error reading data, FastCGI server closed connection...
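The 10240000-byte limit in that first error is mod_fcgid's request-size cap, which sits in front of PHP, so no PHP setting in Plesk can raise it. A minimal sketch of the usual fix, assuming a stock mod_fcgid setup (the conf path varies by distro, and older mod_fcgid spells the directive MaxRequestLen):
# /etc/httpd/conf.d/fcgid.conf, value in bytes; 134217728 = 128 MB
FcgidMaxRequestLen 134217728
Restart Apache afterwards so the new limit takes effect.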
When I download a file from my server, only specific extensions report the file size. This is really annoying, since I want to be able to see how much time is left to finish a download.
For example, I uploaded a video with the .vob extension:
file.vob --> does not show filesize when downloading
If I rename the same file to a different extension:
file.avi --> works fine, shows filesize when downloading
file.mp3 --> works fine, shows filesize when downloading
file.rar --> works fine, shows filesize when downloading
file.mp4 --> does not show filesize when downloading
file.wmv --> does not show filesize when downloading
These are direct download links, not using any download scripts or anything. Why do some extensions display the filesize and some not? I am using an Apache 2.x server.
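A browser can only show the size (and time remaining) when the response carries a Content-Length header, and a common way Apache 2.x drops it is mod_deflate: extensions Apache cannot map to a MIME type fall back to the default type, get compressed, and go out chunked without a length. A hedged check, assuming mod_deflate is enabled and the types below are the ones missing from mime.types (verify the exact type names locally):
# httpd.conf or .htaccess: declare the types so they stop matching the deflate filter
AddType video/mp4 .mp4
AddType video/x-ms-wmv .wmv
AddType video/mpeg .vob
# or exempt them from compression outright:
SetEnvIfNoCase Request_URI \.(?:vob|mp4|wmv)$ no-gzip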
I'm facing a very strange FTP issue with one of my shared-hosting accounts. None of my other servers has this problem, but on this one, when I try to upload a file (any file) larger than 500 KB from my local PCs, in most cases the transfer stops partway through and hangs until it times out.
There are two interesting things, though. First, the transfer typically hangs when approximately 248 KB of the file has been transferred; please see the attached screenshot for an example. What I mean is: if I randomly pick a file and attempt to upload it to my host 10 times, 5 times it will hang at 248 KB transferred, 3 times it will hang at other points *near* 248 KB (typically 224 KB or 280 KB), 1 time it will hang at some other random point, and 1 time it might actually upload successfully (yes, there is still a tiny chance of that).
Second, my normal upload speed is 80-100 KB/s, and lately I found that when I cap the upload speed in my FTP client (e.g. max 30 KB/s), everything works without any problem: no hangs, no interruptions. When I remove the limit and upload at my regular speed, the problem appears again.
It seems to me the upload hangs only when the speed is above roughly 60 KB/s. My host, however, told me they have customers uploading at over 400 KB/s without any problem, and said "there's no problem or limitations on the server at all".
Up until now I have done the following to troubleshoot the issue, with no luck:
- Contacted my host.
- Disabled/enabled PASV mode in my FTP client.
- Tried different FTP clients on different computers (FlashFXP and FileZilla).
- Rebooted my router and reset it to factory defaults.
- Contacted my ISP about the issue; they "did something" but nothing helped.
- Rebooted all my PCs.
- Disabled the firewalls on both my PC and the router.
Furthermore, I asked a friend in another city, on another ISP, to test the FTP upload, and unfortunately he got exactly the same problem. I've searched the internet for hours, but no one seems to have had the same issue.
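For what it's worth, stalls that appear only above a certain throughput, at fairly repeatable byte offsets, are classic symptoms of an MTU/path-MTU black hole somewhere on the route rather than an FTP setting. One test worth running, sketched for a Windows client (the hostname is a placeholder):
# probe the largest payload that passes unfragmented;
# 1472 bytes of payload + 28 bytes of headers = a 1500-byte MTU
ping -f -l 1472 ftp.example.com
# if it fails, step the size down until it succeeds, then set the
# router/NIC MTU to the working payload size + 28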
I have a 6 GB backup file created with another Plesk Backup Manager. Now I am trying to upload this backup file to my Plesk Backup Manager, but after uploading about 3% I get a "413 Request Entity Too Large" error. I tried disabling NGINX but still get the error.
How can I resolve this error, or is there any other way to get my file into Backup Manager?
I see that Backup Manager has a file size restriction of 2 GB; how can I increase this?
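For context, the 413 comes from the panel's own web server (sw-cp-server, which is nginx-based), not from the domains' nginx, which is why disabling NGINX for hosting changes nothing. A rough sketch under the assumption of a stock Plesk 12 layout (the conf.d path and include behavior differ between versions):
echo 'client_max_body_size 10g;' > /etc/sw-cp-server/conf.d/upload.conf
service sw-cp-server restart
# alternatively, skip the browser upload entirely: copy the backup into the
# server repository over FTP/scp (often /var/lib/psa/dumps/ on Linux) and
# let Backup Manager list it from there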
I am currently trying to limit incoming UDP length 20 packets on a per IP basis to 5 a second using IPTables on a Linux machine (CentOS 5.2).
Basically, if an IP is sending more than 5 length-20 UDP packets a second to the local machine, I would like the machine to drop the excess length-20 packets coming from that IP.
The modules that should work perfectly for this type of "rule set" are:
- limit module
- length module
Both are installed/compiled with the kernel/iptables correctly and functioning.
I have tried several rule sets, and none of them fully works: either they drop all length-20 UDP packets going to the local machine, or they allow them all through. Below is one of the rule sets I used, and it is not working. Any ideas what the issue could be?
iptables -N UDPC1
iptables -A INPUT -p udp -m length --length 20 -j UDPC1
iptables -A UDPC1 -p udp -m length --length 20 -m limit --limit 5/second -j ACCEPT
iptables -A UDPC1 -j DROP
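One likely culprit, for what it's worth: the limit match keeps a single global token bucket, so the rule above throttles length-20 UDP traffic as a whole, never per source IP; once the shared bucket empties, everything falls through to the DROP. A per-IP version needs the hashlimit match instead. A sketch, assuming hashlimit is compiled in on this kernel:
iptables -A INPUT -p udp -m length --length 20 -m hashlimit --hashlimit 5/second --hashlimit-mode srcip --hashlimit-name udp20 -j ACCEPT
iptables -A INPUT -p udp -m length --length 20 -j DROP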
I have been seeing these in my log and have received complaints from customers unable to get their mail out. The messages just stay in the queue and go nowhere.
What are others successfully using for tlsserverciphers and tlsclientciphers? Maybe it's the DH key being too small. How can this be fixed on qmail?
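If the failures really are a too-small DH key, regenerating the DH parameters the qmail TLS patch ships with is the usual remedy. A sketch, assuming a netqmail/TLS-patch layout (the control file name depends on the patch version; older ones use dh1024.pem, and some regenerate it from cron via update_tmprsadh):
openssl dhparam -out /var/qmail/control/dh2048.pem 2048
chmod 640 /var/qmail/control/dh2048.pem   # match the ownership/modes of the other control files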
I uploaded my site via FTP to httpdocs, but the default Parallels domain page still appears. I've made sure index.html and the other files are all correct, but nothing has worked. What else can I do?
My VPS hosted by Strato was hacked and now seems to be part of a botnet. Until now I thought the provider's automatic backups would be enough, so I made no separate backups with pleskbackup. Unfortunately, the attack happened earlier than my oldest backup.
Now I want to move the complete server content, including the configuration of approx. 10 domains, to a new server. So I want to make a backup of the Plesk 9.5 server using pleskbackup and import it on the new server running Plesk 12. I can access the old server in recovery mode only, which means a rescue system runs with the content of the old server mounted under /repair. Is there a way to tell pleskbackup that the content to back up is mounted under /repair? Otherwise it seems I have to move the content manually. (I tried starting the old server in normal mode, but it immediately starts doing evil things, so that doesn't seem like a good option...)
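pleskbackup has no switch for an alternate root, but if the rescue system allows it, you can chroot into the mounted installation and run it from inside. A rough sketch under that assumption (MySQL must be started inside the chroot, since pleskbackup reads the psa database, and the exact pleskbackup arguments differ between versions, so check pleskbackup --help):
mount --bind /proc /repair/proc
mount --bind /dev /repair/dev
chroot /repair /bin/bash
/etc/init.d/mysqld start   # or mysql, depending on the distro
/usr/local/psa/bin/pleskbackup all /tmp/server-backup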
I've just uploaded my website files onto the server and the website is working fine, but I'm not sure my files are 'arranged' right. I uploaded my images in a folder, but all the rest of the files are 'loose' alongside the index file. If I go to either of my domain names the website appears, which is OK for now, but I only really intended it to be under one domain name, so there may be problems if I want another site up there.
I think my files should be in a folder. Is this right? Should the index file be in the same folder, or should it be outside the folder with the links changed accordingly? I've been emailing my hosting service and they've been trying to help, but I feel a bit thick because I don't really understand what they're saying. Could someone tell me, in very plain English, how the files should be arranged?
I just bought a PHP file-management script and it's running smoothly; the only problem is that I can't upload big files (> 1 GB) with it. I asked the developer, and he said it shouldn't be a problem, since he has sold the software to other people before and never had a complaint like this.
OS: Slamd64, Apache 2.2.10, PHP 5.2.8
I tried changing:
post_max_size = 1900M
upload_max_filesize = 1500M
Why is it that you can upload large files over FTP, but you can't go beyond the limit when uploading through a browser?
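FTP streams the file straight to disk, while a browser upload goes through PHP's POST handling, so several limits FTP never touches apply. Besides the two directives above, PHP 5.2 also enforces memory_limit (the manual says it should exceed post_max_size), the input/execution timeouts, and Apache's own request-body cap. A sketch of the other knobs, with illustrative values:
; php.ini
memory_limit = 2048M   ; documented to need to be larger than post_max_size
max_input_time = 3600   ; seconds PHP may spend reading the POST body
max_execution_time = 3600
; httpd.conf: make sure Apache is not capping the body either, e.g.
; LimitRequestBody 0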
I want to install a script (a simple WordPress blog) on my website, but I'm astonished at how many files I have to upload to my server's directory. Uploading those files one by one will take forever. Is there a way to upload multiple files all at once?
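Any full FTP client (FileZilla and the like) will queue and transfer whole folders recursively, and if the host provides shell access, uploading one archive and extracting it server-side is faster still. A sketch, with hostname and paths as placeholders:
scp wordpress.zip user@example.com:~/httpdocs/
ssh user@example.com 'cd ~/httpdocs && unzip wordpress.zip && rm wordpress.zip'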
Hi guys, I've been having problems trying to edit my php.ini file, which I think I've now fixed.
The whole reason I wanted to do this was because I've just moved to Media Temple from another hosting company and I'm having a couple of problems with the switchover.
Basically, I use a CMS system to add properties that appear on the main website. I also upload PDFs and images. With the old hosting company, the PDFs and images went into folders called dnDir/pdf and dnDir/images, but on Media Temple they are going into a folder called tmp. I really want them to go to the same place they used to.
Is this an issue with php.ini that I need to rectify? If so, could you point me in the right direction?
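Possibly not php.ini at all: PHP always parks uploads in upload_tmp_dir first, and the application is responsible for moving each file to its final folder, so files accumulating in "tmp" usually mean the CMS's move step is failing on the new host (missing or unwritable target directories) rather than a config difference. A quick check, sketched with the paths from the old setup:
# recreate the expected target folders and make them writable to the site user
mkdir -p httpdocs/dnDir/pdf httpdocs/dnDir/images
chmod 755 httpdocs/dnDir/pdf httpdocs/dnDir/images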
It appears that some people like to take advantage of online web applications such as WordPress that have PHP files with permissions set to 777. They use those as a means of creating an upload file, and the upload files they create then somehow have access to the whole server... Is there any way of preventing this from happening?
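Two common mitigations, sketched here for an Apache 2.2-era setup with paths as placeholders: drop the 777 permissions (the web server user only needs write access to specific upload directories), and refuse to serve PHP from any directory that accepts uploads:
# find and fix world-writable files and directories under the vhosts
find /var/www/vhosts -type f -perm -0002 -exec chmod 644 {} \;
find /var/www/vhosts -type d -perm -0002 -exec chmod 755 {} \;
# then, in the upload directory's .htaccess (assuming AllowOverride permits it):
# <FilesMatch "\.php$">
#     Order allow,deny
#     Deny from all
# </FilesMatch>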
We're outgrowing our current bulk storage system and I'd like to solicit opinions.
With 2 TB disks and a 16 disk array, it's possible to have a single 28 TB volume (after deducting RAID5 parity overhead and a hot-spare disk). I've seen arrays from Aberdeen with 48 and 96 disks, for nearly 200 TB. Windows supports up to 256 TB per volume when 64K cluster sizes are used.
Our backup system uses a ton of storage space, and it would be far more convenient, and more efficient from a utilization standpoint, to access that space as a single volume.
Breaking it up into smaller chunks, such as 2 TB each, means we have to make a "best guess" on balancing actual need.
For example, if we assign 25 servers to each 2 TB volume for backup storage purposes, some volumes might only see 800 GB of consumption (remaining 1.2 TB allocated but not used) while other volumes might get 1.6 TB used (remaining 400 GB allocated but not used). Key concept: wasted space, because we have to over-estimate need to assure adequate headroom.
From the opposite viewpoint, if we had a sudden increase in need that exceeded the available space allocated to that volume, we'd have to move that server to a different volume. Key concept: increased admin workload to monitor and re-balance distribution as needed.
Now if we used one giant volume, there would be no guesswork, no "allocating more than we think is needed" for a bunch of small volumes. All servers share one huge common pot.
But there has to be a practical limit from a system-overhead standpoint. Our backup sets consist of a few multi-gigabyte files, so using 64K clusters will not cause much waste from slack space.
I'd like to get your opinions on maximum disk volume sizes from a practical standpoint.
I run a free hosting server with a rule of 3 MB max per uploaded file. It works for FTP, but somehow it doesn't work for PHP; for PHP the effective limit on my server seems to be 100 MB (no idea why).
I use the following directives to limit file size in php.ini:
; Maximum size of POST data that PHP will accept.
post_max_size = 4M
(4 just for some margin)
; Maximum allowed size for uploaded files.
upload_max_filesize = 3M
Yet I can still find 100 MB files on disk. This is part of the Apache log from the account that uploaded one:
HTTP/1.1" 302 188 [url] "Mozilla/5.0 (Windows; U; Windows NT 5.1; pl; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3"
As a result (at least I think so), there was a 100 MB file in his home dir.
Any idea how he can POST such big files despite those two directives?
I have also set LimitRequestBody to 5194304 and LimitXMLRequestBody to 5194304 in apache2.conf, which should also stop files as big as 100 MB from being POSTed.
I have PHP 4.4.4-9 on Debian Linux, with Apache 2.2.3 running the worker MPM and PHP as FastCGI.
P.S. I removed server info like the IP, directories, and addresses so as not to expose specifics about my server in public; I put [] there.
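Two hedged avenues given that stack: with PHP as FastCGI, each wrapper script can load its own php.ini, so the global limits may never apply to that vhost; and LimitRequestBody only acts in the configuration context where it is set, so a vhost or .htaccess can quietly override apache2.conf. Checks, with the username as a placeholder:
# which php.ini does the affected account's PHP actually load?
su - theuser -s /bin/sh -c 'php-cgi -i' | grep -i 'Loaded Configuration File'
# any per-vhost or .htaccess overrides of the body limit?
grep -ri 'LimitRequestBody' /etc/apache2/ /var/www/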
I have been trying, quite unsuccessfully, to import a large SQL DB file via phpMyAdmin for one of my clients. Since the DB file is about 250 MB, I get a server timeout error. How can I do this via SSH? I have a CentOS 6.5 server, 64-bit, running Plesk 12.0.18.
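Importing from the shell avoids the web timeout entirely. On Plesk 12 for Linux, the MySQL admin password is readable from /etc/psa/.psa.shadow, so something like this usually works (database and dump file names are placeholders):
mysql -uadmin -p`cat /etc/psa/.psa.shadow` client_db < /root/client_dump.sql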
I am getting the following error in Plesk, on the Applications page:
"APS Catalog error: Unable to parse the ATOM content. DOMDocument::loadXML(): Opening and ending tag mismatch: hr line 5 and body in Entity, line: 6"
Clearing cache (as advised here: [URL] ....) - does not work, unfortunately.
As far as I can tell, the error started occurring right after Plesk was updated to v11.5.30_build115130819.13.
Found this in the cache files:
/opt/psa/var/cache# vi d3b08b981b493e5687c45518970bc225-1024-0.cache
a:3:{s:3:"url";s:108:"http://catalog.marketplace.parallels.com/all-app.atom?obsolete=hide&pageSize=999999&order=%2Bname%2C-version";s:4:"time";i:1390479055;s:7:"content";s:166:"<html>
[Code] ....
In my browser, the API catalog URL gives the same "502 Bad Gateway" error: [URL] ....
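The cache entry above shows the panel stored the catalog's 502 error page itself, so the ATOM parser is choking on cached HTML, and it will keep doing so until the upstream answers and the stale entry is gone. Two hedged checks using the URL from the cache file:
curl -sI 'http://catalog.marketplace.parallels.com/all-app.atom?obsolete=hide&pageSize=999999&order=%2Bname%2C-version' | head -1
rm -f /opt/psa/var/cache/*.cache   # once the catalog responds with 200 again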
I run CentOS 5.2 (sometimes CentOS 4.6). I have been messing around with iptables and cannot figure out how to filter zero-length packets.
I believe I might need the unclean module. I have already done hours of reading and researching but have come up with nothing; I don't think this is very common.
If anyone could let me know the commands to filter out all zero-length packets, or the unclean module I need to use with iptables, I would really appreciate it.
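For what it's worth, the unclean match was dropped from 2.6 kernels, but the length match already covers this case. A zero-payload UDP datagram is 28 bytes on the wire (20-byte IP header + 8-byte UDP header, assuming no IP options), so matching the total length catches it:
# drop UDP packets carrying no payload
iptables -A INPUT -p udp -m length --length 28 -j DROP
# use a range such as --length 0:28 if truncated packets should match too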
I'm on a VPS with Ubuntu 14.04 and Plesk 12 Web Admin Edition. I can't import a large database dump (20 MB zipped) into phpMyAdmin because there is a 2 MB file size limit. I suppose I have to change the server-wide PHP configuration (changing the PHP settings for the domain does nothing). Is there a way to change the global PHP settings via the Plesk panel?
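Domain-level settings don't help here because Plesk's bundled phpMyAdmin runs under the panel's own PHP, not the domain's. The limit lives in the panel PHP config; a sketch assuming a stock Plesk 12 layout:
# /usr/local/psa/admin/conf/php.ini
#   upload_max_filesize = 64M
#   post_max_size = 64M
service sw-engine restart   # and/or sw-cp-server, depending on the build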
This config works fine until we try to load resources from the alias over SSL; I believe something is missing in the nginx config. The logs aren't giving me any information.
I have been using Plesk on my server for a while, but this is the first time I've needed to set up large-file uploading for a client, who requires a form that accepts files larger than 128 MB (but less than 400 MB). Whenever the user tries to upload a file greater than 128 MB, I see an error in proxy_error_log that says:
2015/05/10 21:46:18 [error] 31224#0: *9 client intended to send too large body: 175420278 bytes, client: XX.XX.XX.XX, server: myserver.com , request: "POST /admin/products/1 HTTP/1.1", host: "myserver.com", referrer: "referrer"
I've been googling this issue and everything points to the nginx configuration (the PHP parameters have already been set up). I proceeded to change /etc/nginx/nginx.conf to include:
http {
...
client_max_body_size 400M;
...
}
HOWEVER (and this is where I'm stuck), after restarting the nginx service, the file /etc/nginx/plesk.conf.d/vhosts/myserver.com.conf still holds the value:
server {
...
client_max_body_size 128m;
...
}
Modifying this file to change the 128m to 400m does not make a difference.
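The files under /etc/nginx/plesk.conf.d/ are generated by Plesk and rewritten on every reconfiguration, which is why hand edits there don't stick. The supported per-domain route is the panel itself: under Websites & Domains > myserver.com > Web Server Settings (the page name varies slightly between Plesk builds), the "Additional nginx directives" box accepts plain nginx directives, e.g.:
client_max_body_size 400m;
Plesk writes that into the domain's own vhost_nginx.conf include, which survives regeneration.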
I don't know why or when this happened, but all of a sudden the PHP memory limit for all of my domains is 64M. I used to have it set to 128M. How do I change this setting so that all domains can go up to 128M again?
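A hedged pointer: domain-level values inherit the server-wide default, which on most installs comes from the system php.ini unless a service plan or per-domain setting overrides it. A minimal sketch (the php.ini path varies by distro and PHP handler):
# e.g. /etc/php.ini or /etc/php5/apache2/php.ini:
#   memory_limit = 128M
service httpd restart   # or apache2, depending on the distro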
On a fresh Debian 7 64-bit OpenVZ system we have a problem with the new Plesk 12 feature for limiting outgoing mail. We have migrated about 25 systems to Plesk, and this is the first that causes problems. If limiting outgoing mail is activated (I double-checked all possible checkboxes in Plesk), a fresh mailbox gives us the following error when trying to send via SMTP:
Aug 15 13:09:32 2d4 postfix/smtpd[8645]: connect from unknown[XX.XX.XX.XX]
Aug 15 13:09:32 2d4 postfix/smtpd[8645]: E9AF61C58851: client=unknown[XX.XX.XX.XX], sasl_method=PLAIN, sasl_username=XX@XXX.XX
Aug 15 13:09:32 2d4 greylisting filter[8651]: Starting greylisting filter...
Aug 15 13:09:32 2d4 /usr/lib/plesk-9.0/psa-pc-remote[8611]: handlers_stderr: SKIP
[code]....
After deactivating the feature, all mail is sent without any problems. We use Postfix + Dovecot.
I have a big problem with the file-upload limit (I need a large one, around 2 GB). My app was running in /var/www/vhost/default and worked perfectly. I decided to move it to /var/www/vhost/mydomain.com to manage it through the Plesk panel, and now there is an upload limit I need to raise: I can't upload files larger than 128 MB and I don't know why.
- I have checked all php.ini files (with locate php.ini) and they are all correct.
- I used the Plesk panel to set the PHP configuration -> done.
- I put the following in my .htaccess in htdocs:
php_value memory_limit 2000M
php_value upload_max_filesize 2000M
php_value post_max_size 2000M
[Code]....
I reloaded/restarted apache2, psa, ... and it still doesn't work. I have no more ideas; every conf file seems correct. It's not a permission problem, because I can upload 80 MB files but not 500 MB ones...
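That 128 MB ceiling is telling: Plesk's nginx proxy template defaults to client_max_body_size 128m, and a request body nginx rejects never reaches PHP at all, so no php.ini or .htaccess value can raise it. As in the post above, the supported fix is per domain in the panel ("Additional nginx directives" under the domain's web server settings), e.g.:
client_max_body_size 2000m;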