Plesk 12.x / Linux :: Download Large Backup
Aug 23, 2014

I have backed up my domain with Backup Manager and it created a 47 GB backup. If I try to download it, I receive a 504 error. How can I download this backup?
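One way to sidestep the panel (and its gateway timeout) for a file this large is to copy the dump straight off the server's filesystem over SSH; Plesk keeps server-repository dumps under /var/lib/psa/dumps, though the exact subdirectory varies per client and domain. A sketch with placeholder names:

Code:
scp root@server.example.com:"/var/lib/psa/dumps/clients/*/domains/example.com/*.tar" /local/backups/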
I have multiple backups stored under the server repository (Subscriptions --> <domainname> --> Websites & Domains --> Backup Manager).
The physical files are located at: /var/lib/psa/dumps/clients/904279/domains/<domainname>/
When I click the green arrow to download these files to a local computer (see attached image), I get a new page titled "Download the backup file". On this page I have the option to set a password on the downloaded file, but no matter what I do (password or no password), the file is not downloaded to my local PC. I don't get a pop-up box with the option to save the file; just nothing happens...
I have a 6 GB backup file created with another Plesk Backup Manager. Now I am trying to upload this backup file to my Plesk Backup Manager, but after uploading 3% I get a "413 Request Entity Too Large" error. I tried disabling nginx but I still get this error.
How can I resolve this error, or is there any other way to upload my file to Backup Manager?
I see that Backup Manager has a file size restriction of 2 GB; how can I increase this?
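One workaround worth trying (this is an assumption, not a documented Backup Manager feature, and the host and file names are placeholders) is to skip the web upload entirely and copy the file into the server repository over SSH, since /var/lib/psa/dumps is where Plesk keeps server-repository dumps:

Code:
scp mybackup.tar root@target-server.example.com:/var/lib/psa/dumps/

Whether Backup Manager then lists the file depends on its metadata; if it does not appear, Plesk also ships command-line backup/restore utilities that can operate on dump files directly.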
I created a backup using the Backup Manager in the Plesk panel (~4 GB). When I click the green arrow on the right to download, I get transferred to another page that suggests I should enter a password to protect my backup. Whether I enter a password or skip this security option, I get back to the Backup Manager overview, but the download doesn't start. There is no error message; just nothing happens. I also searched /opt/psa/admin/logs for an error log, but there is none.
From another thread I read that the owner of /var/lib/psa as well as /var/lib/psa/dumps (and its files and subfolders) should be psaadm; on my server the owner is root for all of those. But that thread was for Plesk 8.
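If ownership really is the culprit, the fix suggested in that old thread would translate to the following (an assumption carried over from Plesk 8; compare against the ownership on a healthy Plesk 12 installation before applying it):

Code:
chown -R psaadm:psaadm /var/lib/psa/dumps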
Can't download a full Plesk backup.
When I try to download a backup from Parallels Plesk to my local machine, it takes about 3 minutes and then times out.
OS: CentOS 6.6 (Final)
Plesk 12.0.18
Backup size: 1.82 GB
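If the panel download keeps timing out, one workaround (paths and host names below are examples) is to pull the dump straight off the filesystem; rsync's -P option (--partial --progress) can resume an interrupted transfer, which helps at this size:

Code:
rsync -avP root@server.example.com:/var/lib/psa/dumps/ /local/backups/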
I'm currently running on a VPS. My site allows for large file uploads and downloads, with files over 600 MB in size.
The server has issues when the site gets three or more requests for large file downloads. I'm trying to grow this site to thousands of users, and that is hard to do when the site can't even handle three.
I've been told by my host that I need to upgrade to a dedicated server. My VPS only has 512 MB of RAM, and a single large file download eats up that RAM. This is causing the issue.
I'm a newbie, and while I knew I was taking a risk by going with a VPS, I find it a bit annoying that these guys advertise 1 TB of bandwidth per month but I can't even support 1 GB of concurrent downloads... maybe it's just me...
Anyway, I am now looking into moving the large files and the upload/download over to Amazon S3. If I do this, I expect my RAM usage on the VPS to decrease greatly. Is this correct? If my PHP code is running on the VPS, but the actual file download via HTTP is coming from S3, that should not be a heavy load on my box, correct?
Any opinions on S3?
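That expectation is right: if the download URL points at S3, the bytes never pass through the VPS, so its RAM and bandwidth stay out of the transfer. For private files, the usual pattern is for the application to hand out a short-lived pre-signed URL. A minimal sketch using the AWS CLI (bucket and key names are placeholders):

Code:
aws s3 presign s3://example-bucket/files/bigfile.zip --expires-in 3600

The PHP application would do the equivalent through the AWS SDK and redirect the user to the generated URL; S3 then serves the file directly.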
I have been trying, quite unsuccessfully, to import a large SQL db file via phpMyAdmin for one of my clients. Since the db file is about 250 MB, I get a server timeout error. How can I do this via SSH? I have a CentOS 6.5 server, 64-bit, that runs Plesk 12.0.18.
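Importing over SSH avoids the web timeouts entirely; the standard mysql client reads the dump from stdin (user, database and path below are placeholders):

Code:
mysql -u dbuser -p dbname < /path/to/dump.sql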
I am running Debian 7.5, and after a recent re-install of Plesk and the OS I am unable to download any applications. As soon as I run the installation of WordPress or any other application, the download bar just sits at zero.
View 3 Replies View RelatedI'm on a VPS with Ubuntu 14.04 and Plesk 12 Web Admin Edition. I can't import a large (20 MB zipped) database dump to phpmyadmin because there is a 2MB file size limit. I suppose I have to change the server-wide PHP configuration (if I change the PHP settings for the domain nothing happens). Is there a way to change the global PHP settings via the Plesk panel?
I have a client with a download file of 146 MB. Download speed is pretty slow, and it times out at around 70 MB.
Is there any way I can download the Plesk admin manuals for offline use?
I have been using Plesk for a while on my server, but this is the first time I have needed to set up large file uploads for a client, who requires uploading files larger than 128 MB (but less than 400 MB) via a form. The issue I've been seeing is that whenever the user tries to upload a file greater than 128 MB, I see an error in proxy_error_log that says:
2015/05/10 21:46:18 [error] 31224#0: *9 client intended to send too large body: 175420278 bytes, client: XX.XX.XX.XX, server: myserver.com , request: "POST /admin/products/1 HTTP/1.1", host: "myserver.com", referrer: "referrer"
I've been googling this issue and everything points to the nginx configuration (the PHP parameters have already been set up). I proceeded to change the configuration in /etc/nginx/nginx.conf to include:
http {
...
client_max_body_size 400M;
...
}
HOWEVER (and this is where I'm stuck), after restarting the nginx service, the file /etc/nginx/plesk.conf.d/vhosts/myserver.com.conf still holds the value:
server {
...
client_max_body_size 128m;
...
}
Modifying this file to change the 128m to 400m makes no difference.
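That is expected: Plesk generates the files under /etc/nginx/plesk.conf.d/ from its own templates and overwrites manual edits. The supported route in Plesk 12 (menu names vary slightly between versions) is per domain, under Websites & Domains > Apache & nginx Settings > Additional nginx directives, where you would add:

Code:
client_max_body_size 400m;

Plesk then merges this into the generated vhost configuration, so it survives regeneration.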
The domain has PHP settings in Plesk set to 2G, and I still get this error when uploading a 48 MB file using WordPress. I assume I need to modify this manually in a conf file somewhere to allow uploading large files?
Requested content-length of 48443338 is larger than the configured limit of 10240000..
mod_fcgid: error reading data, FastCGI server closed connection...
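The 10240000-byte limit in that first line does not come from PHP at all; it matches mod_fcgid's FcgidMaxRequestLen, which caps the request body before PHP ever sees it. A hedged sketch of the fix (the file path differs per distribution, e.g. /etc/httpd/conf.d/fcgid.conf on CentOS or /etc/apache2/mods-available/fcgid.conf on Debian/Ubuntu):

Code:
# allow request bodies up to 256 MB through mod_fcgid (value is an example)
FcgidMaxRequestLen 268435456

Reload Apache afterwards for the change to take effect.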
I was trying to update my Plesk Panel installation and got the following error:
It seems the auto-updater can't download the files necessary for this operation.
OS: CentOS 5.7
Panel version: 10.3.1
Trying to update to version: 12.0.18
ERROR LOG
Code:
Installation started in background
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* atomic: www7.atomicorp.com
* epel: mirror.23media.de
* openvz-kernel-rhel5: mirror.fastvps.ru
[Code] .....
This applies to both the Horde and Roundcube webmail clients.
Using Plesk 11.5.30 with Horde 5.1.5 or Roundcube 0.9.5 on CentOS Linux release 6.5 (Final).
We have seen this behavior occur on multiple servers.
Clients experience slow to no response after executing a search, which eventually results in a "failed to communicate with the server" error in the webmail client.
The Apache server log shows script time-out errors when searching larger mailboxes (i.e. larger than 950 MB); this does not happen on smaller mailboxes.
We have seen errors like the following in the Apache error log with Horde (personal data like IP addresses and domain names are x'ed out):
[Thu Jun 19 14:55:06 2014] [warn] [client xx.xxx.xxx.xxx] mod_fcgid: read data timeout in 45 seconds, referer: http://webmail.xxxxxxx.com/imp/dynamic.php?page=mailbox
[Thu Jun 19 14:55:06 2014] [error] [client xx.xxx.xxx.xxx] Premature end of script headers: ajax.php, referer: http://webmail.xxxxxxx.com/imp/dynamic.php?page=mailbox
And with Roundcube:
[Tue Jun 17 13:02:04 2014] [warn] [client xx.xxx.xxx.xxx] mod_fcgid: read data timeout in 45 seconds, referer: https://webmail.xxxxxxxxxxx.com/?_t...d=19445&_mbox=INBOX&_caps=pdf=0,flash=1,tif=0
[Tue Jun 17 13:02:04 2014] [error] [client xx.xxx.xxx.xxx] Premature end of script headers: index.php, referer: https://webmail.xxxxxxxxxxx.com/?_t...d=19445&_mbox=INBOX&_caps=pdf=0,flash=1,tif=0
Steps to reproduce:
- use a large mailbox (950 MB or larger)
- log in to the webmail client (Horde or Roundcube)
- do a search using the search field at the top right
- the time-out error should appear in the Apache error log (after at least 45 seconds)
This looks like an inefficiency or a bug in the query that searches the user's mailbox. Is there any other way we can prevent this issue and the error messages?
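The "read data timeout in 45 seconds" lines point at mod_fcgid's FcgidIOTimeout: Apache gives up on the webmail script before the search of a large mailbox finishes. Raising it does not make the search faster, but it should stop the premature termination. A sketch (the placement, in whichever Apache config covers the webmail vhost, and the value itself are assumptions to adapt):

Code:
# give long-running webmail searches more time before mod_fcgid kills them
FcgidIOTimeout 300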
I have 2 problems:
Firstly, is there any possibility to limit the number of cores the Plesk backup compression tool uses? This pigz takes up all my CPU. Is there any way I can reduce the number of cores it uses? All my websites are down for around 3 minutes every time a backup takes place (see the sketch after this post).
Secondly, I get the following in my syslog:
1 baby plesk sendmail[20189]: Error during 'check-quota' handler
I don't know what is wrong. I think it started after the upgrade to Plesk 12. I now have 12.0.18 Update #13.
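For the pigz question: pigz accepts -p to cap the number of worker threads. One commonly suggested workaround (this assumes Plesk simply invokes the system pigz binary, and a later Plesk update may restore it) is to move the real binary aside and install a small wrapper:

Code:
mv /usr/bin/pigz /usr/bin/pigz.bin
printf '#!/bin/sh\nexec /usr/bin/pigz.bin -p 2 "$@"\n' > /usr/bin/pigz
chmod 755 /usr/bin/pigz

Running the backup under nice/ionice is a gentler alternative if you only want to lower its priority rather than its parallelism.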
I have an Ubuntu 14.04 LTS 64-bit virtual private server with Plesk 12. The server is hired from a hosting provider and is used to run the Odoo ERP application (using a postgres database).
The Odoo application is running fine, and now I want to create a backup of the application using Plesk's Backup Manager.
I choose the configuration and content option in Backup Manager, but the created backup is only 200 KB.
I think the problem is that the location where the Odoo application is installed is not included in the backup. I made a tar backup of the server and extracted it on my PC. It seems that the main parts of the Odoo application are in the /var, /opt, /etc and /usr directories (not in a domain, but under root).
Installing the application in a domain would solve the Plesk backup issue, I think, but the installation script of Odoo puts Odoo in the /var, /opt, /etc and /usr directories even if I run the install script from the directory of a created domain. Since a manual Odoo installation is complicated, I am very happy to use the script.
My questions are:
1. Is it possible to include the directories /var, /opt, /etc and /usr in the Plesk backup, and how and where do I do that?
2. Can I restore such a backup in Plesk without problems?
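Plesk's Backup Manager only covers objects Plesk manages (domains, mailboxes, Plesk-registered databases), which is why the archive comes out at 200 KB; it has no option to pull in arbitrary system directories. A separate job alongside the Plesk backup is the usual answer. A minimal sketch (the Odoo paths and database name are assumptions based on where the installer typically puts things):

Code:
# nightly cron job: archive the application files and dump the postgres database
tar czf /var/backups/odoo-$(date +%F).tar.gz /opt/odoo /etc/odoo /var/lib/odoo
sudo -u postgres pg_dump odoo | gzip > /var/backups/odoo-db-$(date +%F).sql.gz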
I have a website that is approx 50 GB that I would essentially like to take offline for a while. Obviously, while the site is offline, I don't want to be paying for my server.
Can anyone let me know of some options to "store" this mammoth of a site? Downloading it locally is not an option, so it needs to stay in the cloud for the lowest possible price.
I have a large database (800 MB), and the export with the zip or gzip option does not work on a database of this size.
It works fine on tables of up to 30 MB. Does phpMyAdmin not support exporting/backing up large tables using gzip or zip?
If I should be able to use the zip options, what should I look for to put this problem right?
If large tables are not supported, what could I use instead to back up the database tables with compression (gzip)?
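For a database this size, mysqldump from the shell is the usual replacement for phpMyAdmin's export, with the output piped through gzip (user and database names are placeholders):

Code:
mysqldump -u dbuser -p dbname | gzip > dbname.sql.gz
# restore later with:
gunzip < dbname.sql.gz | mysql -u dbuser -p dbname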
My site was suspended by my hosting company.
I asked them to enable SSH, and they enabled it, and enabled FTP as well.
But when I try to do Generate/Download a Full Backup using cPanel, it tells me I can't because of too little space: my total space is 5 GB.
I then tried to zip all the files in the home directory, and that worked and captured all the files.
Can I do Generate/Download a Full Backup using SSH and FTP?
The hosting company told me they do not offer users a way to transfer a suspended site.
How can I solve this?
The site belongs to one of my friends and I am trying to help him.
I have bought him a VPS, and we are waiting for any SSH commands that would help us Generate/Download a Full Backup.
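Without root access, cPanel's own full-backup scripts are out of reach, but a manual equivalent over SSH works, and streaming the archive straight to the new VPS avoids the disk-space problem entirely because nothing is stored locally. A sketch (user names, host names and the database name are placeholders):

Code:
# stream the site archive directly to the new VPS without using local disk
tar czf - ~/public_html | ssh root@new-vps "cat > /root/site-backup.tar.gz"
# dump the database the same way
mysqldump -u dbuser -p dbname | ssh root@new-vps "cat > /root/db-backup.sql"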
I just want to know: is it safe to do a remote daily backup of about 70,000 files?
File sizes are about 200 KB, and every day I have about 1,000 new files. rsync should first check the old files, because I delete about 30-50 of them daily, and then back up the 1,000 new files.
So how long will it take each time to compare those 70,000 files?
I have 2 options now:
1. using a second HDD and RAID 1
2. using rsync and backing up to my second server, so I can save about $70 each month
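rsync's comparison pass reads only file metadata (size and modification time) by default, so checking 70,000 entries typically takes seconds to a couple of minutes, not hours. A minimal sketch of the daily job (paths and host are placeholders); --delete mirrors your daily deletions to the backup side:

Code:
rsync -a --delete /var/www/files/ backupuser@backup-server:/backups/files/

Note that RAID 1 and a remote backup solve different problems: RAID only protects against disk failure, not against accidental deletion, so the rsync option is the one that is actually a backup.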
My backups haven't been working for days now. I have noticed that in the current backup status there is one backup stuck at 100% for several days, and it is preventing new scheduled backups from running.
There is no way to delete that backup, so how can I fix this without rebooting the whole server?
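If the stuck task still has a live process behind it (an assumption worth checking first), killing that process rather than rebooting usually releases the queue. Plesk's backup work is done by helper processes such as plesk_agent_manager and the pmm utilities:

Code:
# find the backup-related processes, then kill the stuck one by its PID
ps aux | egrep 'pmm|plesk_agent_manager'
kill <PID>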
I am trying to run backups to an off-site location; however, I have noticed that even if I try on the server side, it will only back up 2 GB. When I check the backup, the file structure is there, but there aren't any files in the backups.
I am having issues with the Plesk Backup Manager. I set up my personal FTP repository under Tools & Settings > Backup Manager. My FTP server is secured and uses a specific port. Here are my options:
FTP server hostname or IP: myipaddress:port. Directory for backup files storage: complete (I also tried /complete). Username and password are both correct.
Use passive mode checked
Use FTPS checked
Use password protection checked
Here is my scenario:
Personal FTP repository backups are set up with:
Create a multivolume backup
Volume size: 2047
Store backup in: Personal FTP Repository
max number of backups: 1
For some reason, when I manually backed up the server to the server repository, my free storage went from 14 GB to 6 GB. I have zero backups in both the server repository and the personal FTP repository because I deleted them. The free space is now stuck at 6 GB, which suggests the backup is stored somewhere in the system and was not deleted. Where can I find this backup? I am using CentOS.
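The server repository lives on disk under /var/lib/psa/dumps (the path mentioned earlier in this thread), and deleted backups sometimes leave orphaned dump files there. Checking the size of each entry shows whether the missing 8 GB is sitting in leftovers:

Code:
du -sh /var/lib/psa/dumps/*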
I upgraded Plesk 11.5 to the latest version 12.0.08 #5 and now I cannot configure the external FTP backup. I get the error: "Transport error: unable to list directory: Curl error: Timeout was reached." The failing backup also shows this error: "Unable to rotate dump: The dump rotation failed with code '126' at /opt/psa/admin/bin/plesk_agent_manager line 1041."
And this one: "Cannot upload file 'domains/domain.tld/backup_domain.tld_info_1406210716.xml' to ftp. Error code: 1."
My OS is Debian 6.0.8.