I have a client that is running Backup Exec. They have two scheduled jobs in the system; one of them runs and one does not.
It is skipping over the job. If you watch the timer for the backups, it counts down: 2 mins, 1 min, then jumps back to 60 mins. No errors or anything are reported in the logs.
Again, the other backup job runs perfectly fine, and a manual job runs as well.
I am running Plesk 12 on CentOS 7 64-bit. I have set up the server backup instead of the website backup, as I assume that will back up all server settings as well as the websites. I am running Plesk Web Admin Edition.
I have the backup set to run and dump into the Personal FTP repository, but it does not run. If I run the backup manually, everything works great, but the scheduled backups don't work.
Is there a way to run an additional task after the scheduled backup has completed? I want to copy the backup off-site. I can't use FTP, otherwise I'd use the built-in 'Personal FTP Repository' feature. (Think rsync, Amazon S3, Rackspace Cloud Files, etc.)
I could just create a new scheduled task, but the backup takes an unpredictable amount of time, and the tasks need to run sequentially. Also, the second task should only run if the backup succeeds.
I could disable the backups in the control panel and create a new scheduled task that does both the backup and the additional task. But then the functionality of the control panel's Backup Manager page is lost.
Workaround:
1) rpm -q --scripts psa-backup-manager | sed 1d | sh
2) /etc/init.d/crond restart
My error message with CentOS 5.8, Plesk 11.0.9 MU#4:
sh: line 2315: syntax error near unexpected token `('
sh: line 2315: `preuninstall scriptlet (using /bin/sh):'
/etc/cron.d/plesk-backup-manager is created
Why do the backup cron jobs stop working after an upgrade?
On one of our Plesk servers (Plesk 12.0.18 Update 34 on Debian 7.6), the scheduled backup stopped working. The scheduled backup is active in Backup Manager, but it is never executed.
I'm having trouble getting the global scheduled backup task to work at Home > Tools & Settings > Backup Manager > Server Repository.
At the subscription level, the scheduled backup works as it should:
Home > Subscriptions > example.com > Websites & Domains > Server Repository
The problem is as follows: when I set the scheduled task to a specific time, for example 00:00, it does not run at all. But when I just click "Create a backup", it does work...
I've followed [URL], and everything in there is configured as it should be.
This is my configuration for the scheduled task:
The cron job is placed in /etc/cron.d/plesk-backup-manager-task.
I also ran: rpm -q --scripts psa-backup-manager | sed 1d | sh
This problem only occurs with the global backup configuration.
I have an issue with my daily backup. I configured a daily backup to an external FTP server. Everything appears to work correctly, but in the panel the task sits at 100% forever and never finishes. So the next day the new task does not start, and I have to remove the stale task manually from the Backup Manager screen.
Here are the last lines of the log file (in /var/log/plesk/PMM/backup-2015-01-20-20-46-02-824):
My CentOS server running Plesk 12 runs scheduled backups every Sunday at 3 AM.
The backup is configured so that it's created as a multivolume backup with a volume size of 2047MB.
The backup is placed on my Personal FTP repo (another plesk12 server mounted with big storage).
The backup content is configured to backup server config and content (all).
The problem I have is that while the backup is running, I can see it creates the volumes and stores them locally. Only after it has sent all the volumes to the external FTP repo does it delete the local (tmp) data. See my attached screenshot for storage usage during the backup.
Is this behaviour normal? This way we can never run a backup to an external FTP repo once our server passes 50% storage usage. Wouldn't it make more sense to:
- Create a volume
- Send it to the FTP repo
- Delete the volume locally
- Repeat until done
I'm new to Parallels Panel. I use version 11.0.9. I want to back up a MySQL database daily. First of all, what is the best way to do a daily database backup in Plesk? I'm trying to do it in Scheduled Tasks using the mysqldump command, although I'm not sure that's right.
I chose the time and day first and then switched on the task. I typed the following command into the Command line field.
This created only a blank file. When I run it without gzip, nothing changes.
1. Is mysqldump the right command for database backups?
2. Should I define the full path for mysqldump, gzip, and the database? If so, how can I find out the full paths of mysqldump, gzip, and my database? I can't see their locations in the panel.
3. I can't see any error message, and there is no log file in the httpdocs folder. Where does the log file end up?
4. It may sound weird, but should the username be the database user, or should I write "root"?
I have built PHP as CGI, but now the exec function fails with every command. For example, uptime gives this error: [Thu Apr 16 10:28:37 2009] [error] [client xxx.xxx.xxx.xxx] sh: uptime: command not found
This also happens with the convert command (yes, ImageMagick is installed). Strangely enough, when I log in with the permissions of the same user, I can run the commands over SSH without any problem.
I use DirectAdmin with custombuild. How can I resolve this? Am I required to build PHP as CLI to use the exec command?
My happiness with Innohosting (as a reseller) came to a screeching halt when I found they've disabled exec(). This has sunk my plans to use Typo3 and Gallery for a website I'm creating for a client, as both use Imagemagick through exec(). Rather than reconfigure them to use gdlib (possible?) instead, I'm inclined to look for a host that allows exec().
I've asked Innohosting about applying the PHP exec_dir patch found here: [url]
and discussed here: [url]
I'm waiting for them to get back to me. I hope it's a solution as Innohosting seem great otherwise.
Failing all else, how many hosts have PHP exec() disabled? Is this common?
I have been having a lot of problems with my server lately. Today I attempted to update the container software. The operation failed with this output:
Operation update with the Env(s) "server.[site].com" is finished with errors:
Can not update packages: exec failed:
warning: /etc/issue created as /etc/issue.rpmnew
warning: /etc/issue.net created as /etc/issue.net.rpmnew
error: /etc/httpd/logs expected to be a regular file, lstat() returned 40000
error: unpacking of archive failed on file /etc/httpd/logs: cpio: rename failed - Is a directory
warning: /etc/yum.conf created as /etc/yum.conf.rpmnew
Error in Transaction: One or more rpm failed.
Error: /usr/share/vzyum/bin/yum failed, exitcode=1
I am slowly learning how to use Linux and the SSH terminal to manage my server... but this is beyond me...
(I wonder if it has anything to do with the "segmentation faults" that have been occurring.)
How do I disable these functions on a VPS with Lxadmin and CentOS 5: show_source, system, shell_exec, passthru, exec, phpinfo, popen, proc_open, base64_decode, base64_encode, proc_terminate?
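The usual mechanism is PHP's disable_functions directive in php.ini, followed by a web server restart. A config fragment (the php.ini location varies by build); note that base64_decode and base64_encode are ordinary functions many applications rely on, so disabling them can break things:

```ini
; in php.ini (location may differ per build, e.g. /etc/php.ini)
disable_functions = show_source,system,shell_exec,passthru,exec,phpinfo,popen,proc_open,base64_decode,base64_encode,proc_terminate
```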
I'm setting up a server running ZFS as a backup server. My only problem is that it's very new to me, and naturally I'm skeptical of its redundancy capabilities, etc.; I don't want to get burned. Has anyone used ZFS? What's your experience with it?
Is it possible to create a script that will automatically download an entire website via FTP, and then, once it has the entire site, on subsequent runs download only the files that are newer than the local copies?