I log in as Reseller1 (a clean login, not by switching from PPA Admin to Reseller1), create a Webspace and a Database. At that point I am able to see and create Databases on every Node, even on Reseller1 (Node1), which is set to restricted. I assume this behaviour is not intended, is it?
We are migrating Plesk servers to PPA. Is it possible to upgrade a customer to reseller? Is it possible to move a subscription to another customer or reseller?
I have noticed that resellers' traffic stats (apparently email traffic) are not being updated and just show 0.
On the resellers' server there is a file, /usr/local/psa/var/log/mail_traffic_pendings.dat, with sender and recipient domain traffic. I presume this has to be processed into the database, but obviously it is not. How can I debug this issue so that email stats get updated for clients?
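One way to start debugging, offered as a sketch only: it assumes the node has the standard Plesk statistics utility and daily cron task, and the paths and options may differ under PPA.
Code:
# Check whether the pending file keeps growing, i.e. is written but never consumed
ls -l /usr/local/psa/var/log/mail_traffic_pendings.dat

# Run the statistics calculation by hand and watch for errors on stderr
/usr/local/psa/admin/sbin/statistics

# Verify the daily statistics task is actually scheduled and has been running
grep -ri statistics /etc/cron.daily/ /var/log/cron* 2>/dev/null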
I'd like to create a user and set custom privileges using the standard mysql client, i.e. CREATE USER and GRANT privileges, but I know that Plesk sometimes overwrites my config files for other services.
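For the MySQL side itself, a minimal sketch of what I mean, assuming you connect as the admin user that Plesk/PPA created (the custom user, password and database names are placeholders):
Code:
mysql -uadmin -p`cat /etc/psa/.psa.shadow` <<'EOF'
CREATE USER 'customuser'@'localhost' IDENTIFIED BY 'strongpassword';
GRANT SELECT, INSERT, UPDATE, DELETE ON exampledb.* TO 'customuser'@'localhost';
FLUSH PRIVILEGES;
EOF
The open question is whether Plesk will later remove or overwrite users it does not know about.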
I've configured PPA management, a web server and a MySQL database server. Now I want that, when a customer adds a MySQL, FTP or mail account, the name begins with a prefix that I've defined.
Somehow, when customers add a new MySQL database, they can select a Local MySQL server. It turns out this is the MySQL instance running on the CP server. Can I somehow hide this server from the list?
How could I fix this? When trying to start the MySQL server, I receive the following errors.
Plesk interface:
Code:
Failed to restart the "mysql" service.
Cannot start/stop/restart service: Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service mysql restart
Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop mysql ; start mysql. The restart(8) utility is also available.
start: Job failed to start
from the console:
Code:
sudo service mysql start
start: Job failed to start
Code:
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

Output of plesk_11.5.30_reset_instance_data.log:
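A debugging sketch rather than a fix: on an Ubuntu/Upstart box the usual first checks are MySQL's own error log, free disk space and datadir permissions. The paths below are the typical defaults and may differ on your system.
Code:
tail -n 50 /var/log/mysql/error.log    # MySQL's error log usually names the real cause
df -h /var /tmp                        # a full partition is a common reason for "Job failed to start"
ls -ld /var/lib/mysql                  # the datadir must be owned by the mysql user
sudo start mysql                       # start via Upstart directly, as the error message suggests
dmesg | tail                           # look for OOM-killer or disk errors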
I recently had a hard drive failure and luckily I can still access certain directories on the failed drive. I can still access the /var/lib/mysql/ directory, which holds all the users' databases, and have backed all these up separately using tar.
Now what I need to know is how to restore these database files to another server. I tried simply untarring one of these into the new server's /var/lib/mysql/ directory and it broke MySQL - it went offline. I had to get a cPanel tech to bring MySQL back online.
How can I get these database files to fully work on a new server?
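One common approach, sketched with assumptions: the tables are MyISAM, or, for InnoDB, you also copied ibdata1 and the ib_logfile* files, and the new server runs a compatible MySQL version. The archive name and paths are examples.
Code:
service mysql stop                       # stop MySQL on the new server first
tar -xzf databases-backup.tar.gz -C /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql      # the restored files must be owned by the mysql user
service mysql start
mysqlcheck -uroot -p --all-databases     # check/repair the restored tables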
[LOGTEE]: Error Downloading Packages: [LOGTEE]: [LOGTEE]: libuuid-2.17.2-12.18.el6.i686: failure: Packages/libuuid-2.17.2-12.18.el6.i686.rpm from base: [Errno 256] No more mirrors to try.
On a clean install of CentOS 6.6 (Final) I did the following:
1. Updated the /etc/hosts file to point my IP to the hostname.
2. Opened the ports in the iptables file.
3. Ran the ppa_installer per the instructions on [URL] ....
The ppa_installer log says it installed successfully. However, the following occurs:
1. Cannot browse to the url:8443, or any of the other variations (8080, 8880, https/http)
2. Yum installer is now broken (I replicated this twice). Yum will not run at all with the following error:
There was a problem importing one of the Python modules required to run yum. The error leading to this problem was:
/usr/lib64/libcurl.so.4: file too short
Please install a package which provides this module, or verify that the module is installed correctly. It's possible that the above module doesn't match the current version of Python, which is: 2.6.6 (r266:84292, Jan 22 2014, 09:42:36) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]
I am now going to try CentOS 6.4 and will report back.
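For anyone hitting the same thing, a recovery sketch rather than a confirmed fix: yum fails because python-pycurl cannot load the truncated /usr/lib64/libcurl.so.4, so one option is to re-fetch the libcurl package with wget (which does not link against libcurl) and reinstall it with rpm directly. The mirror URL and exact package version below are placeholders.
Code:
cd /tmp
wget http://mirror.centos.org/centos/6/os/x86_64/Packages/libcurl-7.19.7-46.el6.x86_64.rpm
rpm -Uvh --force libcurl-*.rpm
yum clean all && yum check-update    # confirm yum is usable again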
I've recently been asked to do a lot of data extraction from a state that has about 20 databases, each with between 10 and 100 tables. I often find myself diagramming things out on paper to try to visualize how everything works together.
I wondered if there's a tool that would "draw" the tables, columns, and relationships? I hesitate to say this, but almost like MS Access does it, except one that runs on Linux and works with MySQL. Is there such a thing? I know about phpMyAdmin and MySQL's Query Browser, but they're not what I'm looking for.
I am running a dedicated server with Debian, and I installed a community software package that has a lot of MySQL entries, many of which need to be changed to fit my needs.
However, it is very hard to know exactly where each value I need to change is stored. Is there a way to search all database tables for a specific value?
For example, one thing that is stored in the database is the site's title displayed in the browser's title bar. The software does not give me the option to change it, so I have to find where it is located in the database and change it myself, but it would be extremely time-consuming to check all tables one by one for any occurrences of the current title.
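A quick-and-dirty sketch of one way to do it: dump the whole database as plain SQL and grep it. The database name, credentials and search string below are placeholders.
Code:
mysqldump -u myuser -p --skip-extended-insert communitydb > /tmp/communitydb.sql
grep -n "Current Site Title" /tmp/communitydb.sql
# --skip-extended-insert writes one row per INSERT line, so each grep hit
# shows which table (and roughly which row) holds the value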
My HD was failing. So ServerBeach quickly set me up with a fresh box with the dying drive installed as a secondary drive. (nice job guys)
I'm able to mount the secondary drive and browse/copy the files... but when I get to the most important files of all, the database binaries, my /var/lib dir looks like this:
Code:
[root@rosemary v2]# ls -l /mnt/dying/var/lib/
total 68
...
-rw-r--r-- 1 root root 2171 Sep 14 03:02 logrotate.status
drwxrwsr-x 6 root 41   4096 Oct 12  2005 mailman
drwxr-xr-x 2 root root 4096 Oct 12  2005 misc
?--------- ? ?    ?       ?            ? mysql
drwxr-xr-x 4 root root 4096 Oct 12  2005 nfs
drwxr-xr-x 2 ntp  ntp  4096 Sep 14 10:31 ntp
...

And if I try to cd into the dir, I get this:
Code:
[root@rosemary v2]# cd /mnt/dying/var/lib/mysql
-bash: cd: /mnt/dying/var/lib/mysql: Input/output error

I really *really* need this data!
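The usual salvage route when a directory returns I/O errors is to image the failing partition with GNU ddrescue and then work on the image instead of the disk. This is a sketch only; the device name, target paths and partition layout are assumptions.
Code:
ddrescue -f -n /dev/sdb1 /mnt/space/dying.img /mnt/space/dying.map    # fast pass, skip bad areas
ddrescue -f -r3 /dev/sdb1 /mnt/space/dying.img /mnt/space/dying.map   # retry the bad areas three times
mount -o loop,ro /mnt/space/dying.img /mnt/recovered
ls /mnt/recovered/var/lib/mysql       # with luck, the directory reads cleanly from the image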
I host with hostgator and I was wondering if there are any software programs or services, or even something in my cpanel that can automatically grab my MySQL databases and everything on my server and make a backup on my personal PC?
I know I can do this manually, but I would like something that automatically does it once a week or so to ensure my clients' data is always backed up.
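If your plan allows cron jobs and SSH/SFTP access, one sketch is a weekly dump on the server plus a scheduled pull from your PC. The usernames, hostnames, passwords and paths below are placeholders.
Code:
# On the server, a weekly cron entry that dumps your databases:
#   0 3 * * 0 mysqldump -u cpaneluser -p'dbpassword' --all-databases | gzip > $HOME/backups/weekly-$(date +\%F).sql.gz
# On your PC, a scheduled task (cron on Linux/macOS, Task Scheduler on Windows) that pulls them down:
scp -r cpaneluser@example.com:backups/ ~/local-mysql-backups/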
I have a corrupted VPS, and have some mysql databases on it.
I want to backup databases and restore them on the new server.
The cPanel service is down on the VPS and I cannot transfer accounts.
Is it possible to do the following?
1- Zip or tar the username folder in the /home dir.
2- Zip or tar the database folder in /var/lib/mysql.
Either by SSH or the file manager through HyperVM.
Then transfer the files somewhere safe...
When the VPS is rebuilt, restore the archives and databases to the folders I backed up before the rebuild (a rough command sketch is below).
There are also some accounts on a terminated VPS for which only an image of the VPS is available, so I can only tar the files using a jailed shell account, move them to the new VPS and untar them.
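A rough sketch of that plan, with assumptions: the databases are MyISAM (so the raw files under /var/lib/mysql can simply be copied back), or for InnoDB the ibdata1 and ib_logfile* files are copied along with them, and both VPSes run a compatible MySQL version. The names and paths are examples.
Code:
# On the broken VPS (or through the jailed shell):
tar -czf /tmp/username-home.tar.gz -C /home username
tar -czf /tmp/username-dbs.tar.gz -C /var/lib/mysql dbname1 dbname2
scp /tmp/username-*.tar.gz user@safe-host:/backups/

# After the rebuild, on the new VPS:
tar -xzf username-home.tar.gz -C /home
service mysql stop
tar -xzf username-dbs.tar.gz -C /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
service mysql start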
My site is database driven and runs on around 15 MySQL databases. I want to download a local copy of these databases daily; however, if I try to back them up via cPanel (on a WHM VPS) it gives me blank files. Each database is around 55 MB and growing. I can back them up one by one via phpMyAdmin, it just takes around 5 minutes per database, meaning around 40 minutes per night.
Is there any solution for having a script dump them automatically so I can download them via FTP? I've tried [url].htm but it gives me blank files.
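One sketch of a script that would do it without cPanel: dump every database the account can see, one file each, so an FTP client can fetch them. The credentials and output directory are placeholders; run it nightly from cron.
Code:
BACKUPDIR=/home/youruser/db-backups
mkdir -p "$BACKUPDIR"
for db in $(mysql -u youruser -p'yourpassword' -N -e 'SHOW DATABASES' | grep -v information_schema)
do
    mysqldump -u youruser -p'yourpassword' "$db" | gzip > "$BACKUPDIR/$db-$(date +%F).sql.gz"
done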
Here is how to back up MySQL databases by cron and have the backups sent to you by email, or uploaded by FTP. It is based on a script I found on another website, and I can confirm it is fully working.
Change the commented variables in the following file and save it as backup.sh:
Code:
#!/bin/sh

# This script will backup one or more mySQL databases
# and then optionally email them and/or FTP them

# This script will create a different backup file for each database by day of the week
# i.e. 1-dbname1.sql.gz for database=dbname1 on Monday (day=1)
# This is a trick so that you never have more than 7 days worth of backups on your FTP server.
# As the weeks rotate, the files from the same day of the prev week are overwritten.
# Example cron line: /bin/sh /home/user/directory/scriptname.sh > /dev/null

############################################################
#===> site-specific variables - customize for your site

# List all of the MySQL databases that you want to backup in here,
# each separated by a space
# If not run by root, only one db per script instance
databases="mydbname"

# Directory where you want the backup files to be placed
backupdir=/home/mydomain/backups

# MySQL dump command, use the full path name here
mysqldumpcmd=/usr/bin/mysqldump

# MySQL Username and password
userpassword=" --user=myuser --password=mypasswd"

# MySQL dump options
dumpoptions=" --quick --add-drop-table --add-locks --extended-insert --lock-tables"

# Unix commands used further down (full paths, adjust for your system)
gzip=/bin/gzip
uuencode=/usr/bin/uuencode
mail=/bin/mail

# Send Backup? Would you like the backup emailed to you?
# Set to "y" if you do
sendbackup="n"
subject="mySQL Backup"
mailto="me@mydomain.com"

#===> site-specific variables for FTP
ftpbackup="y"
ftpserver="myftpserver.com"
ftpuser="myftpuser"
ftppasswd="myftppasswd"
# If you are keeping the backups in a subdir of your FTP root
ftpdir="forums"

#===> END site-specific variables - customize for your site
############################################################

# Get the Day of the Week (0-6)
# This allows us to save one backup for each day of the week
# Just alter the date command if you want to use a timestamp
DOW=`date +%w`

# Create our backup directory if not already there
mkdir -p ${backupdir}
if [ ! -d ${backupdir} ]
then
   echo "Not a directory: ${backupdir}"
   exit 1
fi

# Dump all of our databases
echo "Dumping MySQL Databases"
for database in $databases
do
   $mysqldumpcmd $userpassword $dumpoptions $database > ${backupdir}/${DOW}-${database}.sql
done

# Compress all of our backup files
echo "Compressing Dump Files"
for database in $databases
do
   rm -f ${backupdir}/${DOW}-${database}.sql.gz
   $gzip ${backupdir}/${DOW}-${database}.sql
done

# Send the backups via email
if [ $sendbackup = "y" ]
then
   for database in $databases
   do
      $uuencode ${backupdir}/${DOW}-${database}.sql.gz ${DOW}-${database}.sql.gz > ${backupdir}/${DOW}-${database}.sql.gz.uu
      $mail -s "$subject : $database" $mailto < ${backupdir}/${DOW}-${database}.sql.gz.uu
   done
fi

# FTP it to the off-site server
echo "FTP file to $ftpserver FTP server"
if [ $ftpbackup = "y" ]
then
   for database in $databases
   do
      echo "==> ${backupdir}/${DOW}-${database}.sql.gz"
ftp -n $ftpserver <<EOF
user $ftpuser $ftppasswd
bin
prompt
cd $ftpdir
lcd ${backupdir}
put ${DOW}-${database}.sql.gz
quit
EOF
   done
fi

# And we're done
ls -l ${backupdir}
echo "Dump Complete!"
exit

Upload backup.sh to your server, to any directory you want. A directory which is not web-accessible will stop your login information being seen by just anyone.
You should chmod the file to 777:
Code:
chmod 777 backup.sh

If you uploaded this file from a Windows machine you will need to convert the file to Unix format. You should run the following command by SSH in the appropriate directory:
Code:
dos2unix backup.sh

If you don't have dos2unix installed, you can install it using yum if you have that:
Code:
yum install dos2unix

If you don't have yum, get it here.
You may want to test the script at this point to make sure it's doing what you want it to. Change to the appropriate directory and run this command:
Code:
./backup.sh

Once you're happy with it, enter it into the crontab to run daily (or whenever you want). Cron jobs vary a lot depending on the configuration of your system, so check Google for how to do it on your system. The command you will need to run by cron is the one shown in the script's own comment:
Code:
/bin/sh /home/user/directory/backup.sh > /dev/null
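As an illustration only (not part of the original guide), a crontab entry, edited in with crontab -e, that runs the script every night at 02:30 using that example path:
Code:
30 2 * * * /bin/sh /home/user/directory/backup.sh > /dev/null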
How can I check (using SSH) which databases/users cause MySQL server load?
I've tried "mysqladmin proc stat" but it only shows the current state. How can I get stats for the last 24 hours, for example?
I've also seen slow query stats. What is the command to get a more detailed report of the slow queries (which databases caused them, etc.), again for the last 24 hours?
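There is no built-in per-database history, so the usual approach is to enable the slow query log and summarise it afterwards. A sketch, assuming MySQL 5.1 or later; the paths and thresholds are examples, and mysqldumpslow ships with the MySQL client tools.
Code:
# In /etc/my.cnf under [mysqld]:
#   slow_query_log      = 1
#   slow_query_log_file = /var/log/mysql-slow.log
#   long_query_time     = 2
# Then restart MySQL and, after a day of logging, summarise the worst offenders:
mysqldumpslow -s t -t 20 /var/log/mysql-slow.log
# For a live view of what is running right now:
mysqladmin -uadmin -p processlist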
Since I have never worked on the server end of things I had a quick question for all you web hosting gurus.
Is it possible to have PHP installed on ONE single server and still have the ability for the server to work with both MS Access AND MySQL at the same time?
I would think YES, but I am being told by our server branch at my current job that this is not the case. They claim there is no way for the server on one machine to be able to handle both types of databases. Are they right?
If they are wrong and it is possible for one server to run both types of databases, what steps would be necessary to set it up to handle both? Do we need to tweak the php.ini file, or is there another method of allowing the server to work with both MySQL and MS Access?
Sorry if this question seems stupid or odd; as I said, I have minimal experience on the server end, but I am confident that a server can handle both.
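For what it's worth, one PHP install can load a MySQL extension and an ODBC extension side by side. A sketch of how you might check and enable this on a typical RPM-based Linux box; the package and driver names are assumptions and vary by distro and PHP version.
Code:
php -m | grep -Ei 'mysql|odbc'                      # list the database extensions PHP already has
yum install php-mysql php-odbc unixODBC mdbtools    # MySQL support plus an ODBC route to .mdb files
service httpd restart
# MySQL is reached through the normal mysql/mysqli extension and MS Access files
# through an ODBC driver; the two coexist in one PHP install, so beyond enabling
# both extensions there is nothing special to change in php.ini.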