I am starting this thread to ask for other people's advice and to warn them about my bad experience with rsync. Luckily, I was able to get my data back from the old drive.
Three times a day, I take a mysqldump and then rsync that dump to a drive located in a different state.
Everything was working fine. The rsync was transferring data daily and updating the backup on the other server. A few days ago, there was a hard drive failure on my server, and when I checked my backup drive for the mysql dump, it was 764 bytes instead of 5 GB...
Then I went to my other server where I rsync, and to my surprise that copy was also 764 bytes instead of 5 GB, since rsync had synced the truncated dump to both locations.
My backup strategy failed, and I would have been in tears if I couldn't have grabbed the data from the failed drive.
I would like to hear everyone's views on this and learn from it.
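The failure mode described above is that rsync faithfully mirrored the truncated dump over the only good copy. One way to avoid it is to keep dated dumps instead of overwriting a single file, and to sanity-check the dump size before shipping it. A minimal sketch, assuming GNU coreutils; the database name, size threshold, and all paths/hosts are placeholders:

```shell
#!/bin/sh
# Dated backup: each dump gets its own filename, so a bad dump
# can never overwrite a good one.
STAMP=$(date +%F_%H%M)                  # e.g. 2007-05-12_0300
DUMP=/backups/mysql/db-$STAMP.sql.gz

mysqldump --single-transaction mydb | gzip > "$DUMP"

# Sanity check: refuse to ship a dump that is suspiciously small.
# A healthy dump here is ~5 GB; a 764-byte file would fail this test.
if [ "$(stat -c%s "$DUMP")" -lt 1000000 ]; then
    echo "Dump $DUMP is too small, not syncing" >&2
    exit 1
fi

# Ship the dump directory; old dated files on the remote side
# are never touched because nothing rewrites their names.
rsync -av /backups/mysql/ backupuser@remote:/backups/mysql/

# Prune local dumps older than 14 days.
find /backups/mysql -name 'db-*.sql.gz' -mtime +14 -delete
```

With this layout, a corrupted dump only costs you one backup cycle instead of the entire history.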
I have 100+ sites on this hard drive, and one site in particular that meant the world to me.
My host sent the drive to Gillware first, but they failed, saying that the file system was so severely damaged that they could not recover anything.
Then shortly after, my host sent it to DriveSavers, a very well-known company, but they also FAILED.
I'm extremely depressed because of this. Please don't post if you're going to say "make sure you do backups next time" because I've heard it 504329504395 times now, and while I do realize my mistake, saying that does NOT help me.
I am willing to spend A LOT to get my sites back. I still have hope. Are there any other companies out there BETTER than DriveSavers? Assuming you'd still have hope even after two companies failed, where would you go or what would you do?
The host is going to mount this HD on the same machine after adding a new hard drive and doing a fresh install... Does anyone have any recommendations for how I can go about recovering the data, specifically the mysql databases?
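Once the old drive is mounted alongside the fresh install, the usual advice is to mount it read-only and image it before anything else, then work only on copies. A sketch, assuming the old drive appears as /dev/sdb1 and the distro's MySQL datadir is /var/lib/mysql (both are assumptions; adjust to the actual device and layout):

```shell
# Mount the old drive read-only so nothing on it can be made worse.
mkdir -p /mnt/olddrive
mount -o ro /dev/sdb1 /mnt/olddrive

# Take a raw image first, in case the filesystem deteriorates further;
# conv=noerror,sync keeps going past unreadable sectors.
dd if=/dev/sdb1 of=/backups/olddrive.img bs=1M conv=noerror,sync

# Copy the old MySQL datadir onto the new install and fix ownership.
cp -a /mnt/olddrive/var/lib/mysql /var/lib/mysql-recovered
chown -R mysql:mysql /var/lib/mysql-recovered
```

If MySQL then refuses to start on the recovered datadir, setting `innodb_force_recovery = 1` in my.cnf (raising it cautiously toward 6) can sometimes get the server up long enough to mysqldump the damaged tables.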
A Windows Server 2003 application crashed on our RAID 5 server. We tried to take the NTFS filesystem from the hard drive and mount it under Knoppix booted from a CD-ROM. Knoppix could see the partition but was unable to mount it, I guess for compatibility reasons.
Is there any way we can read that NTFS filesystem and restore our data?
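Linux has long had mature read-only NTFS support, and newer live CDs also ship the ntfs-3g driver, so a read-only mount is usually enough to copy files off. A sketch, assuming the Windows partition is /dev/sda1 and /mnt/rescue is a disk with free space (both assumptions):

```shell
# Try a read-only mount first; try the in-kernel driver, then ntfs-3g.
mkdir -p /mnt/win
mount -t ntfs -o ro /dev/sda1 /mnt/win || \
    mount -t ntfs-3g -o ro /dev/sda1 /mnt/win

# If the mount is refused because Windows shut down uncleanly,
# ntfsfix (from ntfsprogs) can clear the dirty/journal flag.
ntfsfix /dev/sda1

# Copy the data off to another disk or over the network.
rsync -av /mnt/win/ /mnt/rescue/
```

If the partition won't mount at all, ntfsprogs also includes `ntfsclone` for imaging the filesystem so recovery can be attempted on a copy.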
I'm trying to figure out rsync and am a little confused. Here is what I am trying to accomplish (usernames, passwords, and IPs changed to protect the innocent):
ServerA = Live production server
ServerA Username = rootA
ServerA Password = passwordA
ServerA IP = 123.111.222.AAA
Directories from ServerA to back up: /usr/home/www and /var/lib/mysql

ServerB = Backup server
ServerB Username = rootB
ServerB Password = passwordB
ServerB IP = 123.111.222.BBB
Location on ServerB where backups will be located: /usr/BACKUPS
rsync is located on ServerB at /usr/bin/rsync
I want to use ServerB to pull the data from ServerA to ServerB. I want to maintain all permissions, ownerships, etc. I will be putting this in a cronjob to run every 48 hours. After the rsync is done, I would like to see the following on ServerB:
Is this possible? And if so, what command usage would I use? And how would the password get passed?
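This is possible. The usual way to handle the password question is not to pass a password at all, but to set up SSH key authentication once so cron can run unattended. A sketch, assuming rsync runs over SSH and that root logins from ServerB to ServerA are permitted (the usernames and IPs below are the placeholders from above):

```shell
# One-time setup on ServerB: create a passphrase-less key and
# install it on ServerA so no password prompt ever appears.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
ssh-copy-id rootA@123.111.222.AAA

# Pull each directory, preserving permissions, ownership, timestamps,
# and symlinks (-a), compressing in transit (-z), and deleting files
# on the backup that no longer exist on the source (--delete).
/usr/bin/rsync -avz --delete rootA@123.111.222.AAA:/usr/home/www  /usr/BACKUPS/
/usr/bin/rsync -avz --delete rootA@123.111.222.AAA:/var/lib/mysql /usr/BACKUPS/
```

A crontab entry on ServerB such as `0 3 */2 * * /usr/BACKUPS/pull-backup.sh` would run the script at 03:00 every second day, which approximates the 48-hour interval. One caveat: rsyncing /var/lib/mysql while mysqld is writing can produce an inconsistent copy; a mysqldump taken first is the safer source for the database side.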
My host just recently sent the hard drive with my sites to a data recovery company called Gillware. Their website is [url] - but they failed and gave the following reason:
Originally Posted by Gillware
Unfortunately, your file system was so severely damaged that no data can be recovered. We will make arrangements to return your drive via UPS. Sorry we could not help you further.
Do you guys think there's still hope?
The hard drive is now being shipped to a better-known company, DriveSavers - [url] - and I'm guessing this is the last hope, because the more the drive gets handled, the greater the chance of permanent data loss.
So yeah.. I was just wondering what you think? If the file system is so severely damaged, do you think it can STILL be recovered?
We just upgraded our server with 8 brand-new Seagate Cheetah 15k.5s, a battery backup unit, and a 256 MB DIMM for the RAID controller. During the boot process, I noticed an error about caching or something.
After analyzing the dmesg log, I found the error:

sda: asking for cache data failed
sda: assuming drive cache: write through
It seems like the kernel can't read the RAID controller's cache settings, so it falls back to write-through.
I've benchmarked the hard disks with both the write-through and the write-back setting. The odd thing is that both settings deliver the same performance.
Normally, write-back increases performance by something like 100%... That's why we bought the battery backup unit.
So something is going wrong, but where does the problem lie?
8 x Seagate Cheetah 15k.5, U320, 16 MB cache, SCA, 73 GB
1 x Chenbro backplane, U320, SCA, 2 channels, 8 ports
1 x LSI MegaRAID 320-2x RAID controller, U320, 2 channels, battery pack and 256 MB upgraded DIMM
6 GB DDR PC3200, ECC, CL3
2 x AMD Opteron dual-cores (4 x 2.0 GHz)
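One thing worth noting: with a hardware RAID controller, /dev/sda is the logical drive the controller exports, so the dmesg message only means the controller didn't answer the kernel's query of the SCSI caching mode page; the controller's own write-back cache (the one the BBU protects) is configured per logical drive in the MegaRAID BIOS utility, not by the kernel. To see what the controller actually reports to the OS, one option is the sdparm utility (a sketch, assuming sdparm is installed and /dev/sda is the logical drive):

```shell
# Query the write-cache-enable (WCE) bit of the caching mode page
# that the RAID controller exposes for the logical drive.
sdparm --get=WCE /dev/sda

# Dump the full caching mode page for more detail.
sdparm --page=ca /dev/sda
```

If the controller simply doesn't expose that mode page, the kernel message is cosmetic, and identical benchmark numbers would instead point at the write-back policy not being enabled on the logical drive in the controller's configuration.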
I've already changed a lot of MX records but never run into a problem like this one ...
I googled it and found out that this might be a cPanel/WHM bug. I've emailed them, but they usually need a lot of time before they reply, so I will also ask here.
Here is error log from WHM:
Setting mx priority 10 (mx1.domain.com)........failed: Error fetching zone data for otherdomain.com.db's MX ...Done
Setting mx priority 20 (mx2.domain.com)........failed: Error fetching zone data for otherdomain.com.db's MX ...Done
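While waiting on cPanel, it may be worth checking whether the zone file the error refers to actually parses, since "Error fetching zone data" usually means BIND can't load it. A sketch, assuming a standard BIND layout with zone files under /var/named (the path is an assumption; the domain name is taken from the error above):

```shell
# Ask the local nameserver what it currently publishes for MX.
dig +short MX otherdomain.com @localhost

# Validate the zone file itself; a syntax error here would explain
# why WHM cannot fetch the zone data.
named-checkzone otherdomain.com /var/named/otherdomain.com.db
```

If named-checkzone reports an error, fixing the zone file (or regenerating it from WHM) and reloading BIND may resolve the MX update failure without waiting for a cPanel patch.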