I have 100+ sites on this hard drive, and one site in particular that meant the world to me.
My host sent the drive to Gillware first, but they failed, saying the file system was so severely damaged that they could not recover anything.
Then shortly after, my host sent it to DriveSavers, a very well-known company, but they also FAILED.
I'm extremely depressed because of this. Please don't post if you're going to say "make sure you do backups next time" because I've heard it 504329504395 times now, and while I do realize my mistake, saying that does NOT help me.
I am willing to spend a LOT to get my sites back. I still have hope. Are there any other companies out there BETTER than DriveSavers? Assuming you'd still have hope even after two companies failed, where would you go, or what would you do?
I am posting this thread to get other people's advice and to warn them about my bad experience with rsync. Luckily, I was able to get my data back from the old drive.
Three times a day, I run mysqldump and then rsync the dump to a drive located in a different state.
Everything was working fine. The rsync was transferring data daily and updating the backup on the other server. A few days ago there was a hard drive failure on my server, so I checked my backup drive for the mysqldump... it was 764 bytes instead of 5 GB.
Then I went to my other server, the rsync target, and to my surprise that copy was also 764 bytes instead of 5 GB, since rsync had synced the bad dump over both copies.
My backup strategy failed, and I would have been in tears if I couldn't grab the data from the failed drive.
I would like to hear everyone's views on this and learn from it.
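The failure mode above is worth spelling out: rsync faithfully replicated a broken 764-byte dump over every good copy, because nothing ever checked the dump before shipping it. A minimal sketch of a safer flow, with a size sanity check and dated filenames so a bad dump can never clobber the last good one (the paths, database name, and size threshold are placeholders to adapt):

```shell
MIN_BYTES=${MIN_BYTES:-1000000}   # refuse to ship a dump smaller than ~1 MB

# Return success only if the dump file exists and is at least MIN_BYTES long.
dump_looks_sane() {
    [ -s "$1" ] && [ "$(wc -c < "$1")" -ge "$MIN_BYTES" ]
}

backup_and_sync() {
    db=$1; dest=$2
    dump="/var/backups/mysql/$db-$(date +%Y%m%d-%H%M).sql"
    mkdir -p "$(dirname "$dump")"
    # --single-transaction takes a consistent InnoDB snapshot without locking
    mysqldump --single-transaction "$db" > "$dump" || { rm -f "$dump"; return 1; }
    dump_looks_sane "$dump" || { echo "dump looks too small, not syncing" >&2; return 1; }
    # Dated filenames mean a corrupt dump never overwrites the last good one
    rsync -a "$dump" "$dest"
}

# Example (commented out; needs a real database and a reachable remote host):
# backup_and_sync mydb backup@offsite.example.com:/backups/mysql/
```

A periodic job should also prune old dated dumps, but keeping at least a few generations is exactly what would have saved this poster.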
The host is going to mount this HD on the same machine after adding a new hard drive and doing a fresh install... Does anyone have any recommendations for how I can go about recovering the data? Specifically, MySQL databases?
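If the old drive is readable once mounted, one common approach is to copy the raw MySQL datadir off it and drop it into the fresh install. A sketch, assuming the default datadir layout and made-up mount points (adjust to what the host actually gives you):

```shell
# Copy a MySQL datadir tree, preserving modes and timestamps. InnoDB needs
# ibdata1, the ib_logfile* files, and the per-database directories copied
# together to stay consistent.
recover_datadir() {
    src=$1; dst=$2
    cp -a "$src/." "$dst/"
}

# Typical use (commented out; needs root and a stopped MySQL):
# mount -o ro /dev/sdb1 /mnt/olddrive       # read-only protects the old drive
# service mysql stop
# recover_datadir /mnt/olddrive/var/lib/mysql /var/lib/mysql
# chown -R mysql:mysql /var/lib/mysql
# service mysql start   # if InnoDB refuses to start, look at innodb_force_recovery
```

MyISAM tables usually come back cleanly this way; for InnoDB, matching server versions between the old and new install avoids a lot of grief.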
Windows 2003 crashed on a RAID 5 server. We tried to take the NTFS file system from the hard drive and mount it under Knoppix, booted from a CD-ROM drive. Knoppix could read the files but was unable to mount them, I guess for compatibility reasons.
Is there any way we can get a backup of that NTFS file system and restore our data?
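One thing worth trying from the live CD: older kernels' built-in NTFS driver often refuses volumes that Windows left "dirty" after a crash, whereas the ntfs-3g userspace driver is much more tolerant. A sketch, with the device name as an assumption (check `fdisk -l` for the real one):

```shell
# fdisk -l                              # find the NTFS partition, e.g. /dev/sda1
# mkdir -p /mnt/ntfs
# ntfs-3g -o ro /dev/sda1 /mnt/ntfs    # read-only mount avoids further damage
# rsync -a /mnt/ntfs/ /mnt/usb/backup/ # copy everything off before any repair
```

If ntfs-3g also refuses because the volume is marked dirty, running chkdsk from a Windows recovery CD first, then retrying the mount, often clears it.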
I'm posting this on behalf of a friend who is having a problem with his dedicated server (he does not speak English very well). He has an unmanaged dedicated server, changed the SSH port, and forgot what it is. He can still access his WHM right now, meaning he does know the root password; the problem is just that he forgot his SSH port.
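Since he still has root through WHM, he can read the port straight out of the sshd config from any shell WHM gives him (a terminal feature or a cron one-liner both work). A small sketch; the helper below just parses an sshd_config-style file and falls back to the default port 22:

```shell
# Print the Port directive from an sshd_config file; default to 22 if absent.
ssh_port() {
    awk '/^[Pp]ort[ \t]/ {print $2; found=1} END {if (!found) print 22}' "$1"
}

# On the server itself:
# ssh_port /etc/ssh/sshd_config
# netstat -tlnp | grep sshd     # shows the port sshd is actually bound to
```

The netstat check is the more authoritative of the two, since it shows what the running daemon is listening on rather than what the config file says.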
I had an issue with limestonenetworks over the last 24 hours. My two servers went down for 12 hours, and when they came back up, the separate MySQL server would not load. Somehow the hard drive on that server failed during this upgrade, and they cannot retrieve the databases on it, including a database containing over 60,000 members. Through my own undoing I didn't have a recent backup; a cPanel backup job had been backing up the same old file instead of the updated one.
I would have been happy if the site came back and at least the data was still there. I'm just upset that this "network upgrade" has fried my database. The lesson I take from this is "you get what you pay for". I'm just keeping my fingers crossed that they will be kind enough to make a serious attempt to recover the data, or at least let me send it to someone who can.
Does anyone have any suggestions on how to retrieve data from a Linux server hard drive?
Here are the steps that have been taken so far:
At the console, the server sat at a DISK BOOT FAILURE prompt. We first did the basics: disconnected the server and made sure all connections to the hard disk and motherboard were secure. Upon reboot we checked whether the BIOS detected the drive, which it did. The server still reported disk boot failure. We then rebooted back into the BIOS to make sure it was not set to skip the hard drive in the boot process.
It was not.
The "recovery tools" we ran was just a basic linux live cd to see if we could get the hard drive detected inside of an operating system. It would not even detect the disk in the machine. We did not perform anything on the drive to cause it to fail. I spoke directly to my manager who was supervising the move and I was told that all servers were shut down cleanly as well as moved safely and securely.
The hard disk is still detected by the BIOS, but it certainly has some internal issues, as you can hear constant buzzing and clicking during POST and after.
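A clicking drive is usually a mechanical problem, and every additional power-on risks making it worse; if the data matters, a cleanroom recovery firm is the safe route. If a software attempt is acceptable, the standard move once any machine can see the device is to image it with GNU ddrescue onto a healthy disk and work only on the copy. A sketch (device and paths are assumptions):

```shell
# First pass: grab the easy sectors quickly, skipping bad areas (-n).
# ddrescue -f -n /dev/sdb /mnt/healthy/drive.img /mnt/healthy/drive.map
# Second pass: go back and retry only the bad areas, up to 3 times (-r3),
# using direct disc access (-d).
# ddrescue -d -f -r3 /dev/sdb /mnt/healthy/drive.img /mnt/healthy/drive.map
```

The map file is what makes this safe to interrupt: ddrescue resumes where it left off and never re-reads sectors it already recovered, which matters a lot on a drive that may not survive many more spins.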
I bought a domain name and web hosting from a company. The problem is that the company I booked the domain and hosting from no longer exists.
Initially the hosting company sent me only FTP and mail server details. Now I need the cPanel details to configure a database. I kindly request you all to suggest some solutions for getting the cPanel details.
My server's hard disk crashed badly. The server's rescue mode couldn't help, so I've tried using some recovery software to get my data back.
I've tried Easy Recovery Professional. It sorts all the recovered files by file type into different folders. I found a folder named .DB, and there are also some .ado and .ldb folders. I guess one of them is my database. The problem now is that I don't know how to read the files.
Do you have any idea how to read them? I've tried many recovery tools, e.g. DiskInternals Linux Recovery and Disk Doctor Linux Recovery.
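Before guessing from the folder names the recovery tool invented, it's worth identifying the files by their actual contents. The `file` command reads magic bytes, so renamed or extension-less fragments are often still recognisable; MySQL table and index files, for instance, usually show up with a MySQL-specific description. A tiny sketch:

```shell
# Report the detected format of one or more recovered files, based on
# their contents rather than their names.
inspect_recovered() {
    file "$@"
}

# e.g. inspect_recovered recovered/.DB/*
```

Anything `file` identifies as a MySQL format can then be tried in a scratch MySQL datadir; files it flags as generic "data" may need a matching application to open.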
As I have never used cPanel and have no experience with control panels at all, while ordering a couple of new servers I am wondering what the average recovery time is after a server failure that involves data (i.e. a disk failure).
I want to understand this because I need to choose between hardware RAID and a backup disk.
For example, at LiquidWeb (just one of the many managed providers I am evaluating) you can pay $264 for a server with 2x250 GB drives in hardware RAID 1, or $219 for the same server with 2x250 GB plain drives, where the second one is supposed to serve as a backup drive.
I have yet to ask, but at these prices the servers most likely do not have hot-swappable drives that would let them rebuild the array "live". So even if I choose hardware RAID, in case of a disk failure I face some downtime: bring the server down, replace the disk, rebuild the array, bring the server back up, which I guess is not less than 30-60 minutes.
The advantage of this solution is that a disk failure keeps the system running, so you can schedule a maintenance window to replace the disk and rebuild the array, I guess.
On the other side, relying on a backup disk (and of course rsyncing data to an offsite server; I would do that anyway) saves $45 each month, and after all, disks do not fail every month.
If the main disk fails, they need to replace it and reinstall the server with what I suppose is a standard image (so figure a couple of hours, as they have a 30-minute hardware replacement SLA), and then I need to restore my backups (around 20 GB in my case). So I guess, on average, that means something like 4 hours of downtime.
Am I correct?
What would you recommend as a cPanel setup, taking into account the large price difference? Backup drive or hardware RAID?
This would be for a single big website, not a hosting company with resellers and customers, so I would value the monthly saving more than the high availability; but of course I am interested to know the average time to recover a cPanel server after a drive failure.
Besides that, I have another question, as I have never had a colocated server with hardware RAID, so I do not know anything about this.
When a drive fails in a hardware RAID 1 setup, how does the hosting provider get notified? I suppose it's not just an LED blinking on the server, as no one would see it.
With software RAID (like my computer at home) you get an email from mdadm saying "warning, array degraded", so you know quickly and can check it anytime with ' cat /proc/mdstat '. What about hardware RAID?
My fear is that nobody notices my RAID 1 drive failure, the server keeps running on just one drive, and then maybe the other drive fails too, which would be unpleasant, whereas with software RAID a single drive failure would obviously be very easy for me to spot.
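For what it's worth, hardware RAID monitoring usually works the same way as the mdadm email, just via the controller's vendor CLI (tw_cli for 3ware, MegaCli for LSI, arcconf for Adaptec, and so on); either the vendor's own monitoring daemon mails on degradation, or a cron job polls the CLI. A sketch of the cron-job flavour; the exact status command and the "healthy" strings vary by controller and are assumptions here:

```shell
# Run the vendor's status command (passed as $1) and print a warning only
# when the output does not look healthy; cron mails any output it sees.
check_raid() {
    status=$($1)
    case $status in
        *Optimal*|*OK*) : ;;                      # healthy: stay quiet
        *) echo "RAID degraded: $status" ;;
    esac
}

# Saved as a script and run from root's crontab, something like:
# */15 * * * * /usr/local/bin/check_raid.sh 'MegaCli -LDInfo -Lall -aALL'
```

So the practical answer to "how does the provider notice" is: only if someone set up exactly this kind of monitoring, which is a fair question to ask a provider before ordering.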
How can I find the data transfer rate on the server? I have run ifconfig -a, and it displays the amount of data that has been received and transmitted. But I want to see the live transfer rate. How can I check that?
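Tools like iftop, bwm-ng, or vnstat will show live per-interface rates. Without installing anything, you can also compute the rate yourself by sampling the same kernel counters ifconfig reads, one second apart. A sketch; the interface name is an assumption (see `ls /sys/class/net` for yours):

```shell
# Read the cumulative received-bytes counter for an interface.
rx_bytes() {
    cat "/sys/class/net/$1/statistics/rx_bytes"
}

# Sample the counter twice, one second apart; the difference is the rate.
rate_in() {
    a=$(rx_bytes "$1"); sleep 1; b=$(rx_bytes "$1")
    echo "$((b - a)) bytes/sec received on $1"
}

# rate_in eth0
```

The same pattern works for outbound traffic via the tx_bytes counter, and `watch -n1 ifconfig eth0` gives a cruder version of the same idea.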