Should We Have Raid 1 Or Should We Have A Backup Drive
Oct 29, 2007
We are limited to a maximum of 2 drives per server, with a maximum drive size of 750GB.
We are thinking of going with two 500GB hard drives. However, the question is: should we use the secondary drive for RAID 1 and let our VPS clients worry about their own backups, or should we instead use the secondary drive as a backup drive and back up each VPS nightly?
My RAID 1 failed, and it wouldn't be such an issue except that the other drive hadn't been syncing for 2 months for some reason.
So now I have to try to recover it to get the data back.
The drive itself seems to be OK, yet I am unable to boot from it.
Using Super Grub Disk I was able to boot to a kernel, but when it starts loading there is a kernel panic.
The error is basically:
EXT3-fs: unable to read superblock
mount: error 22 mounting ext3
When I use the rescue option on a CentOS CD, I am unable to mount the drive.
Using Knoppix, I was able to see the drive but unable to mount it, since it claims the second partition is not clean. Since the drive was part of a RAID array, I don't think that's the problem at all.
Is there anything you guys can advise? I'm somewhat new to doing this and really green on RAID, for that matter.
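For anyone hitting the same wall, here is a rough sketch of what I'd try from a Knoppix or CentOS rescue shell. It assumes the data partition is /dev/sda2 and the array was /dev/md0 (both names are guesses based on a typical setup, adjust to match yours):

# A RAID member often won't mount directly, so try assembling the degraded array first.
mdadm --assemble --run /dev/md0 /dev/sda2
mount -t ext3 /dev/md0 /mnt

# If the superblock really is damaged, list the backup superblock locations
# and fsck against one of them (32768 is typical for 4KB-block filesystems).
mke2fs -n /dev/md0
e2fsck -b 32768 /dev/md0

# As a last resort, mount read-only via a backup superblock just to copy data off
# (mount's sb= option is in 1KB units, so block 32768 becomes sb=131072).
mount -t ext3 -o ro,sb=131072 /dev/md0 /mnt

None of this is guaranteed to work, but it at least tells you whether the filesystem is recoverable before reaching for more drastic tools.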
I am currently in the process of upgrading my web/MySQL server due to heavy load and I/O waits, and I have some questions. I am trying to be cost-efficient, but at the same time I do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a CentOS Linux box running both Apache and MySQL. The current usage on the box is:
MySQL stats:
50 MySQL queries per second, with a read-to-write ratio of 2:1. Reads are about 65 MB per hour and writes are around 32 MB per hour.
Apache stats:
35 requests per second
The two issues that I am unsure of are:
- Whether I should go with RAID-1 or RAID-5
- Whether I should use SATA Raptor drives or SAS drives.
In either configuration I will use a dedicated RAID controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though reads are definitely more important based on my stats, I don't want to lose write performance either. With this in mind, I looked into doing RAID-1 instead.
I came up with these choices:
- RAID-1 - 2 x Seagate Cheetah 15K.5 ST373455SS SAS. HDs & controller cost $940.
- RAID-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs & controller cost $652.
- RAID-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB SAS. HDs & controller cost $869.
- RAID-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs & controller cost $631.
As you can see, we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a fourth drive as a hot spare, but I will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the fourth immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can enable database applications to perform better due to lots of small, random reads and writes?
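To put the RAID-5 write overhead into perspective, here is a back-of-the-envelope comparison. The figures are illustrative, derived from the stats quoted above rather than measured:

# ~50 queries/sec at a 2:1 read:write ratio => roughly 17 writes/sec
# RAID-1: each logical write costs 2 physical writes (one per mirror)
#   17 * 2 = ~34 physical writes/sec
# RAID-5: each small write costs ~4 physical I/Os
#   (read old data + read old parity + write new data + write new parity)
#   17 * 4 = ~68 physical I/Os/sec
# Both arrays still have to service the read load on top of that, so for small
# random writes the mirror generally comes out ahead.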
I recently built a server with an Asus M2N-MX SE motherboard and a SuperMicro 14" mini 1U chassis. The back of the Asus M2N-MX SE manual says that for the RAID driver, I need to create it from the included CD and use a floppy disk. My question is: how can I do this without a floppy disk? I have an external DVD burner that I hook up over USB to install the OS. Is it possible to use a CD to install the driver when I press F6 during the Windows 2003 installation?
Is it worth the effort to set up RAID 1? I have two Maxtor 500GB SATA disks, and using RAID 1 seems to cost me one disk and leave me with 500GB of usable space. Also, is the onboard Nvidia RAID trustworthy? The manual says that due to a chipset limitation, the SATA ports supported by the Nvidia chipset do not support serial optical disk drives (Serial ODD).
I have software RAID installed with one SATA and one ATA/IDE drive. It is a combined controller, so I had to add noprobe=/dev/hdc to the kernel boot line. Now the disks are named /dev/sda and /dev/sdb. There are four partitions: /dev/sda1 and /dev/sdb1 form the /dev/md0 root partition, and /dev/sda2 and /dev/sdb2 are the swap partitions.
At first, when I removed one drive, I just ended up at the GRUB command line.
Then I tried the following in GRUB to make both drives bootable:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
Now everything looks normal when I get to GRUB, apart from the background of the boot screen being black instead of blue, but then the computer just restarts when it is supposed to start booting the system.
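For what it's worth, the variant of those GRUB commands that most software-RAID guides recommend temporarily maps the second disk as (hd0) before running setup, so that the copy of GRUB on /dev/sdb can still find its stage files when the first drive is missing and the BIOS presents sdb as the first disk (device names assumed to match the post):

grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit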
If I use, say, two SAS 36GB 15K RPM drives with 16MB cache, identical specs but from different manufacturers/models, do you guys think I would run into anything weird? I've never really tried it. Is matching absolutely required? You never know... I doubt seek times differing by a millisecond would cause issues, but I just want to check.
I'm planning to buy a server from SoftLayer. Adding a single 300GB 15K SCSI drive costs $100/month, while adding four 250GB SATA drives in RAID-10 costs $90/month.
If there is a failed drive in a RAID-1 array running on a Dell 2850 with FreeBSD 5.4, can I just take out the failed drive and replace it with a new one while the server is running? Will FreeBSD cope and rebuild the drive on the fly?
I use the second hard drive as a backup drive. Today, as usual, I checked the backup directory and tried to run the ls command. I got this warning: [root@xxxxxx backup]# ls
After adding a second drive mounted as /home2 for backups, I attempted to use the cPanel "Configure Backup" feature and I got the following message:
Quotas cannot be built!
Your cpbackup destination is on a filesystem which has quotas enabled.
Please move it to a filesystem which does not have quotas turned on or a separate partition/disk slice mounted at /backup. Backup has been disabled to prevent quota problems...
All my searches on how to resolve this have turned up no real solutions.
I did a Google search on "disable filesystem quota", among others, and could find nothing.
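In case it helps anyone searching for the same error, the usual fix is either to mount the backup drive at /backup (which the message itself allows) or to remount the backup filesystem without quota options. A sketch, assuming /home2 is the backup mount as in the post above:

# See which filesystems currently have quotas turned on.
quotaon -pa

# Remove the usrquota/grpquota options from the /home2 line in /etc/fstab,
# then turn quotas off on that filesystem and remount it.
quotaoff /home2
mount -o remount /home2

# Let cPanel rebuild its quota state afterwards.
/scripts/fixquotas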
Today we are going to take the study of the RAIDability of contemporary 400GB hard drives to a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
My hosting company is offering to sell me backup space which will be mounted as a network location, e.g. /usr/local/backup.
My question is how I would use this, since Plesk seems to be configured either to use an FTP location or to use the server repository. Would I have to change the location of the server repository to this network location? That doesn't seem ideal.
We have a thumb drive attached to a CentOS (Linux) box, and MySQL backups are automatically saved to it. Is there a way (and how) to automatically purge backup files once they are older than, say, 15 days? I suppose I would need to write a script for this and place it in the cron scheduler.
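A hedged sketch of the kind of script you'd drop into cron, assuming the thumb drive is mounted at /mnt/usbdrive and the dumps are gzipped .sql files (both of those are placeholders; adjust to your layout and test with -print alone before trusting the rm):

#!/bin/sh
# Purge MySQL dump files older than 15 days from the backup thumb drive.
BACKUP_DIR=/mnt/usbdrive
find "$BACKUP_DIR" -type f -name '*.sql.gz' -mtime +15 -print -exec rm -f {} \;

Then schedule it, for example as a daily root crontab entry:

30 3 * * * /usr/local/sbin/purge-mysql-backups.sh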
I haven't yet broken into the realm of dedicated servers, although I have a decent VPS and am anticipating the need to get a dedicated server in the future.
Hence I'm briefly wondering why exactly RAID (insert some random number?) is recommended. I know it has something to do with protecting against hard drive failure, but would an efficient backup system be a decent alternative with regard to cost?
I've recently put together a server in a hurry and overlooked an important aspect - data integrity after power loss. I'm using Linux software RAID-1 with two 150GB WD Raptors but I'm worried that data could be lost due to having write-back cache enabled without a battery backup unit. I would rather not disable the write-back cache for performance reasons.
What is the cheapest way to get a battery backup solution for Linux software RAID? Do I have to use a hardware RAID card or do standalone battery backup units exist that can use existing motherboard SATA ports?
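For reference, if you ever decide the risk outweighs the performance hit, the usual software-RAID fallback is simply to turn off the drives' own write-back cache rather than add hardware; a sketch, assuming the Raptors appear as /dev/sda and /dev/sdb:

# Query the current write-cache setting.
hdparm -W /dev/sda

# Disable write-back caching on both RAID members (on some drives the setting
# does not survive a power cycle, so it is usually re-applied from an init script).
hdparm -W 0 /dev/sda
hdparm -W 0 /dev/sdb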
1- I did an OS reload with a new hard drive for /home
2- The data on the "backups" drive was lost
3- The old /home drive was put back in as "/olddrive"
4- "/olddrive" is now the secondary drive in my server, and it has all the sites, users and everything
5- What I need is to transfer/copy these sites from "/olddrive" to /home
But the data center said:
The /olddrive/home directory contains the contents that were previously in the /home directory. You can copy files from this directory to any other directory on your server.
The command to copy files in the UNIX environment is the "cp" command.
The user directories in /olddrive/home directory contain the web page files for the users. However, simply copying the contents over will not recreate the users or domain entries in DNS/httpd. If you wish these back you will need to recreate them manually or restore them from backups.
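Assuming the old drive really is mounted at /olddrive as the data center describes, the copy itself would look something like this ("exampleuser" is a placeholder account name):

# Copy a single account's files, preserving ownership and permissions.
cp -a /olddrive/home/exampleuser /home/

# Or copy everything under the old /home in one go.
rsync -avH /olddrive/home/ /home/

As the data center notes, this only moves the files; the users, DNS zones and Apache entries still have to be recreated or restored from backups.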
The server is displaying these errors when I try to run fsck: Bad inode IO, ext3-fs error (device(8,3)), IO Failure.
I am having a new primary drive installed, with the old primary set up as the second drive. I need to recover the cPanel domain accounts from this second drive after I mount it with the method below:
mkdir /backup
mount /dev/sdb1 /backup
However, how do I actually recover these accounts in an automated way via WHM? I've done this before in the same situation (corrupt primary drive, mounted as the second drive, etc.) but cannot remember the exact steps.
I just purchased a brand new 10K RPM 150GB drive. How can I take an exact copy of my current drive and transfer everything over to the new drive? I think I need to create a snapshot, or mirror it somehow.
What software will do this? I was told True Image, but it's very pricey; is there anything else?
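If you just want a sector-for-sector copy, dd from a live CD will do it for free; a minimal sketch, assuming the current drive is /dev/sda and the new 150GB drive is /dev/sdb (check with fdisk -l first, because swapping the two will wipe the source, and the destination must be at least as large as the source):

dd if=/dev/sda of=/dev/sdb bs=1M

Free imaging tools such as ddrescue or Clonezilla cover the same ground with better error handling and progress reporting.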
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as RAID 10 on 4x 15K SAS drives will be fine for speed and stability. What is an issue is whether this onboard RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 under Windows, does that suggest the controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't cope during a drive failure, or because one day it decided to mangle the array when rebooting.
That leads me to this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery. Or does it just keep the controller alive long enough to hold some of the drive data in its memory if the power goes out during a rebuild?
I am in a somewhat complicated situation. I wanted to order a custom server with a 3ware hardware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that Fujitsu-Siemens simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat... moderately... scared.
I have read a lot of the information about software RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem.
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred software RAID setup is to have everything in RAID 10, except for the /boot partition, which has to be on RAID 1 or no RAID I believe, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so I'll have a functional system that needs to be migrated to RAID-10.
How do I do that? The big problem I see is that LILO and GRUB can't boot from a software RAID-5/10 array, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url]), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
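Since this question comes up a lot, here is a rough sketch of the mdadm side, assuming for brevity that four of the disks are sda-sdd and that each one carries a small first partition for /boot and a large second partition for the main array (the partitioning is done beforehand with fdisk, type fd):

# Small RAID-1 mirror across every disk's first partition for /boot;
# GRUB reads a RAID-1 member like a plain ext3 partition, so any disk can boot.
mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# The rest of the space as RAID-10 for / and everything else.
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1

You then install GRUB into the MBR of every disk (the same device/root/setup dance discussed earlier in the thread) so the box still boots with any single drive missing.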
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server to 4GB of RAM in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't even mention RAID-10, even though mdadm does support it, without having to create RAID-0 on top of RAID-1 pairs, as long as the support is in the kernel, from what I know.
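For comparison, the RAID-5 grow that those articles describe is just (device names are placeholders):

mdadm --add /dev/md1 /dev/sde2
mdadm --grow /dev/md1 --raid-devices=5

Whether the same --grow reshape is accepted for a raid10 array depends on the mdadm and kernel versions you end up with; older releases simply refuse to reshape raid10, so check the mdadm man page on the actual system before planning around it.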