RAID0, RAID1, RAID5
May 30, 2008
Which to choose? RAID0, RAID1, or RAID5?
Can anyone explain which one is best and why?
What do you feel is better for a typical shared/reseller hosting server?
RAID 1 or RAID 5 (with three drives)?
I use an Adaptec 2200s with three SCSI hard drives in RAID5, and the server has CentOS with cPanel installed.
After a reboot, the RAID5 array no longer comes up properly: the controller shows no logical drives, and no operating system is found.
I tried adding a new hard drive, but the array still won't build and run.
I think the RAID5 configuration on the three drives may be missing or corrupted, or the array simply won't work at all.
What I want to ask is this: suppose I use another three drives with the Adaptec 2200s to set up a second RAID5 array and build another server.
Can I then mount the first drive from the old RAID5 and the first drive from the new RAID5, and copy the files from the old ones to the new ones?
I am confused about the difference between RAID10 and RAID0+1 after reading these two links:
http://www.acnc.com/04_01_10.html
http://www.acnc.com/04_01_0_1.html
On the RAID10 link, is the right side of the picture correct?
I'm still just confused between RAID10 and RAID0+1. Does anyone have a clearer diagram of the two, with a good example?
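Here is a rough four-disk sketch of how I currently understand the two (disks D1-D4 are just placeholders); please correct me if this is wrong:

RAID 10 (stripe of mirrors):          RAID 0+1 (mirror of stripes):
  mirror M1 = D1 + D2                   stripe S1 = D1 + D2
  mirror M2 = D3 + D4                   stripe S2 = D3 + D4
  data is striped across M1 and M2      S1 is mirrored onto S2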
I'm a little new to RAID5; I've always used RAID10 or RAID1, but I am looking into RAID5 for my new server.
Tell me which you think would be more beneficial. I know RAID10 is faster for writing and copying, which is what swap needs, but I also know that RAM is a lot faster than swap.
8GB RAM
2GB SWAP Partition
4x250GB RAID5
7GB RAM
3GB SWAP Partition
6x250GB RAID10
or
8GB RAM
2GB SWAP Partition
3x400GB RAID5
Are 3 drives or 4 drives faster in a RAID5? I know 4 drives would probably read faster, but would copying and writing be slower?
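If my math is right, the usable space of the three options above works out as follows (RAID5 loses one drive to parity, RAID10 loses half to mirroring):

4 x 250 GB RAID5  -> (4 - 1) x 250 = 750 GB usable
6 x 250 GB RAID10 -> (6 / 2) x 250 = 750 GB usable
3 x 400 GB RAID5  -> (3 - 1) x 400 = 800 GB usable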
I am trying to configure a 2.6.18 kernel and I cannot get it to pick up the raid5 module, which I need because my system is running on software RAID5.
Whenever I build I receive the following error:
[root@localhost linux-2.6.18.8]# make install
sh /root/kernel/linux-2.6.18.8/arch/x86_64/boot/install.sh 2.6.18 arch/x86_64/boot/bzImage System.map "/boot"
WARNING: No module raid5 found for kernel 2.6.18, continuing anyway
I have tried versions 2.6.18 and 2.6.18.8 and both give me the same issue.
I have CONFIG_MD_RAID456=m in my .config so the module should be getting configured.
I tried ignoring the warning, but when I boot my system into 2.6.18 I get a kernel panic ("could not sync"), I'm guessing because it doesn't have the RAID module. Everything works fine when I boot into the 2.6.9 CentOS kernel.
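One thing I noticed is that with CONFIG_MD_RAID456 the RAID5 code seems to build a module called raid456 rather than raid5, which may be why the install script complains about a missing raid5 module. This is roughly what I have been trying, with the paths taken from the build above; I am not sure whether building the driver in or regenerating the initrd is the right fix:

grep CONFIG_MD_RAID456 .config                   # confirms it is set to =m
make && make modules_install && make install     # full build so raid456.ko actually gets installed
# or keep it as a module and rebuild the initrd so raid456 is available at boot:
mkinitrd --with=raid456 /boot/initrd-2.6.18.img 2.6.18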
How long does it take to rebuild a RAID5 array after a failed disk is replaced? How long until it reaches full performance again?
Our support replaced the disk in our server 2 hours ago, but it's still extremely slow.
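Assuming it is Linux software RAID (I am not certain whether the controller is doing it in hardware), this is what I was going to check for progress; /dev/md0 is just an example device name:

cat /proc/mdstat              # shows recovery progress and an estimated finish time
mdadm --detail /dev/md0       # the "Rebuild Status" line during a rebuild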
I have a problem with performance on my server.
My server is a Supermicro PDSME+ with 3 x 500 GB WD RAID Edition drives, one of them a hot spare.
When I migrated from RAID-1 to RAID-5, performance dropped dramatically. I have read various posts on the internet suggesting that the drives be forced to run at 1.5 Gb/s. On the Supermicro support page there is a similar issue, and the support staff explain that the ICH7R does not support SATA-II.
Yet when I review the specification on their web site, it says the motherboard supports 3.0 Gb/s.
Does anyone have a similar server with a similar issue?
I have never used RAID1, but I am considering a RAID1 server.
Let's say one hard drive fails and the DC replaces it with a new one. Will the system automatically copy the data onto the new drive, or do we have to run some commands to copy the data over?
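For example, with Linux software RAID (mdadm) I assume the replacement disk has to be partitioned and added by hand before the mirror rebuilds, something like this (device names are only examples; a hardware controller would normally rebuild onto the new disk by itself):

sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition table from the surviving disk
mdadm /dev/md0 --add /dev/sdb1          # add the new partition; the resync starts automatically
cat /proc/mdstat                        # watch the rebuild progress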
We are expanding our photo sharing business and are revising our Unix-based server architecture. We're looking to develop a standard server configuration so that we can easily add servers when necessary.
Our ISP has recommended a configuration with mirrored web servers and mirrored RAID5 NAS boxes. I've read about Google's server architecture which consists of identical mirrored servers; when a drive or part from one of those servers goes down, data is served from the mirrored servers and the bad machine is repaired or replaced.
Comparing the two architectures with similar storage sizes, the overall cost of the hardware itself is about the same, with the identical mirrored machines being slightly cheaper. The monthly co-location fees (rack, power, etc.) are higher for the NAS solution.
I'm interested to hear your thoughts and experiences with similar solutions. I know the web/NAS solution is popular, and it's probably the one we'll go with, initially at least. Has anyone here implemented a Google-like identical mirrored server solution?
I set up a software RAID5 the following way:
/dev/sda:
1: /boot 101MB
2: software raid ALL
/dev/sdb
1: software raid ALL
/dev/sdc
1: software raid ALL
/dev/sdd
1: software raid ALL
/dev/md0: all of the software RAID partitions combined into one array, formatted ext3 and mounted as /.
I was led to believe this would give me redundancy as long as only one drive is removed from the array. However, when I unplug any of the hard drives (one at a time) I get input/output errors, and when I try to reboot I get kernel sync errors.
What exactly am I doing wrong? I know that sda contains the /boot partition, so the system won't boot without that drive, but even if I unplug only sdb, sdc, or sdd it still can't sync.
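For what it's worth, this is roughly how I have been checking the array; the device names match the layout above:

mdadm --detail /dev/md0                                          # level, number of members, and array state
cat /proc/mdstat                                                 # whether the array is active or degraded
mdadm --assemble --run /dev/md0 /dev/sda2 /dev/sdb1 /dev/sdc1    # try to start it with one member missing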
I have to move away from the Supermicro servers and use only Dell, and I have this question.
Is there a big difference in performance between these two RAID configurations?
Dell - 2 x 1TB RAID1 PERC6
Supermicro - 4 x 500GB RAID10 3ware 4 port
It is for use with web hosting.
I have 2 x 400 GB hard drives in my server.
I am running software RAID1 on the server. Is it possible to disable RAID1, use the secondary 400 GB drive as additional storage (800 GB in total), add another 1000 GB drive to the server, and then run RAID1 again so that the data is mirrored onto the 1000 GB drive?
If this is not the right forum for this, I'm sorry; I didn't know where else it should go.
I have to build a new server with RAID 1 and WHM/cPanel installed (in fact I don't have to, but I need to learn ASAP and my boss gave me an old server to practice on).
I've seen the cPanel installation guide, but the partition sizes it gives apply to an 80 GB disk (I think), so is there any way to calculate the partition sizes regardless of disk size? Mine are 250 GB each.
I'm trying to install it on CentOS 5 in text mode, and so far I have been able to successfully install the system with RAID 1 (with partitions of whatever size, since it's a test it doesn't matter).
After that I ran cat /proc/mdstat, and for some of the partitions it shows me this:
resync=DELAYED
I've read in some places that this is not a big issue, but other places say it is. Maybe I did something wrong.
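From what I can tell, arrays that share the same disks resync one after another, so the output looks roughly like this while the first array is still syncing (array and device names here are just illustrative, and I have omitted the exact numbers):

md0 : active raid1 sdb1[1] sda1[0]
      ... resync = ...% ... finish=...min ...
md1 : active raid1 sdb2[1] sda2[0]
      ... resync=DELAYED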
I would like to configure a RAID 1 setup. Do I need extra hardware, or can I set it up if I just have 2 hard drives?
I plan to install a server with RAID running on a dedicated card that supports FreeBSD. As I don't have much experience with this,
Seems to be rather new, [url] and belongs to intergenia (server4you, plusserver).
4GB / 5000GB / Quad Opt. / HW Raid 1 2x250 for €79
8GB / 10000GB / 2x Quad Opt. / HW Raid 1 2x500 for €129
16GB / 15000GB / 2x Quad Opt. / HW Raid 5 3x500 for €179
Month-to-month, overage €0.19, CentOS available.
Anyone have a server there?
If this were from hetzner.de I would be all over it...
How can I set up software RAID1 on CentOS 5?
Can you provide any reference?
I saw [url] but I don't know whether this needs a /boot partition or not, and if it does, must that partition be set up as RAID1 too?
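To make clear what I'm aiming for, this is the kind of layout I had in mind (only a rough sketch; it assumes two disks sda and sdb with matching partitions of type fd already created):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # small /boot mirror so GRUB can read it
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # root filesystem mirror
mdadm --detail --scan >> /etc/mdadm.conf                                 # record the arrays so they assemble at boot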
How can I configure LVM with RAID1 in a text-mode CentOS 5.3 installation?
I need to do it for a Xen installation.
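What I mean is roughly this layout (a sketch only; /dev/md1, the volume group name vg0, and the sizes are just example values):

pvcreate /dev/md1              # use the RAID1 array as an LVM physical volume
vgcreate vg0 /dev/md1          # volume group on top of the mirror
lvcreate -n root -L 20G vg0    # logical volume for /
lvcreate -n xen -L 100G vg0    # space for Xen guest images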
I have configured a Xen setup on a dual Xeon system with a 3ware 8506 2-port SATA controller.
The array is configured as RAID1 and I am using LVM.
I get really slow access on the virtual machines, and when I create a big ext3 filesystem, the system nearly freezes.
I am looking for better disk performance. Due to the tight budget, I have to choose one of following options as my disk choice:
2 SATA-II disks in RAID0, 7200 rpm, 32 MB cache per disk
1 SAS disk, 15000 rpm, 16 MB cache
Which one will be better, and by how much, if everything else (hardware and OS) is the same?
I just got 2 dedicated servers, and while creating software RAID 1, during the initial sync I'm getting around 7 megabytes per second (6700 KB/s), which I assume is the write speed.
This is a quad-core, SATA-II setup...
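For reference, this is how I have been checking whether the resync is being throttled by the kernel defaults (I am not sure that is actually the cause; 50000 is just an example value in KB/s):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max   # current minimum and maximum resync speed limits
echo 50000 > /proc/sys/dev/raid/speed_limit_min                             # raise the floor for the resync speed
cat /proc/mdstat                                                            # current sync speed and estimated finish time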
My dedicated server has 2 HDDs, but I am not going to pay another $25/month for the hardware RAID solution (I'm already stretched too far).
My plan is to install FreeBSD 6 and use gmirror to establish a RAID-1 "soft" mirror.
Advantages: the entire drive is mirrored, including the OS. Drives can be remotely inserted into or removed from the mirror set using a console command, so it's possible to uncouple the mirror, perform software updates on a single drive, and re-establish the mirror only after the updates have proved successful (see the sketch below).
Disadvantages: lower I/O than a hardware solution (not a problem for me). Others?
I rarely see people consider software RAID for a tight-budget server and I am wondering why. Could it be that other OSes don't have a solution as good as gmirror? Or is it just that crappy soft-RAID in the past has left a bitter taste in admins' mouths? Or perhaps admins need the extra I/O of hardware?
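The workflow I have in mind looks roughly like this; it is only a sketch based on my reading of the gmirror man page (the mirror name gm0 and the disk names ad0/ad1 are assumptions):

gmirror label -v -b round-robin gm0 ad0    # create the mirror on the first disk
gmirror insert gm0 ad1                     # attach the second disk; it syncs in the background
gmirror remove gm0 ad1                     # detach one side before a risky update
gmirror insert gm0 ad1                     # re-attach afterwards; gmirror rebuilds it
gmirror status                             # check the sync state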
About the hard drives, there are two options: the first is four 7200 rpm SATA drives in RAID 10, and the second is two 10000 rpm SATA drives in RAID 1. In terms of performance, which one will be better?