400GB Hard Disk Drives In RAID 0, RAID 5 And RAID 10 Arrays: Performance Analysis
Mar 7, 2007
Quote:
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted to my system and, in the event of a system failure, just pop in a new hard drive, reload the OS, and then reload all my backups.
I was just wondering what everyone uses to monitor their RAID arrays? We are looking to get things set up so we can either 1) have the system send us an email when a drive in the array becomes degraded or 2) somehow tie the RAID monitoring in with our Nagios cluster to alert us.
We use Adaptec 2120S RAID cards with Seagate Cheetah hard disks. I looked at the Adaptec website a while back and found some sort of monitoring tool, but was not able to download it after entering the card serial number, as their site just timed out. I also see Adaptec offers a Storage Manager product.
What would everyone recommend? We are not looking for anything overly fancy here.
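For hardware RAID like the Adaptec 2120S, one low-tech approach is a small NRPE/Nagios check that just greps the controller CLI's logical-device status. A minimal sketch is below; it assumes Adaptec's arcconf CLI is installed and that its output contains a "Status of logical device" line, so treat the path, command and grep pattern as assumptions to adjust for whatever your tool actually prints:
Code:
#!/bin/sh
# Hypothetical Nagios plugin: CRITICAL if any logical device is not Optimal.
# The arcconf path/invocation is an assumption; point it at your controller CLI.
STATUS=$(/usr/StorMan/arcconf getconfig 1 ld 2>/dev/null | grep -i "Status of logical device")
if [ -z "$STATUS" ]; then
    echo "UNKNOWN - could not query RAID controller"
    exit 3
fi
if echo "$STATUS" | grep -qiv "Optimal"; then
    echo "CRITICAL - $STATUS"
    exit 2
fi
echo "OK - $STATUS"
exit 0
For Linux software RAID the email side is already built in: put a MAILADDR line in /etc/mdadm.conf and run `mdadm --monitor --scan` (most distros ship an init script for it) and it will mail you when an array degrades.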
I have room for 4 more hard drives in my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can use the same 4 drives and give more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might actually perform worse as software RAID than it would on hardware, compared to RAID 5. Would RAID 5 with 4 drives be better in my case?
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does that achieve? Surely if the power dies the hard drives and motherboard can't run off that little battery, or does it just keep the controller alive long enough to hold the data in its memory if the power goes out during a write or rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different webhost or set up a software RAID. The problem is, I haven't done that before and am somewhat... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4 GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up a RAID-5/10 with mdadm (e.g. [url]), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
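For what it's worth, here is a minimal sketch of the mdadm side, assuming eight data disks /dev/sda through /dev/sdh plus /dev/sdi as the spare, each with a small first partition for /boot and the rest in a second partition (device names, partition layout and sizes are placeholders, not your actual setup):
Code:
# /boot: small RAID-1 mirrored across every disk, so the boot loader can read
# it from any single member and any drive can boot the box
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# main array: RAID-10 over the large partitions, with the 9th drive as hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 \
    /dev/sd[a-h]2 /dev/sdi2

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1

# swap: either a small RAID-1 built the same way as md0, or simply a swapfile
# on md1; a swapfile on RAID-10 is fine performance-wise unless the box is
# swapping heavily, in which case more RAM is the real fix anyway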
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't really even mention RAID-10, even though it does support it without having to create RAID-0 on top of RAID-1 pairs, as long as the support is in the kernel, from what I know.
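On the growing question, a hedged note: the commands below are the standard RAID-5 reshape pattern (device names are placeholders); whether the same --grow works on a RAID-10 array depends entirely on the mdadm and kernel versions (older ones cannot reshape raid10 at all), so check `man mdadm` on the actual box before counting on it.
Code:
# classic RAID-5 style grow: add the new disk as a spare, then reshape
mdadm --add /dev/md1 /dev/sdj2
mdadm --grow /dev/md1 --raid-devices=9

# once the reshape finishes, enlarge the filesystem on top
resize2fs /dev/md1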
I have a Dell PowerVault 725N with a 4-HDD RAID 5 setup.
The server has died and the BIOS error message shows that 2 hard drives have failed. I cannot boot into Windows.
The data is crucial. What are my options for data recovery?
I really hope I can recover the data; I doubt that two HDDs actually failed at the same time without giving any warnings. I hope it's the RAID controller.
I would like to hear pointers from the community on how to recover important data from the RAID.
Are there any companies or software tools that could help with this, assuming it is an HDD failure and not a controller issue?
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while the system is live, and even if it's the OS drive? Can you software-RAID the OS drive remotely at all? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
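It is possible, and the usual remote-friendly trick is to build a degraded RAID-1 on a second empty disk, copy the running system into it, boot from the degraded array, and only then add the original disk as the second mirror half. A rough sketch, assuming /dev/sda is the current OS disk and /dev/sdb is an identical empty one (both names are placeholders), and glossing over the fstab/GRUB edits:
Code:
# copy sda's partition table to sdb
sfdisk -d /dev/sda | sfdisk /dev/sdb

# create the mirror with the old disk deliberately "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# filesystem, copy the live system, then point fstab and grub.conf at /dev/md0
mkfs.ext3 /dev/md0
mkdir -p /mnt/newroot
mount /dev/md0 /mnt/newroot
cp -ax / /mnt/newroot

# after rebooting onto md0, fold the original partition into the array
mdadm --add /dev/md0 /dev/sda1
One reboot (and a working rescue console in case GRUB disagrees with you) is the unavoidable part; the copy and the final resync both happen while the system is up.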
I've just bought myself a Linux-based NAS for storage/backups at home and a couple of WD GreenPower (non-RAID edition) HDDs.
For those who don't know what TLER (Time Limited Error Recovery) is: without it enabled, the HDD does its own error recovery, which may take longer than a RAID controller is prepared to wait, in which case the drive gets kicked out of the array. With TLER on, the idea is that the drive keeps notifying the controller, or hands the error back quickly enough for the controller to handle it.
So, my actual question is: does Linux software RAID benefit from TLER being enabled, or is it best to let the drive do its own thing?
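One practical angle, hedged because support varies by drive firmware and tool version: newer smartmontools can read (and on many drives set) the SCT Error Recovery Control timers that TLER exposes, and for md you can alternatively just raise the kernel's per-disk command timeout so a slow in-drive recovery doesn't look like a dead disk:
Code:
# show the drive's current error-recovery timers, if it supports SCT ERC
smartctl -l scterc /dev/sda

# try to cap read/write recovery at 7 seconds (values are in 0.1s units);
# non-RAID GreenPower firmware may refuse this
smartctl -l scterc,70,70 /dev/sda

# or give the SCSI/libata layer more patience instead (value in seconds)
echo 180 > /sys/block/sda/device/timeout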
I'm planning on buying a NAS from my provider to use as a backend for my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved between different nodes without interruption.
The server I have looked at so far is the following:
The budget is pretty tight, so if it's possible to do this with SATA drives it would be great; otherwise it could be a possibility to go down in disk space and switch the SATA drives for SCSI/SAS drives.
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card with an existing 6-drive RAID 5 array on it, and you then add, say, 18 more HDDs and make them into another RAID 5 array? Does the performance of the first array stay the same or decrease?
The reason you would have different RAID arrays on one card is that if you buy an 8U case as an investment, you avoid buying smaller cases and spending money on a new motherboard/CPU/RAM for each system; you just add hard drives whenever you can and RAID them.
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 GB of RAM, 2x 250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump up the RAM to 2 GB for, I think, $12 more, which I'm considering and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?) Better ideas in general on the server?
Currently I am building a budget server, and the one thing I didn't buy is a RAID card. How much of a performance hit do you think I will take with software RAID? The server specs are:
AMD Athlon 64 X2 4800+ (AM2), 2GB of DDR2-800 RAM, 2x 160GB Western Digital hard drives (16MB cache)
I plan on running software RAID 1 unless there is a good reason not to. The whole server cost was $500.
I am currently in the process of upgrading my web/mysql server due to heavy loads and io waits and have some questions. I am trying to be cost efficient but at the same time do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a Centos Linux box, running both apache and mysql. The current usage on the box is:
MySQL stats:
50 MySQL queries per second, with a read-to-write ratio of 2:1. Reads are about 65 MB per hour and writes are around 32 MB per hour.
Apache stats:
35 requests per sec
The two issues that I am unsure of are:
- Whether I should go with RAID-1 or RAID-5
- Whether I should use SATA Raptor drives or SAS drives.
In either configuration I will use a dedicated Raid controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though reads are definitely more important based on my stats, I don't want to lose performance on my writes either. With this in mind, I looked into doing RAID-1 instead.
I came up with these choices:
- Raid-1 - 2 x Seagate ST373455SS Seagate Cheetah 15K.5 SAS. HDs & controller costs are $940.
- Raid-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs & controller costs are $652.
- Raid-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB. HDs & controller costs are $869.
- Raid-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs & controller costs are $631.
As you can see we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a 4th hotspare drive, but will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the 4th immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can enable database applications to perform better due to lots of small, random reads and writes?
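One back-of-the-envelope calculation, using your own numbers as a rough sketch: 50 queries/s at a 2:1 read/write ratio is roughly 17 writes/s. A small random write on RAID-5 typically costs about 4 back-end I/Os (read old data, read old parity, write both back), versus 2 on RAID-1, so that is on the order of 68 back-end IOPS for RAID-5 against 34 for RAID-1. A single 10K or 15K drive can usually sustain well over 100 random IOPS, so at this load either layout has headroom, and the choice is more about growth and rebuild behaviour than raw write speed.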
I took two hard drives out of a Windows 2003 server and imported them as foreign disks in my PC.
The problem is, when I imported them as foreign disks, Windows XP decided to mark every partition on both disks as failed, even though they haven't failed.
The problem now is that I can't map to a drive. I need to do this so I can take an NT backup of the data on the drive, then restore that data to a new drive.
If you want a quick run down as to WHY I want to do this, read here
Basically, my ISP could not get my server running stably on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime... So here is my proposal:
Please note: this will have to be accomplished on a live system (full backups!) over SSH, as I don't trust my ISP to do things right, as described in my post above.
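(The steps that create /dev/vg0 are assumed rather than shown in this post; a typical sequence, with /dev/sdb and /dev/sdc standing in as placeholders for whichever disks actually make up the new array, and with placeholder sizes, would look roughly like this:)
Code:
# build the mirror from the new disks (partitions already set to type fd)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# put LVM on top and create the volumes the mkfs step below expects
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200M -n lvboot vg0
lvcreate -L 10G -n lvroot vg0
lvcreate -L 2G -n lvtmp vg0
lvcreate -L 20G -n lvhome vg0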
mkfs -t ext3 -m 1 /dev/vg0/lvboot
mkfs -t ext3 -m 1 /dev/vg0/lvroot
mkfs -t ext3 -m 1 /dev/vg0/lvtmp
mkfs -t ext3 -m 1 /dev/vg0/lvhome
Now, I'd like to 'init 1' at this stage but I can't, so I won't (possible solutions?? Possible to umount the / partition??)
Assuming I'd have to do this on a fully live system, I'd disable all services that I can
Code:
/etc/init.d/sendmail stop
/etc/init.d/postfix stop
/etc/init.d/saslauthd stop
/etc/init.d/httpd stop
/etc/init.d/mysql stop
/etc/init.d/courier-authlib stop
/etc/init.d/courier-imap stop
/etc/init.d/amavisd stop
/etc/init.d/clamd stop
/etc/init.d/pure-ftpd stop
/etc/init.d/fail2ban stop
/etc/init.d/syslogd stop
Then we copy all of our data from the single partitions to the RAID disks.
Code:
mount /dev/vg0/lvroot /mnt/newroot
mkdir -p /mnt/newroot/boot /mnt/newroot/tmp /mnt/newroot/home
mount /dev/vg0/lvboot /mnt/newroot/boot
mount /dev/vg0/lvtmp /mnt/newroot/tmp
mount /dev/vg0/lvhome /mnt/newroot/home
(I think I covered everything)
Code:
umount -l /dev/sda1   # /boot
umount -l /dev/sda3   # /home
cp -dpRx /* /mnt/newroot/
mount /dev/sda1 /boot
cp -dpRx /boot/* /mnt/newroot/boot/
mount /dev/sda3 /home
cp -dpRx /home/* /mnt/newroot/home/
Once we have everything copied, update /etc/fstab and /etc/mtab to reflect the changes we made: vi /etc/fstab
Then add an entry to /boot/grub/grub.conf:
Code:
title CentOS (2.6.18-164.el5)
    root (hd3,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/sda2
    initrd /initrd-2.6.18-164.el5.img
Where (hd3,0) is /dev/sdc. If the system fails to boot to the RAID then it'll auto-boot to the single disk (/dev/sda).
Then update my ramdisk:
Code:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_bak
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
And now to set up grub...
Code:
grub
> root (hd0,0)
> setup (hd0)
We should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> root (hd3,0)
> setup (hd3)
Again, we should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> quit
From here I think we're ready to reboot; I can't see where I missed anything. If all goes well then I should see my volume groups listed in 'df -h'.
I'm running into a problem with a relatively new (2 months old) server. I have just a few accounts on it, and I'm already noticing unusual loads, for the... load. After some benchmarking with bonnie++ (and plain old "dd") there is clearly a problem.
Isn't a write speed over 7 MB/s reasonable to expect? Also look at the low CPU times...
Anyway, running the same test on a similar but older AND busier server showed much better results than this. In fact, dd'ing a 1GB file from /dev/zero "finished" in about 10 seconds but then pegged the server at 99% iowait (wa) for a full three minutes (until it was done being written from cache, I assume), bringing the load to 15.00.
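A side note on the dd test itself: the 10-second "finish" followed by minutes of iowait is the page cache absorbing the write, so the MB/s figure dd prints is mostly measuring RAM. Something along these lines (standard GNU coreutils flags, though oflag=direct needs a reasonably recent dd) forces the data to disk so the number reflects the array:
Code:
# include the final flush in the timing
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync

# or bypass the page cache entirely
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct

rm -f /tmp/ddtest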
That's all the info I have so far... the data center just replaced the card (which gave no errors) with no effect. Running these benchmark tests is about the extent of my hardware experience.