Today we are going to take a detailed look at the RAIDability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. A detailed performance analysis and some useful hints on building RAID arrays can be found in our new article.
I am currently in the process of upgrading my web/MySQL server due to heavy loads and I/O waits, and I have some questions. I am trying to be cost-efficient, but at the same time I do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a CentOS Linux box running both Apache and MySQL. The current usage on the box is:
MySQL stats:
- 50 queries per second, with a read-to-write ratio of 2:1
- Reads are about 65 MB per hour; writes are around 32 MB per hour
Apache stats:
- 35 requests per second
The two issues that I am unsure of are:
- Whether I should go with RAID-1 or RAID-5
- Whether I should use SATA Raptor drives or SAS drives
In either configuration I will use a dedicated RAID controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though reads are definitely more important based on my stats, I don't want to lose performance on my writes either. With this in mind, I looked into doing RAID-1 instead.
I came up with these choices:
- RAID-1 - 2 x Seagate Cheetah 15K.5 ST373455SS SAS. HDs & controller cost $940.
- RAID-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs & controller cost $652.
- RAID-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB. HDs & controller cost $869.
- RAID-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs & controller cost $631.
As you can see we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a 4th hotspare drive, but will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the 4th immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can let database applications perform better because of their handling of lots of small, random reads and writes?
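For a rough sense of why RAID-5 hurts writes, here is a back-of-envelope shell calculation; the per-drive IOPS figure is an assumed ballpark for a 15K drive, not a measurement:

    # Random-IO estimate: RAID-1/10 costs 2 disk writes per logical write,
    # RAID-5 typically costs 4 (read data + read parity + write data + write parity).
    DRIVE_IOPS=175   # assumed random IOPS for one 15K SAS drive
    echo "2-drive RAID-1 : ~$(( DRIVE_IOPS * 2 / 2 )) write IOPS, ~$(( DRIVE_IOPS * 2 )) read IOPS"
    echo "3-drive RAID-5 : ~$(( DRIVE_IOPS * 3 / 4 )) write IOPS, ~$(( DRIVE_IOPS * 3 )) read IOPS"

With a 2:1 read-to-write mix at only 50 queries per second, either layout looks like it has plenty of headroom; the write-penalty difference mainly matters under much heavier random write load.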
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15K) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, so does it just keep the controller alive long enough to hold some hard drive information in its memory if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, like every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server; FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core 2 Duo E6600 (2.4 GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID at all, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me KVM-over-IP access and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread across all of the drives in the otherwise RAID-10 array?
What about swap? Should I create a 4-8GB RAID-1 swap set from a partition on each of the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
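For what it's worth, a minimal sketch of that kind of layout with mdadm might look like the following; the device names, partition numbers and sizes are assumptions, and this is only an outline rather than a tested recipe:

    # Assumed layout: 8 disks sda..sdh, each partitioned identically:
    #   partition 1 = ~200MB for /boot, partition 2 = swap, partition 3 = the rest.
    # /boot as RAID-1 across all members, with old 0.90 metadata so the legacy
    # GRUB in CentOS 4 sees each member as an ordinary ext3 filesystem.
    mdadm --create /dev/md0 --level=1 --raid-devices=8 --metadata=0.90 /dev/sd[a-h]1
    # Everything else as RAID-10.
    mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]3
    # Swap as a small RAID-1 set so a dead disk cannot take out swapped pages.
    mdadm --create /dev/md2 --level=1 --raid-devices=8 --metadata=0.90 /dev/sd[a-h]2
    mkfs.ext3 /dev/md0
    mkfs.ext3 /dev/md1
    mkswap /dev/md2 && swapon /dev/md2
    # Finally, install GRUB to the MBR of every member disk so the box can
    # still boot if the first drive dies (grub-install /dev/sda, /dev/sdb, ...).

Swapping to a file on the RAID-10 array would work too; the main argument for putting swap on an md device at all is that a failed disk then cannot take down processes that happen to have pages swapped onto it.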
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively (without having to create RAID-0 on top of RAID-1 pairs) as long as the support is in the kernel.
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted on my system and, in the event of a system failure, just pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am choosing between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
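In case it helps frame the question, the approach people usually describe is roughly the following; the device names are assumptions and this is only an outline of the idea, not a tested procedure:

    # Assumes the live system is on /dev/sda and an identical empty disk is /dev/sdb.
    # 1. Partition sdb to mirror sda, then build a degraded RAID-1 from sdb alone.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    # 2. Put a filesystem on the array, copy the running system onto it,
    #    then point /etc/fstab and the GRUB config at /dev/md0.
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt
    rsync -ax / /mnt/
    # 3. Reboot into the degraded array, repartition the original disk,
    #    and add it as the second member; md then rebuilds the mirror.
    mdadm /dev/md0 --add /dev/sda1

Doing this over a remote connection is exactly where it gets nerve-wracking, since a mistake in the bootloader step can leave the box unreachable, which is why people usually want KVM-over-IP or a rescue console available first.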
I've been talking to the Planet about trading in my four-and-a-half-year-old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1GB of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump up the RAM to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas on the server in general?
Are there any significant differences between 4 x 15K SAS HDs in RAID 10 versus 8 x 7.2K SATA II HDs in RAID 10? I have the same question for 2 x 15K SAS HDs in RAID 1 versus 4 x 7.2K SATA II HDs in RAID 10.
I have room for 4 more hard drives on my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can also use 4 drives and gives more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might actually perform worse as software RAID than it would on hardware, compared to RAID 5. Would RAID 5 with 4 drives be better in my case?
We are looking to build our first server and colocate it. It will be a higher investment than just renting a server, but it will be worth it in the long term. We have already decided we are going to support the hosting business for a minimum of 3 years, so we might as well invest in a server from the outset to benefit from lower data center charges and higher redundancy and performance.
We are currently looking at Supermicro for servers, as they offer 1U barebones systems with dual hot-swappable PSUs and up to 4 hot-swappable drives. This would be ideal for redundancy, and also for taking advantage of the speed and redundancy that a RAID 10 array would give you. These two factors combined are very appealing, as they would reduce the possibilities of downtime and data loss. Obviously we will be backing up daily, but it's good for peace of mind to know that you could potentially blow a PSU and 2 hard drives and your server would still be up long enough for a data centre technician to replace the parts.
Now then, my business partner and I are currently deciding what the best all-round hard drive configuration would be. He has decided that we should opt for SAS instead of SATA for its lower seek latency, which would give us better performance. I agree, though this does increase costs considerably.
He is then arguing that we use RAID 5 on cost grounds. He says we should only use 3 of the slots to begin with, save money on one drive by not having a spare, and hope we don't have a drive failure - which, by sod's law, will happen. I'm not happy with us cutting corners to save money, because if we gamble and lose, that's a hell of a mess we have ourselves in, and it will cost us a load more time, reputation and data center charges to get ourselves out of it.
I say we might as well go for RAID 10 for the extra performance and redundancy: you can potentially lose 2 drives, so long as they aren't from the same mirrored pair. With RAID 5 you can only lose one drive, it takes longer to rebuild onto a spare, and during the rebuild the performance takes a hit. Also, RAID 10 is much faster than RAID 5, and only at the cost of one extra drive.
Now the question we should be asking is... would a SATA2 RAID 10 array provide better performance than a SAS RAID 5 array?
So I think the choice we have to make is either go for RAID 5 and run with a hot spare, and stock a cold spare, or go with RAID 10 and stock 2 cold spares.
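For comparison, a quick back-of-envelope look at those two 4-bay options; the 146GB drive size is just an assumed figure for illustration:

    # Assumed 4-bay chassis populated with 146GB SAS drives.
    DRIVE_GB=146
    echo "RAID 5 (3 drives + hot spare): usable $(( 2 * DRIVE_GB ))GB, survives any single failure, spare rebuilds automatically"
    echo "RAID 10 (4 drives):            usable $(( 2 * DRIVE_GB ))GB, survives one failure always, and a second unless it hits the same mirrored pair"

Usable capacity comes out the same for these two particular layouts, so the decision really comes down to rebuild behaviour and write performance.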
We are considering going with Seagate drives because they are high performance and have 5-year warranties. I have had to RMA two Western Digital drives already in the past 12 months, a Raptor and a MyBook, and both failures involved data loss.
The server is going to be a Linux web, email, DNS and MySQL box. It will likely feature a single dual/quad-core processor and 4-8GB of unbuffered DDR2 RAM.
Question though on RAID choices... I'm considering getting 3 x 250GB SATA drives. Would it be better to make two of them a RAID-1 mirrored pair for my OS and home directories, and use the 3rd drive separately for backups, swap, and perhaps some logs... OR should I put all three drives into a RAID-5 set and treat it as a single logical drive?
My math says usable space would actually be identical, with about 465GB usable in either setup. RAID-1 would be faster for I/O with no parity overhead, but the third drive would not be redundant. On the other hand, RAID-5 would be fully redundant but have parity overhead for writes.
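That figure checks out if you count the standalone backup drive as usable space (using binary gigabytes, as the OS reports them):

    # A 250GB (decimal) drive shows up as roughly 232GiB.
    PER_DRIVE_GIB=232
    echo "RAID-1 pair + separate 3rd drive: $(( 2 * PER_DRIVE_GIB ))GiB usable"
    echo "3 x 250GB in RAID-5:              $(( 2 * PER_DRIVE_GIB ))GiB usable"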
I'm building a couple of VPS host servers for a client.
Each server has to host 20 VPSes, and each will have 4 cores and 32GB of RAM. So CPU and RAM should be just fine; my question now is the hard drives. The company owns the machines, but not the drives yet.
I searched a lot on your forums but found nothing relating to VPS hosting. I'm basically a DBA IRL, so I have experience with hard drives when it comes to databases, but it's completely different for VPSes.
According to my boss, each VPS will run a LAMP stack (having a separate DB cluster is out of the question for some reason).
First, RAID 1 is indeed a must. There is room for 2x 3.5" drives. I might be able to change the backplane to 4x 2.5", but I'm not sure...
I've come to several options:
- 2x SATA 7.2K => about $140
- 2x SATA 10K (VelociRaptor) => about $500
- 2x SAS 10K with PCIe controller => about $850
- 2x SAS 15K with PCIe controller => about $1000
They need at least 300GB storage.
But my problem is that the servers do not have SAS onboard so I need a controller and in my case the cheapest solution is best.
But I'm not sure that SATA 7.2K drives can handle the load of 20 full VPSes.
Is it worth going with SAS anyway, or should SATA be just fine? And if SATA, is it better to use plain old 7.2K drives or 10K drives?
That's a lot of text for not much: what is best for VPS hosting - SATA 7.2K, SATA 10K or SAS 10K?
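As a rough framing of the trade-off, dividing ballpark per-drive random-IOPS figures across 20 VPSes looks something like this; the numbers are assumptions based on spindle speed only and ignore the interface, cache and command-queueing differences where SAS usually pulls ahead:

    # Ballpark random-IOPS assumptions per drive: 7.2K ~80, 10K ~130, 15K ~180.
    # A 2-drive RAID 1 roughly doubles random reads (not writes).
    VPS=20
    for spec in "SATA-7.2K 80" "SATA-10K 130" "SAS-10K 130" "SAS-15K 180"; do
        set -- $spec
        echo "$1: roughly $(( 2 * $2 / VPS )) random read IOPS per VPS with a 2-drive RAID 1"
    done

That works out to single-digit IOPS per VPS on 7.2K drives, which is the number to weigh against whatever the LAMP workloads actually do.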
I have read about all of the things you have to do with an unmanaged server, and how beginners shouldn't even try. I am pretty smart though, I have a lot of experience with cPanel, and I am not worried about getting my feet wet.
This is the system I want:
Celeron 1.7 GHz, 1 GB RAM, 80 GB HD, 1500 GB bandwidth, cPanel/WHM, full root access
How much time would it take to keep the thing running? How do you monitor the server? How do you know when software updates and patches are available? Can all of the software needed be found for free? What kind of problems would I encounter, and would this be way over my head?
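To give a feel for the routine part of that question, day-to-day checking on a CentOS/cPanel box mostly comes down to a handful of standard commands; this is an illustrative sampling, not a complete checklist:

    # Quick daily health check on a CentOS box (all stock tools):
    uptime                         # load averages
    df -h                          # disk space
    free -m                        # memory and swap usage
    tail -n 50 /var/log/messages   # anything odd in the system log?
    # See which package updates and security patches are available:
    yum check-update
    # cPanel/WHM keeps itself current if its automatic update setting is enabled.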
I am currently using a 320GB SATA hard drive as my primary drive and don't use a second drive. I am running a pure download site, and in the top command I see 4.5% wa. Is that a bit high? Could I add a second hard disk and move some data there to reduce the I/O wait, or is there any other way to reduce it? The load on the server is otherwise fine.
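If it helps, the usual way to see whether that wait really is disk-bound, and which disk is responsible, is with the sysstat tools; a quick sketch, assuming the package is installed (yum install sysstat):

    # Show extended per-device stats every 5 seconds; sustained high %util
    # on the data disk suggests a second drive (or splitting reads) would help.
    iostat -x 5
    # vmstat's "wa" column reports the same iowait figure that top shows.
    vmstat 5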