Currently my home computer uses a WD 7200 RPM drive, and I'm thinking of upgrading to two 10k RPM drives in RAID 0. Here are the drives: newegg.com/Product/Product.asp?item=N82E16822116006 and this is the RAID card: newegg.com/Product/Product.asp?item=N82E16816118050. Then I was looking into cables for a SCSI drive, but I know nothing about them. My friend showed me these cables he found: provantage.com/cables-go-09476~7CBTE01N.htm but it says they're SCSI-3. Does this matter? What is SCSI-3, and can it be used with this RAID card and these drives? The cables I was looking at, newegg.com/Product/Product.asp?Item=N82E16812193019, are 30 bucks each; do I need to buy two of these for my RAID 0, or what? Any suggestions on the best SCSI cables for me and the best transfer rate? Links would be great too.
Hi, I have an urgent need to get this server up. I am trying to install 2x 147GB U320 drives on a Tyan S5372 board with the Adaptec AIC-7901x SCSI controller module. I have set up RAID 1 so far and updated the BIOS to the latest version as well. For some reason, even when I supply the additional device drivers for the Adaptec card during setup, Win2k3 still doesn't recognize the drives.
I don't know what to do now and time is running out. I have tried over and over again with different disks, thinking it could be a bad disk, but that is not the case. I hooked up a SATA drive to this server and Win2k3 installed fine.
Planning to buy a server from SoftLayer: adding a single 300GB 15k SCSI drive costs $100/month, while adding 4x 250GB SATA drives in RAID 10 costs $90/month.
Today we are going to take our detailed study of the RAIDability of contemporary 400GB hard drives to a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays can be found in our new article.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it - what does that achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, so does it just keep the controller alive long enough to preserve the drive data held in its memory if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as any other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS saying the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat... scared.
I have read a lot of the information about software RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives other than the 250GB ones, so I am limited to those.
The preferred software RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID 1 (or no RAID at all), plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so I'll have a functional system that then needs to be converted to RAID 10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID 5/10 array, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be a small (100-200MB) RAID 1 partition mirrored across all of the drives that otherwise form the RAID 10 array?
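For reference, this is roughly the layout I have in mind, as a sketch, assuming 8 data disks /dev/sda through /dev/sdh plus /dev/sdi as the hot spare, each with a small partition 1 for /boot and a big partition 2 for the main array (the device names and sizes are my own assumptions, not anything the host has confirmed):

[code]
# /boot as a small RAID 1 mirrored across all the data disks, so GRUB can
# read the kernel from any single drive
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# everything else as RAID 10 over the big partitions, with the 9th disk
# attached as a hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 \
    /dev/sd[a-h]2 /dev/sdi2

mkfs.ext3 /dev/md0      # /boot
mkfs.ext3 /dev/md1      # /

# install GRUB into the MBR of every disk so the box still boots if sda dies
for d in a b c d e f g h; do grub-install /dev/sd$d; done
[/code]

The idea being that any surviving disk still has a bootable copy of /boot and an MBR with GRUB on it.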
What about swap? Should I create a 4-8GB RAID 1 swap partition across the disks (I plan to upgrade the server to 4GB of RAM in the near future), or just swap to a file on the main RAID 10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID 10 array a bad idea performance-wise?
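If I go the dedicated swap route, I imagine it would look something like this sketch (using partition 3 on just two of the disks is my assumption; it could equally be spread over more of them or live on the RAID 10):

[code]
# small RAID 1 just for swap, so a single dead disk can't take out
# pages that running processes have swapped onto it
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2

# /etc/fstab entry:
# /dev/md2   swap   swap   defaults   0 0
[/code]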
Is it possible to grow a RAID 10 array in a way similar to growing a RAID 5 array with mdadm (using two extra drives instead of one, of course)? mdadm's documentation doesn't actually even mention RAID 10, even though it does support it natively (without having to create RAID 0 on top of RAID 1 pairs) as long as the support is in the kernel, from what I know.
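To be concrete about what I mean by "growing", this is the usual sequence for a RAID 5 array (the array and device names here are hypothetical); what I can't tell is whether the same --add / --grow path is accepted for a RAID 10 array on the kernel and mdadm versions CentOS ships, or whether md simply refuses to reshape it:

[code]
# /dev/md3 is a hypothetical RAID 5 array being grown from 4 to 5 disks:
# add the new disk as a spare, then ask md to reshape onto it
mdadm --add /dev/md3 /dev/sdj2
mdadm --grow /dev/md3 --raid-devices=5

# watch the reshape progress
cat /proc/mdstat

# once it finishes, grow the filesystem to fill the bigger array
# (resize2fs for ext3 here; older CentOS 4 boxes may need ext2online instead)
resize2fs /dev/md3
[/code]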
How often do RAID arrays break? Is it worth having RAID in case a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted to my system and, in the event of a failure, just pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am deciding between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
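From what I've read, the usual approach seems to be the "degraded mirror" trick: build the RAID 1 with only the new disk, copy the running system over, switch boot to the array, then add the original disk. Here is a sketch of how I understand it, assuming the live system is on /dev/sda and the new empty disk is /dev/sdb (the device names and single root partition are my assumptions):

[code]
# 1. copy sda's partition table onto the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb

# 2. create the mirror in degraded mode, with the old disk "missing" for now
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 3. filesystem + copy of the live root onto the array
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -ax / /mnt/

# 4. point /mnt/etc/fstab and grub.conf at /dev/md0, install GRUB on both
#    disks, then reboot into the array
# 5. finally pull the original partition into the mirror and let it resync
mdadm --add /dev/md0 /dev/sda1
cat /proc/mdstat
[/code]

Step 4 is the part that worries me on a remote box, since one wrong fstab or grub.conf line and it doesn't come back up on its own.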
Any recommendations for SCSI 10k or 15k? A Core2Duo would be nice as well, and ~4GB of RAM. I don't need a lot of HDD space or bandwidth. I'm also open to "hybrid" servers.
I've been talking to The Planet about trading in my four-and-a-half-year-old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1GB of RAM, 2x 250GB hard disks, RHEL 5, cPanel + Fantastico, and 10 IPs for $162.
Not too bad. I could bump the RAM up to 2GB for, I think, $12 more, which I'm considering and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but The Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that even possible?). Any better ideas on the server in general?
According to the documentation, Hyper-V VMs cannot boot from SCSI drives and require an IDE drive for each virtual machine. I'm new to Windows (Server 2008) and Hyper-V and am planning out some hardware.
Does anyone know if it is possible to:
Set up the server with 2 SATA drives (RAID 1), along with 8x Ultra320 SCSI drives (RAID 5 or 6).
Load the OS and set up all the virtual slices on the SATA drives, so that the virtual boot sectors are on the IDE drives but the main bulk of the clients' allotted space is on the SCSIs? Are there any issues with that, and if so, how do you manage it?
I currently have a Dell PowerEdge 2650 from a few years back; it is running...
2x Xeon 2.4GHz 512K, 3GB DDR266 RAM, 1x 73GB SCSI
Back in the day this system cost $2000; now it's not worth anywhere close to that.
So my plan was to repurpose this bad boy as an SQL server, seeing as it has the SCSI backplane and 3GB of RAM, and SQL usually doesn't need as much CPU as a web server.
Now my question: would it be better to use this server, or to build a cheap Core 2 Duo with a RAID 0 array of a few SATA drives?
Before you start going off about RAID 0: it doesn't matter to me, because I am using clustering/failover, so no data will be lost and there will be no downtime if the array fails.
Basically, what I want to know is whether it's worth keeping this server and building upon it, or whether it would be better to sell it and spend an extra few hundred to build a new system with SATA RAID.
I'm going by price/performance rather than reliability since, as I said, I am using failover.
These drives are to work in an HP ProLiant DL360/380. All I know is that they have SCSI U320 drive bays, or that is the type of drive they take. Can anyone provide any insight on what might work? We are trying to find a more cost-effective way to get more storage into a server. The largest SCSI drive I can find is 300GB for $200, while you can get 2TB drives for that much these days.
Is it really worth the money nowadays to put in SCSI or SAS instead of SATA II (single disk, non-RAID here), IF reliability is the only concern (i.e. NOT I/O performance) over the usual 3-year lifetime of a server?
Actually, I have been pretty amazed by SATA reliability: in the past 3 years the only HDD failures were two SATA drives on a mismatched motherboard that didn't support SATA II (lots of read/write errors; they eventually died). Meanwhile, we have had 0% SCSI and SAS failures.