250 GB SATA With RAID-10 Or 300GB 15k Single SCSI Drive
Dec 11, 2007
Planning to buy a server from SoftLayer: adding a single 300GB 15k SCSI drive costs $100/month, while adding four 250GB SATA drives in RAID-10 costs $90/month.
Regarding the hard drives, there are two options: the first is four 7200RPM SATA drives in RAID 10, the second is two 10,000RPM SATA drives in RAID 1. In terms of performance, which one will be better?
I currently have a Dell PowerEdge 2650 from a few years back. It is running...
2x Xeon 2.4GHz (512K cache), 3GB DDR266 RAM, 1x 73GB SCSI
Back in the day this system cost $2,000; now it's not worth close to that.
So my plan was to repurpose this bad boy as an SQL server, seeing as it has the SCSI backplane and 3GB of RAM, and SQL usually doesn't need as much CPU as a web server.
Now my question: would it be better to use this server, or to build a cheap Core 2 Duo with a RAID 0 array of a few SATA drives?
Before you start going off on RAID 0, it doesn't matter to me because I am using clustering/failover, so no data will be lost and there will be no downtime if the array fails.
Basically what I want to know is whether it's worth keeping this server and building upon it, or whether it would be better to sell it and spend an extra few hundred to build a new system with SATA RAID.
I'm going by price/performance rather than reliability, since, once again, I am using failover.
I'm trying to find drives that will work in an HP ProLiant DL360/380. All I know is that they have U320 SCSI drive bays, or at least that is the type of drive they take. Can anyone provide any insight on what may work? We are trying to find a more cost-effective way to get more storage into a server. The largest SCSI drive I can find is 300GB for $200, and you can get 2TB drives for that much these days.
Is it really worth the money nowadays to put in SCSI or SAS instead of SATA II (single disk, non-RAID here), IF reliability is the only concern (i.e. NOT I/O performance) during the usual 3-year lifetime of a server?
Actually, I was pretty amazed by the SATA reliability: in the past 3 years the only HDD failures were two SATA drives on a mismatched motherboard that didn't support SATA II (lots of read/write errors; they eventually died). That said, we have had 0% SCSI and SAS failures.
Would having a 15k RPM SCSI drive (vs. 7200RPM SATA) provide a significant improvement for a web server only running PHP scripts (the scripts are small in size; they just make DB calls to another server and return the results)? What if eAccelerator were installed?
I'm currently in the process of ordering a new server and would like to throw another $50-$70 at the default 250GB 7.2k SATA II to increase performance. The server will host a site similar to WHT (PHP, MySQL, and some forum traffic).
There are three options I can get for the price:
1. Add another 250GB 7.2k SATA II and set up RAID 1.
2. Add a 73GB 15k RPM SA-SCSI and put MySQL on it. No RAID.
3. Toss out the 7.2k SATA II and take two 150GB 10k SATA drives instead. Put MySQL on one of them. No RAID.
Please keep in mind that the question is budget-related (I know I can get more if I spend an extra $200, but that's not what I want). Which of the above will make me happiest?
Currently my home computer is using a WD 7200RPM drive, and I'm thinking of upgrading to two 10k RPM drives in RAID 0. Here are the drives: newegg.com/Product/Product.asp?item=N82E16822116006 and this is the RAID card: newegg.com/Product/Product.asp?item=N82E16816118050. Then I was looking into cables for a SCSI drive, but I know nothing about them. My friend showed me these cables he found: provantage.com/cables-go-09476~7CBTE01N.htm, but it says they're SCSI-3. Does this matter? What is SCSI-3, and can it be used with these RAID cards and drives? The cables I was looking at, newegg.com/Product/Product.asp?Item=N82E16812193019, are 30 bucks each; do I need to buy 2 of these for my RAID 0, or what? Any suggestions on the best SCSI cables for me and the best transfer rate? Links would be great too.
We have a powerful server for our databases (8 cores, 4GB RAM, etc.) because we have a huge amount of MySQL data. We store the data on a standard 500GB SATA II drive; would we notice a dramatic performance improvement if we stored it on a 10k/15k SA-SCSI drive?
Hi, I have an urgent need to get this server up. I am trying to install 2x 147GB U320 drives on a Tyan S5372 board with the Adaptec AIC-7901x SCSI controller module. I have set up RAID 1 so far and updated the BIOS to the latest version as well. For some reason, even when I supply the additional device drivers for the Adaptec card during setup, Windows 2003 still doesn't recognize the drives.
I don't know what to do now and time is running out. I have tried over and over again with different disks, thinking it could be a bad disk, but that is not the case. I hooked up a SATA drive to this server and Windows 2003 installed fine.
We have had some old Dell 745N boxes with SATA drives in them in the past. Those are the only times we have ever used SATA. The performance was terrible, and over several years we replaced the SATA drives more times than we ever have with SAS/SCSI drives.
We are looking to get some new disk backup boxes, which we plan to fill with 600GB SAS drives, but we might consider 1TB nearline SATA from Dell.
I would like to hear from anyone using nearline SATA and get feedback on performance and reliability overall. Also, if you are using it for backups, how many backup jobs are you able to run at the same time before performance drops?
I just got a new server with two SATA drives in it (no hardware raid). Both drives work fine under Linux, BUT I can boot only from the first disk.
The system is Debian Stable, boot loader is GRUB. I've got serial console access, so after power on I can see GRUB menu and escape to the shell. Then:
Code:
grub> root (hd1,1)
Error 21: Selected disk does not exist
Even when I type "root (hd" and press TAB, GRUB auto-completes the command to "root (hd0,". It doesn't see the second drive!
When Linux has started and I run the grub shell inside the OS, it does see (hd1).
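One thing that might be worth trying (just a sketch; it assumes the second drive is /dev/sdb and that its second partition holds /boot, matching the (hd1,1) above): since the grub shell inside the OS does see both disks, you can remap the second drive and install GRUB onto its MBR from there, so the machine can boot from it even though the boot-time GRUB only sees one disk.
Code:
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,1)
grub> setup (hd0)
grub> quit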
I have a dedicated server with an "Intel RAID Controller: Intel(R) 82801ER SATA RAID Controller", and I cannot find information on this RAID. The 80GB hard disk is about 4 years old; if one hard disk fails, I wonder if I can swap in a new one of bigger capacity and it will auto-rebuild?
I'm running into a problem with a relatively new (2 months old) server. I have just a few accounts on it, and I'm already noticing unusual loads, for the... load. After some benchmarking with bonnie++ (and plain old "dd") there is clearly a problem.
Isn't a write-speed over 7MB/s reasonable to expect? Also look at the low CPU times...
Anyway, running the same test on a similar but older AND busier server showed much better results than this. In fact, dd'ing a 1GB file from /dev/zero "finished" in about 10 seconds, but then pegged the server at 99% iowait (wa) for a full three minutes (until the cache finished being written out, I assume), bringing the load to 15.00.
That's all the info I have so far... the data center just replaced the card (which gave no errors) with no effect. Running these benchmark tests is about the extent of my hardware experience.
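For what it's worth, the raw dd numbers can be misleading because the write lands in the page cache first, which is exactly the 10-seconds-then-three-minutes-of-iowait pattern described above. A quick sketch of a test that forces the data to disk before dd reports its speed (assuming there's room for a 1GB temp file):
Code:
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm -f /tmp/ddtest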
I've recently put together a server in a hurry and overlooked an important aspect: data integrity after power loss. I'm using Linux software RAID-1 with two 150GB WD Raptors, but I'm worried that data could be lost due to having write-back cache enabled without a battery backup unit. I would rather not disable the write-back cache, for performance reasons.
What is the cheapest way to get a battery backup solution for Linux software RAID? Do I have to use a hardware RAID card or do standalone battery backup units exist that can use existing motherboard SATA ports?
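As far as I know there is no standalone battery unit that protects the drives' own write-back caches on plain motherboard SATA ports; that is normally the job of a BBU on a hardware RAID card (or a UPS for the whole box). What you can do with software RAID is check the on-disk write cache and, if you accept the performance hit, turn it off. A minimal sketch, assuming the two Raptors show up as /dev/sda and /dev/sdb:
Code:
hdparm -W /dev/sda /dev/sdb     # show whether the on-disk write cache is enabled
hdparm -W0 /dev/sda /dev/sdb    # disable it (safer after power loss, slower writes)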
If you want a quick rundown as to WHY I want to do this, read here.
Basically, my ISP could not get my server running stable on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime... So here is my proposal:
Please note: this will have to be accomplished on a live system (full backups!) over SSH, as I don't trust my ISP to do things right, as described in my post above.
Code:
mkfs -t ext3 -m 1 /dev/vg0/lvboot
mkfs -t ext3 -m 1 /dev/vg0/lvroot
mkfs -t ext3 -m 1 /dev/vg0/lvtmp
mkfs -t ext3 -m 1 /dev/vg0/lvhome
Now, I'd like to 'init 1' at this stage but I can't, so I won't (possible solutions?? Possible to umount the / partition??)
Assuming I'd have to do this on a fully live system, I'd disable all services that I can
Code:
/etc/init.d/sendmail stop
/etc/init.d/postfix stop
/etc/init.d/saslauthd stop
/etc/init.d/httpd stop
/etc/init.d/mysql stop
/etc/init.d/courier-authlib stop
/etc/init.d/courier-imap stop
/etc/init.d/amavisd stop
/etc/init.d/clamd stop
/etc/init.d/pure-ftpd stop
/etc/init.d/fail2ban stop
/etc/init.d/syslogd stop
Then we copy all of our data from the single-disk partitions to the RAID disks:
Code:
mkdir -p /mnt/newroot
mount /dev/vg0/lvroot /mnt/newroot
mkdir -p /mnt/newroot/boot /mnt/newroot/tmp /mnt/newroot/home
mount /dev/vg0/lvboot /mnt/newroot/boot
mount /dev/vg0/lvtmp /mnt/newroot/tmp
mount /dev/vg0/lvhome /mnt/newroot/home
(I think I covered everything; note that lvroot is mounted on /mnt/newroot itself so the root copy below actually lands on it)
Code:
umount -l /dev/sda1    # /boot
umount -l /dev/sda3    # /home
cp -dpRx /* /mnt/newroot/
mount /dev/sda1 /boot
cp -dpRx /boot/* /mnt/newroot/boot/
mount /dev/sda3 /home
cp -dpRx /home/* /mnt/newroot/home/
Once we have everything copied, update /etc/fstab and /etc/mtab to reflect the changes we made: vi /etc/fstab
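For reference, the updated /etc/fstab entries might end up looking something like this (just a sketch using the LV names above and ext3; adjust to taste):
Code:
/dev/vg0/lvroot   /       ext3    defaults        1 1
/dev/vg0/lvboot   /boot   ext3    defaults        1 2
/dev/vg0/lvtmp    /tmp    ext3    defaults        1 2
/dev/vg0/lvhome   /home   ext3    defaults        1 2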
Code:
title CentOS (2.6.18-164.el5)
    root (hd3,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/sda2
    initrd /initrd-2.6.18-164.el5.img
Where (hd3,0) is /dev/sdc. If the system fails to boot from the RAID then it'll auto-boot from the single disk (/dev/sda).
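If the automatic fall-back is the goal, GRUB legacy has a fallback directive for exactly that: list the RAID entry first and the single-disk entry second, and GRUB will try the second entry when the first one fails. A sketch of what grub.conf could look like (the root= value for the RAID entry is an assumption; it would need to point at wherever the new root filesystem actually lives, e.g. /dev/vg0/lvroot). Note that fallback only helps if GRUB itself cannot load the first entry (missing disk, missing kernel); it won't catch a kernel that loads and then panics.
Code:
default 0
fallback 1

title CentOS on RAID (2.6.18-164.el5)
    root (hd3,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/vg0/lvroot
    initrd /initrd-2.6.18-164.el5.img

title CentOS single disk (2.6.18-164.el5)
    root (hd0,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/sda2
    initrd /initrd-2.6.18-164.el5.img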
Then update my ramdisk:
Code:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_bak
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
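One thing to double-check after regenerating: the new initrd needs the RAID/LVM modules so the kernel can find the new root at boot. On CentOS 5 mkinitrd usually picks them up on its own, but if in doubt they can be forced in; a sketch (the module names below are an assumption for md RAID 1 plus device-mapper):
Code:
mkinitrd --preload=raid1 --preload=dm-mod /boot/initrd-`uname -r`.img `uname -r`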
And now to set up grub...
Code:
grub> root (hd0,0)
grub> setup (hd0)
We should see something like this:
Code:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded. succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
grub> root (hd3,0)
grub> setup (hd3)
Again, we should see something like this:
Code:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd3)"... 15 sectors are embedded. succeeded
Running "install /grub/stage1 (hd3) (hd3)1+15 p (hd3,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
grub> quit
From here I think we're ready to reboot; I can't see where I missed anything. If all goes well, I should see my volume groups listed in 'df -h'.
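After the reboot, a quick sanity check along these lines should confirm everything came up where expected (assuming md software RAID sits underneath the volume group):
Code:
df -h               # the LVM volumes should be mounted on /, /boot, /tmp and /home
cat /proc/mdstat    # arrays should show as active (only applies if this is md RAID)
vgs
lvs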
We are limited to a maximum of 2 drives per server, with a maximum size of 750GB per drive.
We are thinking of going with two 500GB hard drives. However, the question is: should we mirror the secondary drive with RAID 1 and let our VPS clients worry about their own backups, or should we instead just use the secondary drive as a backup drive and back up each VPS nightly?
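If you go the backup-drive route, the nightly copy itself can be as simple as a cron'd rsync. A minimal sketch, assuming the second drive is mounted at /backup and the VPS data lives under /vz (both paths are hypothetical; adjust for your virtualization setup):
Code:
#!/bin/sh
# /etc/cron.daily/vps-backup (hypothetical path)
rsync -a --delete /vz/ /backup/vz/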