Planning to buy a server from SoftLayer: adding a single 300GB 15K SCSI drive costs $100/month, while adding 4 x 250GB SATA drives with RAID-10 costs $90/month.
I am currently in the process of upgrading my web/MySQL server due to heavy loads and I/O waits and have some questions. I am trying to be cost efficient, but at the same time I do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a CentOS Linux box, running both Apache and MySQL. The current usage on the box is:
MySQL stats:
50 MySQL queries per second, with a read-to-write ratio of 2:1. Reads are about 65 MB per hour and writes are around 32 MB per hour.
Apache stats:
35 requests per sec
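For reference, the kind of commands I used to sample these numbers (a rough sketch only; it assumes the sysstat package is installed and a root MySQL login):
Code:
# queries/sec and the read/write mix, sampled over 60 seconds
mysqladmin -u root -p -r -i 60 extended-status | grep -E 'Questions|Com_select|Com_insert|Com_update|Com_delete'

# I/O wait and per-disk utilisation, refreshed every 5 seconds
iostat -x 5
vmstat 5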
The two issues that I am unsure of are:
- Whether I should go with RAID-1 or RAID-5
- Whether I should use SATA Raptor drives or SAS drives.
In either configuration I will use a dedicated RAID controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though reads are definitely more important based on my stats, I don't want to lose write performance either. With this in mind, I looked into doing RAID-1 instead.
I came up with these choices:
- RAID-1 - 2 x Seagate ST373455SS Cheetah 15K.5 SAS. HDs and controller cost $940.
- RAID-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs and controller cost $652.
- RAID-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB. HDs and controller cost $869.
- RAID-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs and controller cost $631.
As you can see we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a 4th hotspare drive, but will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the 4th immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can enable database applications to perform better due to a lot of small, random reads and writes?
I recently built a server with an Asus M2N-MX SE motherboard in a SuperMicro 14" mini 1U chassis. The back of the Asus M2N-MX SE manual says that for the RAID driver, I need to create it from the included CD and use a floppy disk. My question is: how can I do it without a floppy disk? I have an external DVD burner that I hook up to USB to install the OS. Is it possible to use a CD to install the driver when I press F6 during the Windows 2003 installation?
Is it worth the effort to set up RAID 1? I have two Maxtor 500GB SATA disks, and using RAID 1 seems to cost me one disk, leaving me with 500GB of usable space. Also, is the onboard Nvidia RAID trustworthy? The manual says that due to a chipset limitation, the SATA ports supported by the Nvidia chipset don't support serial optical disc drives (Serial ODD).
About the hard drives, there are two options: the first is four 7200 RPM SATA drives in RAID 10, the second is two 10,000 RPM SATA drives in RAID 1. Performance-wise, which one will be better?
I'm currently in the process of ordering a new server and would like to throw another $50-$70 at the default SATA II 7k 250 GB drive to increase performance. The server will host a site similar to WHT (PHP, MySQL, and some forum traffic).
There are three options I can get for the price:
1. Add another SATA II 7k 250 GB and set up RAID 1.
2. Add a 73GB 15k RPM SA-SCSI and put MySQL on it. No RAID.
3. Toss out the SATA II 7k and take two SATA 10k 150 GB instead. Put MySQL on one of them. No RAID.
Please keep in mind that the question is budget-related (I know I can get more if I spend an extra $200 but that's not what I want ). Which of the above will make me happiest?
We have had some old Dell 745N boxes with SATA drives in them in the past; those are the only times we have ever used SATA. The performance was terrible, and over several years we replaced the SATA drives more times than we ever have with SAS/SCSI drives.
We are looking to get some new disk backup boxes, for which we plan to use 600GB SAS drives, but we might also consider 1TB nearline SATA from Dell.
I would like to hear from anyone using nearline SATA and get feedback on performance and reliability overall. Also, if you are using them for backups, how many backup jobs can you run at the same time before performance drops?
I just got a new server with two SATA drives in it (no hardware raid). Both drives work fine under Linux, BUT I can boot only from the first disk.
The system is Debian Stable, boot loader is GRUB. I've got serial console access, so after power on I can see GRUB menu and escape to the shell. Then:
Code:
grub> root (hd1,1)
Error 21: Selected disk does not exist
Even when I type "root (hd" and press TAB, GRUB auto-completes the command to "root (hd0,". It doesn't see the second drive!
When Linux has started and I run the grub shell inside the OS, it does see (hd1).
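Since GRUB inside the OS does see (hd1), one thing worth trying (just a sketch; it assumes the second drive is /dev/sdb and that its second partition holds /boot, as in my boot-time attempt) is to install GRUB onto the second drive's MBR from the running system, so it is bootable on its own:
Code:
grub
grub> device (hd1) /dev/sdb
grub> root (hd1,1)
grub> setup (hd1)
grub> quit
If the boot-time GRUB still can't see (hd1) after that, the BIOS itself is probably not presenting the second disk at boot, which no GRUB command will fix.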
I have a dedicated server with an "Intel RAID Controller: Intel(R) 82801ER SATA RAID Controller", and I cannot find much information on this RAID. The 80 GB hard disks are about 4 years old. If one hard disk fails, I wonder whether I can swap in a new one with a bigger capacity and whether it will rebuild automatically?
I'm running into a problem with a relatively new (2 months old) server. I have just a few accounts on it, and I'm already noticing unusual loads, for the... load. After some benchmarking with bonnie++ (and plain old "dd") there is clearly a problem.
Isn't a write-speed over 7MB/s reasonable to expect? Also look at the low CPU times...
Anyway, running the same test on a similar but older AND busier server showed much better results than this. In fact, dd'ing a 1GB file from /dev/zero "finished" in about 10 seconds but then pegged the server at 99% iowait (wa) for a full three minutes (until it was done being written from cache, I assume), bringing the load to 15.00.
That's all the info I have so far... the data center just replaced the card (which gave no errors) with no effect. Running these benchmark tests is about the extent of my hardware experience.
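A more honest version of that dd test (just a sketch, assuming GNU dd) forces the data to actually reach the disk instead of "finishing" in cache:
Code:
# write 1GB and wait for it to hit the platters before reporting a speed
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
rm /tmp/ddtest

# watch iowait and per-disk service times in another terminal
iostat -x 5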
I've recently put together a server in a hurry and overlooked an important aspect - data integrity after power loss. I'm using Linux software RAID-1 with two 150GB WD Raptors but I'm worried that data could be lost due to having write-back cache enabled without a battery backup unit. I would rather not disable the write-back cache for performance reasons.
What is the cheapest way to get a battery backup solution for Linux software RAID? Do I have to use a hardware RAID card or do standalone battery backup units exist that can use existing motherboard SATA ports?
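In case it matters, what I'm talking about toggling is the on-drive write cache; a sketch of checking and disabling it with hdparm (assuming the Raptors show up as /dev/sda and /dev/sdb):
Code:
# show whether the drive write cache is currently enabled
hdparm -W /dev/sda /dev/sdb

# turn it off - safer after power loss, at the cost of write speed
hdparm -W0 /dev/sda /dev/sdb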
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the HW RAID controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer a HW raid, I am forced to either choose a different webhost or setup a software RAID. The problem is, I haven't done that before and am somewhat moderately...scared
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any other drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up a RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
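To make the question concrete, this is roughly what I imagine the mdadm side would look like (a sketch only; I'm assuming 8 data drives showing up as /dev/sda-/dev/sdh, a small first partition and a large second partition on each, and old-style 0.90 metadata so legacy GRUB can read the /boot members as plain filesystems):
Code:
# /boot as a small RAID-1 mirrored across all drives
mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=8 /dev/sd[a-h]1

# everything else as RAID-10 across the big second partitions
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2

# GRUB would then need to be installed to the MBR of each drive
# so the box still boots if the first disk dies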
What about swap? Should I create a 4-8GB RAID-1 swap partition mirrored across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
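And for the swap half of the question, the RAID-1 variant I have in mind would be something like this (again just a sketch, assuming a spare third partition on two of the disks):
Code:
# mirror the swap so a dying disk doesn't crash the box
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2

# /etc/fstab
/dev/md2  swap  swap  defaults  0 0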
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't actually even mention RAID-10, even though it does support it natively (without having to create RAID-0 on top of RAID-1 pairs) if the support is in the kernel, from what I know.
I run a large adult vBulletin community with 70,000 members, 1/2 million posts, 186,000 attachments (a lot video), and closing in on 100 million downloads since our start some odd years ago. I've been battling keeping the site up for quite some time, and I am starting to wonder whether we shot too low on the server setup. I figure I would ask the pros here at WHT for some advice.
This is our current setup:
Site server:
Quote:
CPU: Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz
RAM: 4 GB
Hard drive: 250 GB SATA
OS: Unix FreeBSD 6.2
Apache
MySQL server:
Quote:
CPU: Pentium III/Pentium III Xeon/Celeron (2666.62-MHz 686-class CPU), cores per package: 4
RAM: 4 GB
Hard drive: 750 GB SATA
OS: Unix FreeBSD 6.4
Apache
Do you think the site would perform better under one server and maybe a more powerful processor? What exactly should I be looking at as far as hardware goes for this type of site? I should note we push about 2.5TB of bandwidth monthly.
We are limited to a maximum of 2 drives per server, with a maximum drive size of 750GB.
We are thinking of going with 2 x 500GB hard drives. However, the question is: should we use the secondary drive in RAID 1 and let our VPS clients worry about their own backups, or should we instead just use the secondary drive as a backup drive and back up each VPS nightly?
My RAID 1 failed, and it wouldn't be such an issue except that the other drive hadn't been syncing for 2 months for some reason.
So now I have to try to recover it to get the info.
The drive itself seems to be OK, yet I am unable to boot from it.
Using Super Grub Disk I was able to boot a kernel, but when it starts loading there is a kernel panic.
The error is basically:
EXT3-fs: unable to read superblock
mount: error 22 mounting ext3
When I use the rescue option on a CentOS CD, I am unable to mount the HD.
Using Knoppix, I was able to see the HD but was unable to mount it, since it claims the 2nd partition is not clean. Since the HD was part of a RAID, I don't think that's the problem at all.
Is there anything you guys can advise? I'm somewhat new to doing this and really green on RAID for that matter.
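The plan I'm considering from the rescue CD, if it makes sense, is to assemble the degraded array and mount the md device rather than the raw partition (a rough sketch; it assumes the surviving member partition is /dev/sda2 and that this was Linux software RAID):
Code:
# see whether the partition carries an md superblock
mdadm --examine /dev/sda2

# force-assemble the degraded mirror from the one remaining member
mdadm --assemble --run /dev/md1 /dev/sda2

# fsck and mount the md device, not /dev/sda2 directly
fsck.ext3 -f /dev/md1
mkdir -p /mnt/recover
mount /dev/md1 /mnt/recover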
I just added a database server on the private network and moved the database for my vBulletin forum to this server.
But somehow, the forum is loading extremely slowly compared to before (when the database was on localhost). It is also much slower compared to another website on the server that uses a local database.
One good thing is that the load is lower.
The 2 servers are connected via a 10 Mbps private link; both servers are at SoftLayer.
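Numbers I can collect if they're useful (a sketch; the 10.x.x.x address, the eth1 interface name and the forumuser login are placeholders for my real ones):
Code:
# negotiated speed of the private interface
ethtool eth1 | grep Speed

# round-trip latency between the two boxes
ping -c 10 10.x.x.x

# time a trivial query from the web server against the remote MySQL
time mysql -h 10.x.x.x -u forumuser -p -e "SELECT 1"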
I have software RAID installed with one SATA and one ATA/IDE drive. It is a combined controller, so I had to add noprobe=/dev/hdc to the kernel boot line. Now the disks are named /dev/sda and /dev/sdb. There are four partitions: /dev/sda1 and /dev/sdb1 form the /dev/md0 root partition, and /dev/sda2 and /dev/sdb2 are the swap partitions.
At first, when I removed one drive, I just ended up at the GRUB command line.
Then I tried to do this in grub to make both drives bootable:
Code:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd1) /dev/sdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
Now everything looks normal when I get to GRUB, apart from the background of the boot screen being black instead of blue, but then the computer just restarts when it is supposed to start/boot the system.
If I use, say, two SAS 36GB 15K RPM drives with 16MB cache - identical specs but different manufacturers/models - do you guys think I would run into anything weird? I've never really tried it. Is matching absolutely required? You never know... I doubt seek times differing by a millisecond would cause issues, but I just want to check.
If there is a failed drive in a RAID-1 running on a Dell 2850 with FreeBSD 5.4, can I just take out the failed drive and replace it with a new one while the server is running? Will FreeBSD cope and rebuild the drive on the fly?
We're building a bunch of new servers (as I mentioned in another thread). What do you guys think is the best drive layout? Traditionally we just put all 4-6 drives in a RAID 5, but now I'm wondering if it makes sense to have a separate OS drive and then put the rest in a RAID 5?
On a side note, we need a giant ~2TB-3TB partition on these boxes; that's why we go with the multi-drive RAID setup. Thoughts? I know it was customary to have a separate OS drive back in the day (I remember having WD Raptors for that), but now that the WD Black edition drives (which we'll be using for the RAID setup) are as fast as the Raptors, is it even worth it?
We have a client that has a mail server with two drives. One hard disk is devoted to the OS/applications (C:) and one is devoted to mail storage only (D:).
The goal is to turn the D: drive, which is a 320GB SATA drive, into a mirror, i.e. add another drive and a RAID card and make D: a RAID mirror.
My understanding is that when a RAID is configured on a drive, the drive loses whatever data it has on it? Is there no other way to convert a single-drive setup into a RAID mirror environment (by adding a drive and a card) without losing the data on the existing drive?
I have 2x250gb drives, and this is my output of fdisk -l:
Quote:
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start      End      Blocks     Id  System
/dev/hda1   *        1       13      104391     83  Linux
/dev/hda2           14    30401   244091610     8e  Linux LVM

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start      End      Blocks     Id  System
/dev/sda1   *        1    30401   244196001     8e  Linux LVM
I see that LVM is some sort of volume manager, and it looks like the 2 drives are indeed set up with LVM, but I can't really tell what that means or how they are set up.
I'm looking to have them in a RAID-1 setup - a copy across drives so that it will continue working even if a drive fails.
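In case it's relevant, this is what I can run to see how the LVM side is currently laid out before touching anything (a sketch; it just uses the standard lvm2 tools that ship with the distro):
Code:
# summary of physical volumes, volume groups and logical volumes
pvs
vgs
lvs

# more detail, e.g. whether both drives sit in the same volume group
pvdisplay
vgdisplay
lvdisplay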
We've got our new machine up and running smoothly. It's a Core2Duo running CentOS with Plesk 8.3. We have 2 drives in a RAID 1 configuration for reliability's sake, and I'm looking for something to help me monitor the status of the disks in the array, since RAID is pretty useless unless you know when one of the disks has died.
Is there some open source utility that will monitor this and email me if something is wrong? Perhaps something with a nice web interface?
Even better... is there some addon to plesk that will help with this?
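In case the array is Linux software RAID (md) rather than a hardware controller, the kind of thing I had in mind is a sketch like this (the email address is a placeholder):
Code:
# /etc/mdadm.conf - where the monitor sends its alerts
MAILADDR admin@example.com

# run the monitor in the background; it mails on Fail/DegradedArray events
mdadm --monitor --scan --daemonise --delay=300

# quick manual check of array health
cat /proc/mdstat
I believe the stock mdmonitor service on CentOS does this automatically once MAILADDR is set, but I'd still like a web view or a Plesk addon if one exists.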
I have a question regarding hard drives, performance, etc. I only use the server for forums, and currently there is only one site (hopefully a couple more in no time).
Currently I have 2 x 36GB SAS in RAID 1, obviously containing everything including the DBs and /home, and a third 250GB drive for backups only ^^ Ronny did an excellent job setting this up.
Anyway, my problem is that I want to allow some attachments on my forums, and this would take a significant amount of space - over 1GB easily, and then keep increasing (that's going to hurt on bandwidth). I know it would fit on the SAS drives no problem; the DBs are rather small at the moment (2.5GB in total), but the logs are quite big, 5-10GB in total.
I thought it might be a good idea to purchase another drive. This 4th drive would be 750GB, backups would move there, and the 250GB drive would be used for the /home directory. This would give a lot of room for uploads and backups accordingly, and keep the fast drives for the OS and DBs.
I was told, however, and understandably, that a lot of performance would be lost by moving /home to a SATA drive. I know SATA is nowhere near as fast, but vBulletin can't upload attachment files to a folder outside its hierarchy (without complicated modifications). (Note: I didn't specify my reasons for wanting such a setup.)
So I'm in a bit of a pickle. Having the bigger drive would allow me to have the attachments, and should eventually result in more traffic to my site. /home is currently only 150MB... but then performance is also an issue. Pity I couldn't afford the bigger drives at the time [I see the point of renting over buying now].
Is there a way to have /var/log/httpd save those massive logs on another drive? It would free up 5-10GB.
In short: Is moving /home from the RAID 1 SAS to a SATA drive a bad idea? (considering space and purpose)
Could httpd logs or /var/log in general be moved to the backup/another drive?
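The sort of thing I was picturing for the logs, if it's sane, is mounting a partition from the bigger drive directly over /var/log/httpd (just a sketch; /dev/sdc1 is a placeholder for whatever partition the backup/extra drive exposes):
Code:
service httpd stop
mkdir -p /mnt/biglogs
mount /dev/sdc1 /mnt/biglogs
mv /var/log/httpd/* /mnt/biglogs/
umount /mnt/biglogs
mount /dev/sdc1 /var/log/httpd
service httpd start

# /etc/fstab entry to make it permanent
/dev/sdc1  /var/log/httpd  ext3  defaults  1 2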
I have a Red Hat 8 server with a RAID 1 setup, and now I need more HD space.
I managed to replace both 40 GB disks in the RAID array with 200 GB disks and the system stayed perfectly alive, BUT I still have only 40 GB of space available (and 160 GB of space hiding somewhere).
Is there a method on Red Hat 8 to get the rest of the disk in use?
At first I replaced one 40 GB disk with a new (empty) 200 GB disk and then, in the RAID setup, mirrored the old disk onto the new disk. After that I replaced the other old 40 GB disk with a new 200 GB disk and did the mirroring again. I got both new 200 GB disks working in the RAID 1 array, except for that little problem with the space available on Red Hat 8...
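For what it's worth, what I've gathered so far (please correct me) is that the md device and then the filesystem on top of it both have to be grown; something like the sketch below, run from a reasonably recent rescue/live CD since Red Hat 8 itself is ancient, and only if the partitions on the new 200 GB disks were actually created bigger than the old 40 GB ones (/dev/md0 and the sda/sdb names are guesses for my setup):
Code:
# check the partition sizes first
fdisk -l /dev/sda /dev/sdb

# grow the mirror to the full size of its members
mdadm --grow /dev/md0 --size=max

# then grow the ext3 filesystem on top of it (with the array unmounted)
e2fsck -f /dev/md0
resize2fs /dev/md0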
I'm about to purchase a 2nd server to be used as a database/app server alongside my current server (which will be the web server).
I wish to use 2 x 146GB 10K SCSI hard disks (in RAID 1) on the database server, but will be keeping 2 x 320GB SATAII 16M in RAID 1 on the web server. Will the SATA hard disks affect the performance / effectiveness of the SCSI disks or will I benefit from SCSI even though they're only in the database server?
Also, I'm going for 10K hard disks over 15K because they are $20 per month cheaper and it's already expensive ($150 p/m for the two 10K or $170 p/m for two 15K). Taking into account the already hefty price, is it worth the extra for 15K?