If you want a quick rundown as to WHY I want to do this, read here.
Basically, my ISP could not get my server running stably on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime... So here is my proposal:
Please note: this will have to be accomplished on a live system (full backups!) over SSH, as I don't trust my ISP to do things right, as described in my post above.
Code:
mkfs -t ext3 -m 1 /dev/vg0/lvboot
mkfs -t ext3 -m 1 /dev/vg0/lvroot
mkfs -t ext3 -m 1 /dev/vg0/lvtmp
mkfs -t ext3 -m 1 /dev/vg0/lvhome

Now, I'd like to 'init 1' at this stage, but over SSH I can't, so I won't. (Possible solutions?? Is it even possible to umount the / partition on a running system??)
Assuming I'd have to do this on a fully live system, I'd disable all the services that I can:

Code:
/etc/init.d/sendmail stop
/etc/init.d/postfix stop
/etc/init.d/saslauthd stop
/etc/init.d/httpd stop
/etc/init.d/mysql stop
/etc/init.d/courier-authlib stop
/etc/init.d/courier-imap stop
/etc/init.d/amavisd stop
/etc/init.d/clamd stop
/etc/init.d/pure-ftpd stop
/etc/init.d/fail2ban stop
/etc/init.d/syslogd stop

Then we copy all of our data from the single-disk partitions to the RAID disks:
Code:
mount /dev/vg0/lvroot /mnt/newroot
mkdir -p /mnt/newroot/boot /mnt/newroot/tmp /mnt/newroot/home
mount /dev/vg0/lvboot /mnt/newroot/boot
mount /dev/vg0/lvtmp /mnt/newroot/tmp
mount /dev/vg0/lvhome /mnt/newroot/home

(I think I covered everything. Note that lvroot gets mounted on /mnt/newroot itself, not /mnt/newroot/root, and it has to be mounted first so the other mount points sit inside it.)
Code:
umount -l /dev/sda1    # /boot
umount -l /dev/sda3    # /home
cp -dpRx /* /mnt/newroot/
mount /dev/sda1 /boot
cp -dpRx /boot/* /mnt/newroot/boot/
mount /dev/sda3 /home
cp -dpRx /home/* /mnt/newroot/home/

Once we have everything copied, update /etc/fstab and /etc/mtab to reflect the changes we made:

vi /etc/fstab
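For reference, a sketch of what the new fstab entries might look like, given the LVs above (the mount options and fsck pass numbers are guesses, not gospel). Keep in mind it's the copy at /mnt/newroot/etc/fstab that the system will actually read once it boots from the new volumes:

Code:
/dev/vg0/lvroot   /       ext3    defaults    1 1
/dev/vg0/lvboot   /boot   ext3    defaults    1 2
/dev/vg0/lvtmp    /tmp    ext3    defaults    1 2
/dev/vg0/lvhome   /home   ext3    defaults    1 2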
Next, a new entry in /boot/grub/grub.conf for booting off the new volumes:

Code:
title CentOS (2.6.18-164.el5)
        root (hd3,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/vg0/lvroot
        initrd /initrd-2.6.18-164.el5.img

Where (hd3,0) is /dev/sdc. Note that root= has to point at the new root LV; if it still said /dev/sda2 the box would just come back up on the old disk. If the system fails to boot to the RAID, it should fall back and boot from the single disk (/dev/sda), assuming the old entry is kept below this one with a 'fallback' directive.
Then update my ramdisk:

Code:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_bak
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
And now to set up grub...
Code:
grub
> root (hd0,0)
> setup (hd0)

We should see something like this:

Code:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
And the same again for the RAID disk:

Code:
> root (hd3,0)
> setup (hd3)

Again, we should see something like this (with hd3 this time):

Code:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd3)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd3) (hd3)1+15 p (hd3,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> quit

From here I think we're ready to reboot; I can't see where I missed anything. If all goes well, I should see my volume groups listed in 'df -h'.
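As a sanity check after the reboot, these standard LVM2/coreutils commands should confirm everything came up on the new volumes:

Code:
vgs && lvs           # vg0 and the lvboot/lvroot/lvtmp/lvhome volumes should be active
df -h                # mounted filesystems should now show up as /dev/mapper/vg0-* devices
cat /proc/mdstat     # if the LVs sit on md software RAID, all members should be up ([UU])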
Today we are going to take a detailed look at how well contemporary 400GB hard drives are suited to RAID use. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, so does it just keep the controller alive long enough to preserve the data held in its memory if the power goes out during a write or a rebuild?
Currently we're using HP servers with 4 hot-swap bays that hold 3.5" Seagate Cheetah 15K RPM SAS disks, which we can get in 300, 450, and 600 GB flavors.
I'm looking at the HP DL380/385 models which use 2.5" SAS disks. About the only decent 15K RPM SAS disk I've found in 2.5" form is the Seagate Savvio, but it doesn't come any larger than 146 GB.
Anyone know of another enterprise-class SAS disk that has all of the following attributes: 2.5", 15K RPM, SAS, and at least 300 GB?
(Please, no 10K RPM or SATA recommendations like the WD Velociraptor. I won't consider anything that's not 15K RPM SAS.)
Hard disk brands have all had their ups and downs over time, so almost every brand has at some point made bad drive models that failed (yes, even IBM).
I just finished reading an article saying that, for servers, Seagate currently seems to be the best.
Some say Western Digital, some say Maxtor; I've heard everything. It seems nobody agrees, or there isn't one brand that actually has the lowest failure rate.
It would be nice to hear from real experience in server scenarios (not office or desktop). The article also said Hitachi was one of the worst, and my eyes just popped out; I've found reviews from people here saying Hitachi was the best. So, to conclude, it seems everybody has their own preference.
It would be nice to hear from some datacenters or people with tons of servers. I suppose recovery centers and datacenters probably have the best stats on which disks fail the most.
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server; FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately...scared
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core 2 Duo E6600 (2.4 GHz), 2 GB RAM, 6-8x* 250 GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any other drives besides the 250 GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200 MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
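To make the question concrete, this is roughly the layout I have in mind; the device names (sda-sdi) and partition numbers are just placeholders, not what the box will actually have, so correct me if the idea itself is wrong:

Code:
# partition 1 on each data disk -> small RAID-1 holding /boot
# (GRUB can boot this because each member still looks like a plain ext3 partition
#  with the old 0.90 metadata tucked at the end)
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# partition 2 on the same disks -> RAID-10 for everything else, 9th disk as hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[a-h]2 /dev/sdi2

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1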
What about swap? Should I create a 4-8 GB RAID-1 swap partition across the disks (I plan to upgrade the server to 4 GB of RAM in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
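For the RAID-1 swap option I'm picturing something like this (again, the device names are placeholders):

Code:
# mirror a swap partition across two disks so a single dead drive can't take out swap
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2
# and in /etc/fstab:
# /dev/md2   swap   swap   defaults   0 0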
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though it does support it natively, without having to build RAID-0 on top of RAID-1 pairs, as long as the kernel has the raid10 support, from what I know.
How do you handle your mega space requirements for your high-use databases? Do any of you work with storage in the terabytes? If so, what kind of hardware and setup do you use?
Do you just have many commodity servers with maybe 100GB or so in each, or do you have some kind of shared RAID array set up? Or some kind of SAN?
Keep in mind I'm not talking about network storage (i.e. slow, personal use, file server) but rather high-speed intensive high-read/high-write database requirements.
What are the options for implementing such a solution?
What types of products fit such requirements? Could you comment on what things to look for when purchasing such a set of products?
I colo a 1U machine with two 36 GB drives. They're not in RAID, and I have it set to rsync backups to a remote machine on a regular schedule. I have another remote machine functioning as secondary DNS. Neither of these two is on a large upstream pipe. I just bought two 147 GB drives that I'd like to replace the 36s with. How does this sound as a scenario to accomplish the swap with little downtime (pre-pardon my noob'ish ways):
1. Do a complete rsync of the filesystem to my remote machine as well as sync the mysql db's (to 1 remote drive).
2. Pop that single rsync'd drive into an external enclosure.
3. Travel to datacenter, once there, plug external drive into laptop and start up a VM that boots off of that drive.
4. Sync again so external drive has the most up-to-date data.
5. Change over IP's from colo to VM on laptop.
6. Shutdown and swap out drives in colo'd box with the new ones.
7. Setup new drives as RAID 1, install OS, then rsync filesystem over from laptop to new drives in colo'd box.
8. Change back IP's.
What am I missing, or is there an easier way without a 2nd colo/dedicated server? Currently, the colo'd machine is using about 1.3Mbit/sec outbound and it's running a low load.
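For step 1, I'm assuming something along these lines would do; the hostname and paths here are made up for the example:

Code:
# full filesystem sync to the remote box, skipping pseudo-filesystems
rsync -aHx --numeric-ids --delete \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    / backup@remote.example.com:/backups/colo-root/

# dump the databases separately so the copy is consistent
mysqldump --all-databases | gzip > /root/all-dbs.sql.gz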
Anyone know of a managed dedicated server provider we can mail our hard disks to? We have a small pipe to the Internet, and this is the only avenue we've thought of.
Or perhaps another solution that we haven't thought of?
Up to now we've been using CentOS with SCSI/SATA disks which weren't hot swap, and now we're upgrading to a Dell PowerEdge 1950 revision III with SAS hot-swap disks on a PERC 6/i (Dell's new model of RAID controller).
OF COURSE, Dell ONLY supports Windows (and Red Hat at the very most in the Linux world), so we were told by a Microsoft tech that to be able to extract a disk and replace it with another, it has to be done via software. (The software powers the disk down and then you replace it.)
Does anyone use CentOS with hot swap SAS disks? Do you use any special software to monitor the disks and/or replace them?
We're about to buy a Dell Poweredge 1950 with hot swap disks in a raid 1 configuration (might even think about other raid combinations).
We will be installing CentOS 5 (never tried it - normally use CentOS 4) + control panel
The question is: what happens when a disk fails? How do we find out (apart from looking at the server)? Does any software notice?
Once it's noticed, what is the standard procedure to replace the disk? (Remember they are hot swap.) Do you just pull one out and replace it? Surely you have to rebuild the array...
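From what I've gathered so far, people seem to monitor PERC-based boxes with Dell's OpenManage tools or with smartmontools talking through the controller; is something like the following what you'd use (assuming OMSA and smartmontools are installed, and that the PERC shows up as a MegaRAID device)?

Code:
# Dell OpenManage (OMSA)
omreport storage pdisk controller=0    # per-disk status: Online / Failed / Rebuilding
omreport storage vdisk controller=0    # state of the virtual disks / arrays

# or smartmontools through the controller
smartctl -a -d megaraid,0 /dev/sda     # SMART data for physical disk 0 behind the PERC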
I have a server with 2 hard drives, say drive A and drive B. Right now all my files, database and data is on drive A, and drive B is empty. Since I have another drive available, I want to split the load between the two drives. I'm ok with having the web pages and the database on one drive. I mostly want to just have the data (I have about 500GB of data) split between the two drives. Note that I want to avoid duplicating the data. I want to have each file on either drive A XOR drive B.
Should I map a separate subdomain to drive B and then use that subdomain to serve the half of the data that's there? Is there something I can do with hard/soft links on the server so that even though the data is on 2 drives, users still use the same URL to access data on either drive? Any other options?
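To make the soft-link idea concrete, this is the sort of thing I'm wondering about; the paths are just examples, not my real layout:

Code:
# mount the second drive somewhere permanent (add it to /etc/fstab as well)
mkdir /data2
mount /dev/sdb1 /data2

# move half the data onto it, then point a symlink back at the old location
mv /var/www/html/files/set2 /data2/set2
ln -s /data2/set2 /var/www/html/files/set2
# the same URLs keep working as long as Apache allows FollowSymLinks for that directory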
Since purchasing a 16-disk array NAS server 4-5 months ago, 5 disks have crashed so far. They are all WD4000YS drives, all "RAID Edition", which are supposed to last longer than typical drives. It had been puzzling me until now.
It turns out that the "Data Lifeguard" feature was confusing the RAID controller into believing that the disk was dead, hence the "failed" disks. AFAIK, Western Digital released a firmware update on 01/09/07 that's supposed to fix this.
So, if you have WDxxxxYS drives in your pre-production server, pull them out for a firmware update first!
For me, I can only swap the hot spare out for a firmware update. For the other disks, I'll just have to wait for them to "drop" out of the array first. I cannot take this server offline at all. Any suggestions?
How often do RAID arrays break? Is it worth having RAID for when a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted to my system and, in the event of a system failure, pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am weighing RAID 5 (with 1 hot spare) against RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, even if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
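From what I've read, the usual trick is to build the array degraded on the empty disk first and fold the original disk in afterwards; roughly like this, assuming sda is the live OS disk and sdb is the blank one (untested on my part, so correct me if I'm off):

Code:
# copy the partition table to the empty disk, then mark its partitions
# as type fd (Linux raid autodetect)
sfdisk -d /dev/sda | sfdisk /dev/sdb

# create degraded RAID-1 arrays with only the sdb halves present
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2   # /

# mkfs the md devices, copy the data over, fix fstab/grub/initrd,
# reboot onto the arrays, then pull the original disk into them:
mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2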
Due to data center limitations, I am restricted to 100GB on my primary disk but can have up to 2TB on a second disk. Is it possible to have the backup node use the second disk instead of the primary disk? Also, is it possible to have multiple backup nodes?
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump the RAM up to 2 GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas on the server in general?