Currently we're using HP servers with 4 hot-swap bays that hold 3.5" Seagate Cheetah 15K RPM SAS disks, which we can get in 300, 450, and 600 GB flavors.
I'm looking at the HP DL380/385 models which use 2.5" SAS disks. About the only decent 15K RPM SAS disk I've found in 2.5" form is the Seagate Savvio, but it doesn't come any larger than 146 GB.
Anyone know of another enterprise-class SAS disk that has all of the following attributes: 2.5", 15K RPM, SAS, and at least 300 GB?
(Please, no 10K RPM or SATA recommendations like the WD Velociraptor. I won't consider anything that's not 15K RPM SAS.)
Hard disk brands have all had their ups and downs over time, so almost every brand has at some point made bad drive models that failed (yes, even IBM).
I just finished reading an article claiming that, for servers, Seagate currently seems to be the best.
Some say Western Digital, some say Maxtor; I've heard everything. It seems nobody agrees, or there simply isn't one brand that actually has the lowest failure rate.
It would be nice to hear from real experience in server scenarios (not office or desktop). The article also said Hitachi was one of the worst, and my eyes just popped out: I've found reviews from people here saying Hitachi was the best. So, to conclude, it seems everybody has their own preference.
It would be nice to hear from some datacenters or people with tons of servers. I suppose recovery centers and datacenters probably have the best stats on which disks fail the most.
How do you handle very large storage requirements for your high-use databases? Do any of you work with storage in the terabytes? If so, what kind of hardware and setup do you use?
Do you just have many commodity servers with maybe 100GB or so in each, or do you have some kind of shared RAID array set up? Or some kind of SAN?
Keep in mind I'm not talking about network file storage (i.e. slow, personal-use file servers) but rather high-speed, I/O-intensive, high-read/high-write database requirements.
What are the options for implementing such a solution?
What types of products fit such requirements? Could you comment on what things to look for when purchasing such a set of products?
I colo a 1U machine with two 36 GB drives. They're not in RAID, and I have it set to rsync backups to a remote machine on a regular schedule. I have another remote machine functioning as a secondary DNS. Neither of these two is on a large upstream pipe. I just bought two 147 GB drives that I'd like to replace the 36 GB drives with. How does this sound as a scenario to accomplish the swap with little downtime (pre-pardon my noob'ish ways):
1. Do a complete rsync of the filesystem to my remote machine as well as sync the mysql db's (to 1 remote drive).
2. Pop that single rsync'd drive into an external enclosure.
3. Travel to datacenter, once there, plug external drive into laptop and start up a VM that boots off of that drive.
4. Sync again so external drive has the most up-to-date data.
5. Change over IP's from colo to VM on laptop.
6. Shutdown and swap out drives in colo'd box with the new ones.
7. Set up the new drives as RAID 1, install the OS, then rsync the filesystem over from the laptop to the new drives in the colo'd box.
8. Change the IPs back.
What am I missing, or is there an easier way without a 2nd colo/dedicated server? Currently, the colo'd machine is using about 1.3Mbit/sec outbound and it's running a low load.
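For what it's worth, my plan for step 1 is roughly this (hostnames and paths are placeholders, and --single-transaction assumes the databases are InnoDB):
Code:
# Dump the MySQL databases first so the copy is consistent
mysqldump --all-databases --single-transaction -u root -p > /root/all-dbs.sql

# Mirror the filesystem to the remote machine, skipping pseudo-filesystems
rsync -aHx --delete --exclude=/proc --exclude=/sys --exclude=/dev \
  / user@remote.example.com:/backup/colo-root/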
Does anyone know of a managed dedicated server provider we can mail our hard disks to? We have a small pipe to the Internet, and this is the only avenue we've thought of.
Or perhaps another solution that we haven't thought of?
Up to now we've been using CentOS with SCSI/SATA disks which weren't hot swap, and now we're upgrading to a Dell PowerEdge 1950 revision III with hot-swap SAS disks on a PERC 6/i (a newer model of RAID controller from Dell).
OF COURSE, Dell ONLY supports Windows (and, at the very most, Red Hat in the Linux world), so we were told by a Microsoft tech that to pull a disk and replace it with another, it has to be done via software. (The software powers the disk down, and then you replace it.)
Does anyone use CentOS with hot swap SAS disks? Do you use any special software to monitor the disks and/or replace them?
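In case it matters, what I had in mind for monitoring (assuming the PERC 6/i behaves like other LSI MegaRAID controllers and LSI's MegaCli utility runs on CentOS) was something along these lines:
Code:
# List physical disks and their state ("Online", "Failed", "Rebuild", ...)
MegaCli -PDList -aALL | grep -E 'Slot Number|Firmware state'

# Check whether the logical drives are Optimal or Degraded
MegaCli -LDInfo -Lall -aALL | grep -E 'Virtual Drive|State'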
We're about to buy a Dell PowerEdge 1950 with hot-swap disks in a RAID 1 configuration (we might even think about other RAID combinations).
We will be installing CentOS 5 (never tried it; we normally use CentOS 4) plus a control panel.
The question is: what happens when a disk fails? How do we find out (apart from physically looking at the server)? Does any software notify us?
Once noticed, what is the standard procedure to replace the disk? (Remember they are "hot swap") Do you just pull one out and replace it? Surely you have to rebuild the array...
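One idea I've been toying with (just a sketch; it assumes Dell's OpenManage Server Administrator is installed so omreport is available, and that the PERC shows up as controller 0) is a cron job like this:
Code:
#!/bin/bash
# Mail root if any physical disk on controller 0 reports a status other than "Ok"
BAD=$(omreport storage pdisk controller=0 | grep '^Status' | grep -v ': Ok')
if [ -n "$BAD" ]; then
    echo "$BAD" | mail -s "Disk problem on $(hostname)" root
fi
From what I've read, once you insert a replacement disk the controller should rebuild the mirror on its own, but I'd love confirmation from someone who has actually done it.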
I have a server with 2 hard drives, say drive A and drive B. Right now all my files, database, and data are on drive A, and drive B is empty. Since I have another drive available, I want to split the load between the two drives. I'm OK with having the web pages and the database on one drive. I mostly want to have the data (about 500 GB of it) split between the two drives. Note that I want to avoid duplicating the data: each file should live on either drive A or drive B, but not both.
Should I map a separate subdomain to drive B and then use that subdomain to serve the half of the data that's there? Is there something I can do with hard/soft links on the server so that even though the data is on two drives, users still use the same URL to access data on either drive? Any other options?
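For example, the symlink idea I'm picturing would be something like this (device name and paths are made up):
Code:
# Mount drive B somewhere out of the way
mkdir /data2
mount /dev/sdb1 /data2

# Move roughly half of the data directories onto drive B,
# leaving symlinks behind so the URLs stay the same
mv /var/www/html/files/archive /data2/archive
ln -s /data2/archive /var/www/html/files/archive
As far as I know, the web server will happily follow the symlink (with Apache that means Options FollowSymLinks on that directory), so users never see the split.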
Since purchasing a 16-disk array NAS server 4-5 months ago, 5 disks have crashed. They are all WD4000YS drives, all "RAID Edition," which are supposed to last longer than typical drives. It had been puzzling me until now.
It turns out that the "Data Lifeguard" feature was confusing the RAID controller into believing the disk was dead, hence the "failed" disks. AFAIK, Western Digital released a firmware update on 01/09/07 that's supposed to fix this.
So, if you have WDxxxxYS drives in your pre-production server, pull them out for a firmware update first!
In my case, I can only swap the hot spare out for a firmware update. For the other disks, I'll just have to wait for them to "drop" out of the array first. I cannot take this server offline at all. Any suggestions?
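In the meantime, the only check I can do in place is reading the firmware revision off each drive (assuming smartmontools is installed and the RAID controller passes SMART through; behind some controllers smartctl needs an extra -d option):
Code:
# Show the model and firmware version of every drive
for d in /dev/sd?; do
    echo "== $d =="
    smartctl -i "$d" | grep -E 'Device Model|Firmware Version'
done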
If you want a quick run down as to WHY I want to do this, read here
Basically, my ISP could not get my server running stably on a simple RAID 1 (or RAID 5), so what it came down to was having them install my system on a single disk. I don't exactly like this, the main reason being that if the system (or HDD) crashes, I'll end up with another several hours of downtime... So here is my proposal:
Please note: this will have to be accomplished on a live system (full backups!) over SSH, as I don't trust my ISP to do things right, as described in my post above.
Code:
mkfs -t ext3 -m 1 /dev/vg0/lvboot
mkfs -t ext3 -m 1 /dev/vg0/lvroot
mkfs -t ext3 -m 1 /dev/vg0/lvtmp
mkfs -t ext3 -m 1 /dev/vg0/lvhome
Now, I'd like to 'init 1' at this stage but I can't, so I won't (possible solutions? Is it possible to umount the / partition?)
Assuming I'd have to do this on a fully live system, I'd disable all services that I can
Code:
/etc/init.d/sendmail stop
/etc/init.d/postfix stop
/etc/init.d/saslauthd stop
/etc/init.d/httpd stop
/etc/init.d/mysql stop
/etc/init.d/courier-authlib stop
/etc/init.d/courier-imap stop
/etc/init.d/amavisd stop
/etc/init.d/clamd stop
/etc/init.d/pure-ftpd stop
/etc/init.d/fail2ban stop
/etc/init.d/syslogd stop
Then we copy all of our data from the single partitions to the RAID disks:
Code:
mkdir -p /mnt/newroot
mount /dev/vg0/lvroot /mnt/newroot
mkdir /mnt/newroot/boot /mnt/newroot/tmp /mnt/newroot/home
mount /dev/vg0/lvboot /mnt/newroot/boot
mount /dev/vg0/lvtmp /mnt/newroot/tmp
mount /dev/vg0/lvhome /mnt/newroot/home
(I think I covered everything; lvroot has to be mounted at /mnt/newroot itself, with the other volumes mounted beneath it, otherwise the copy of / below would land in the wrong place.)
Code:
umount -l /dev/sda1   # /boot
umount -l /dev/sda3   # /home
cp -dpRx /* /mnt/newroot/
mount /dev/sda1 /boot
cp -dpRx /boot/* /mnt/newroot/boot/
mount /dev/sda3 /home
cp -dpRx /home/* /mnt/newroot/home/
Once we have everything copied, update /etc/fstab and /etc/mtab to reflect the changes we made:
vi /etc/fstab
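The entries for the new volumes would presumably look something like this (the mount options are just my guess, and I assume the copy at /mnt/newroot/etc/fstab is the one the new system will actually read, so it needs the same edit):
Code:
/dev/vg0/lvroot   /       ext3    defaults        1 1
/dev/vg0/lvboot   /boot   ext3    defaults        1 2
/dev/vg0/lvtmp    /tmp    ext3    defaults        1 2
/dev/vg0/lvhome   /home   ext3    defaults        1 2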
Code:
title CentOS (2.6.18-164.el5)
    root (hd3,0)
    kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/sda2
    initrd /initrd-2.6.18-164.el5.img
Where (hd3,0) is /dev/sdc. If the system fails to boot to the RAID, then it'll auto-boot to the single disk (/dev/sda).
Then update my ramdisk:
Code:
mv /boot/initrd-`uname -r`.img /boot/initrd-`uname -r`.img_bak
mkinitrd /boot/initrd-`uname -r`.img `uname -r`
And now to set up grub...
Code:
grub
> root (hd0,0)
> setup (hd0)
We should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> root (hd3,0)
> setup (hd3)
Again, we should see something like this:
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd3)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd3) (hd3)1+15 p (hd3,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
Code:
> quit
From here I think we're ready to reboot; I can't see where I missed anything. If all goes well, I should see my volume groups listed in 'df -h'.
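After the reboot, my sanity check would be something like this (assuming vg0 sits on top of Linux software RAID):
Code:
df -h              # the vg0 volumes should now be mounted as /, /boot, /tmp and /home
cat /proc/mdstat   # the underlying md arrays should be active / resyncing
lvs                # all four logical volumes should be listed under vg0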
Due to data center limitations, I am restricted to 100 GB on my primary disk but can have up to 2 TB on a second disk. Is it possible to have the backup node use the second disk instead of the primary disk? Also, is it possible to have multiple backup nodes?
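If it helps, what I had in mind is simply mounting the second disk at whatever directory the backup node writes to, roughly like this (the device name and mount point are guesses; check with fdisk -l first):
Code:
mkfs -t ext3 /dev/sdb1        # format the 2 TB disk
mkdir -p /backup
mount /dev/sdb1 /backup       # mount it where the backups are written
echo "/dev/sdb1  /backup  ext3  defaults  0 2" >> /etc/fstab   # make it persistent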