What Kind Of RAID Array Can I Use While It's Being Rebuilt
Mar 24, 2007
What kind of raid array can I use while it's being rebuilt?
For instance, if a HDD fails on one of my servers, I need to take the server down in order to rebuild the array. I'd like to be able to rebuild the array without the users noticing it.
I have a server with a Dell Perc4e Hardware RAID Controller and 2 Maxtor Atlas 15K2 U320 SCSI drives. I'm currently having a problem with one of the HDs. For the second time in about 2-3 months, it failed. I reconstruct the array and everything goes online and healthy. Does this mean this specific HD may not be in the best condition? Shouldn't it fail and never go back online again, in this case? Is there any chance that the problem is the SCSI cable (I'm not using a hot swap cradle, but cable + adapter)?
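For illustration, one way to check whether the drive itself (rather than the cable) is failing is to read its SMART data. A hedged sketch; the device paths are illustrative, and passthrough to drives behind a PERC varies by controller and driver:

Code:
# Dump health status, defect lists, and error counters (SCSI drives
# report grown defects rather than ATA-style attributes):
smartctl -a /dev/sg0

# On MegaRAID-based controllers such as the PERC, smartctl may need
# explicit passthrough addressing (N = physical drive number):
smartctl -a -d megaraid,0 /dev/sda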
This is probably the simplest thing I'm going to have to do soon, but I have never done it, so I'm looking for tips on how to do it because I don't want to screw it up.
I want to copy a CentOS install from one RAID array to another. I have a drive image of the old 27GB RAID array, but the new RAID array will be larger.
How can I copy the old CentOS install to the new array?
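For illustration, a rough sketch of one approach, assuming the image is a raw dd image (centos-old.img here is a placeholder) and the new array shows up as /dev/sdb; device names are illustrative:

Code:
# 1. Write the old image onto the new, larger array:
dd if=centos-old.img of=/dev/sdb bs=1M

# 2. Enlarge the last partition to span the extra space, e.g. by
#    deleting and recreating it at the same start sector with fdisk.

# 3. Grow the ext3 filesystem into the enlarged partition:
e2fsck -f /dev/sdb1
resize2fs /dev/sdb1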
I have a couple of Dell 1950s. In one of them I have 2x Seagate 15K.5s that I purchased through Dell, and I also have a spare sitting in my rack in case one goes bad, also from Dell.
I was going to repurpose one of my other 1950s and get two more 15K.5s for it, but I wasn't planning on getting them through Dell (rip off?). This way, I could still keep the same spare drive around in case a drive went bad in that system as well.
When I was purchasing another system recently, my Dell rep's hardware tech said you can't use non-Dell drives alongside Dell drives in the same RAID array because of the different firmware between them.
Does anyone know if this is true? Does anyone have experience using drives from Dell in conjunction with the same model drives from a third-party retailer?
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Yesterday I went to run an upgrade on MySQL from 3.23 to 4.1.
The upgrade went fine, but after it completed, I could no longer access Plesk. After a lot of hassle I finally decided to reinstall Plesk, and somewhere along the way everything crumbled horribly.
I can no longer access any of my sites on port 80; httpd fails to start. When I type

/etc/init.d/httpd start

I get

Code:
[rtusrgd@nuinspiration rtusrgd]$ sudo su
[root@nuinspiration rtusrgd]# /etc/init.d/httpd start
Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
no listening sockets available, shutting down
Unable to open logs
[FAILED]
[root@nuinspiration rtusrgd]#

And when I try to run

/etc/init.d/psa start

I get

Code:
[root@nuinspiration /]# /etc/init.d/psa restart
PSA is down, performing full restart.
Starting psa-spamassassin service: [ OK ]
websrvmng: Service /etc/init.d/httpd failed to start
websrvmng: Service /etc/init.d/httpd failed to start
/usr/local/psa/admin/bin/httpsdctl start: httpd started
Starting Plesk: [ OK ]
Starting up drwebd: Dr.Web (R) daemon for Linux/Plesk Edition v4.33 (4.33.0.09211)
Copyright (c) Igor Daniloff, 1992-2005
Doctor Web, Ltd., Moscow, Russia
Support service: http://support.drweb.com
To purchase: http://buy.drweb.com
I can still access my sites via ftp, just not via web browsing.
GoDaddy is my server host, and after speaking to them, the only advice they could give me was to format the drive and start fresh. The issue is I have about 93 sites on the server and not all of them have hard backups.
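For context, "(98)Address already in use" usually means another process is already bound to port 80, so a quick check along these lines is worth trying before any reformat (commands assume a stock RHEL-era toolset):

Code:
# See what process is already listening on port 80:
netstat -lnp | grep ':80'

# If a stray httpd (or Plesk's own web server) is holding the port,
# stop it, then try starting Apache again:
killall httpd
/etc/init.d/httpd start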
Can anyone recommend a good case/enclosure for a SATA RAID array? I would like to build an array using 500 GB SATA hard drives. Will the server need much processing power and RAM if I am going to use a decent hardware RAID card? What card would you recommend? Are there any premade SATA arrays that let you just pop in your own hard drives and don't cost thousands of dollars?
Also, can anyone recommend an enclosure for a server that has built-in RAID with 8 SATA ports but only two hard drive bays, where I want to use all 8 ports?
I am trying to figure out what file system to use for my server. It has 24 hard drives: 2 run the OS in RAID 1, and the other 22 are in RAID 10. When I was installing the OS (Ubuntu 8), I kept getting problems when I tried to partition and format the second volume (the one with the 22 disks in RAID 10); it kept failing on me. I then changed the file system type from ext3 to XFS and it worked fine. I also tried skipping the partition/format of the second volume and doing it manually once the OS was installed. When I did, it told me that the file system was too large for ext3. So my guess is that ext3 has a limit on filesystem size (it tops out around 8-16 TB depending on block size and tool versions, which a 22-disk RAID 10 volume can easily exceed).
So I am wondering: is there another file system that will give me the best performance, mainly I/O performance? I would like to stick with Ubuntu. This server will mainly serve large files for download over HTTP.
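For illustration, a hedged sketch of formatting such an array as XFS; the device name and stripe values are assumptions (22 disks in RAID 10 give 11 data-bearing stripe members, and su should match the controller's chunk size):

Code:
# Create an XFS filesystem aligned to the RAID geometry:
mkfs.xfs -d su=256k,sw=11 /dev/sdb1

# Mount with noatime to cut metadata writes on a read-heavy,
# large-file HTTP workload:
mount -o noatime /dev/sdb1 /var/www/files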
I've almost got my server ready to get shipped out to colo.
Its a HP Proliant ML570 G2, 4x2.0GHz Xeon, 4GB PC1600 DDR ECC RAM, and a Smart Array 6402 Ultra320 SCSI RAID controller card. The way I currently have the HDs configured is as follows:
I will add one or two more 73.5GB 10k Ultra320 drives to run in RAID 5 or 6.
The two 36.4GB 15k Ultra320 drives are running in RAID 1. This is the array I am performing these tests on, -not- the 73.5GB drives.
Anyway, I was a little curious about my performance, so I ran the following tests.
I know hdparm is not really meant for testing SCSI disks, but here is the output:
Code:
# hdparm -tT /dev/cciss/c0d1

/dev/cciss/c0d1:
 Timing cached reads:   1732 MB in  2.00 seconds = 865.91 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  320 MB in  3.01 seconds = 106.45 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

Here is the IOZone output:
Code:
File size set to 4096 KB
Command line used: /opt/iozone/bin/iozone -s 4096
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                               random  random    bkwd  record  stride
    KB  reclen   write rewrite    read  reread    read   write    read rewrite    read  fwrite frewrite   fread freread
  4096       4  188852  523178  658943  688853  610887  484275  572466  539010  580498  182914  471075  644937  671916

Then, just for the heck of it, I ran the tests on my little home Dell server with a ~80GB 7.2k RPM 1.5Gb/s SATA drive:
Code:
/dev/sda:
 Timing cached reads:   1680 MB in  2.00 seconds = 839.29 MB/sec
 Timing buffered disk reads:  136 MB in  3.02 seconds = 45.08 MB/sec
Code:
File size set to 4096 KB
Command line used: ./iozone -s 4096
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
                                               random  random    bkwd  record  stride
    KB  reclen   write rewrite    read  reread    read   write    read rewrite    read  fwrite frewrite   fread freread
  4096       4  152972 1135245 1358510 2132128 1163636 1103448 1184499 1829350 1588205  490774 1006882 1378196 2104830

Can you give me a little insight on my results?
NOTE: The batteries on my Smart Array 6402 controller are dead and will not charge, so they need to be replaced. Since they are dead, the 128MB cache will not enable itself. If this cache were enabled, would I see much different results from the tests?
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast and reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, or does it just keep the controller alive long enough to preserve the drive information in its memory if the power goes out during a rebuild?
We damaged a file on a Windows Server 2003 system, which caused us to get a grey screen where the login window should be (it was set up as a domain controller).
We tried booting it in safe mode and all the other modes, but to no avail; we couldn't get it to work.
So we wanted to replace the file manually using another PC (by inserting one of the hard drives into the PC and copying the file to it).
We used a Windows XP machine, imported the hard drive using Disk Management, and had to un-mirror the drive to be able to access it.
Once the mirror was broken I gave it a drive letter, copied the new file to it, removed the drive letter I had assigned, and tried booting with *just* that drive in the Windows 2003 server.
Now it won't boot; it just reboots every time it tries to start up, probably because we broke the mirror on a different machine.
It still boots from the other, untouched drive that was in the mirror, but we have no way to edit the files on there.
So is there any way to get this once-mirrored drive to boot now that it's a single disk?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as any other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up a software RAID. The problem is, I haven't done that before and am somewhat moderately...scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up a RAID-5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
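For illustration, a minimal sketch of creating that layout with mdadm, assuming nine disks sda-sdi, each with a small first partition for /boot and the rest for the main array; device names and counts are illustrative, and GRUB would then be installed to each member's MBR so any disk can boot:

Code:
# /boot as RAID-1 across all active members (the bootloader can read
# it like a plain partition, since every mirror is identical):
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# Everything else as RAID-10, with the ninth drive as hot spare:
mdadm --create /dev/md1 --level=10 --raid-devices=8 \
      --spare-devices=1 /dev/sd[a-h]2 /dev/sdi2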
What about swap? Should I create a 4-8GB RAID-1 swap partition on each of the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively (without having to create RAID-0 on top of RAID-1 pairs) if the support is in the kernel.
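For what it's worth, mdadm could not reshape a native raid10 for a long time (support only arrived in much later kernel/mdadm versions), so this depends heavily on versions. Where it is supported, a grow would look roughly like this; device names are illustrative:

Code:
# Add two new drives, then reshape from 8 to 10 active devices:
mdadm --add /dev/md1 /dev/sdj2 /dev/sdk2
mdadm --grow /dev/md1 --raid-devices=10

# The filesystem still has to be expanded separately afterwards,
# e.g. resize2fs for ext3 or xfs_growfs for XFS.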
How often do RAID arrays break? Is RAID worth having for when a server's hard drive goes down? I was thinking it may be a better option to just keep a backup drive mounted to my system and, in the event of a failure, pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am deciding between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
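For illustration, the commonly described approach is to build a degraded mirror on a second disk, move the system onto it, then absorb the original disk. A rough sketch, assuming the OS lives on /dev/sda and a blank /dev/sdb is available; names are illustrative, and a KVM/console fallback is strongly advised:

Code:
# 1. Clone sda's partition table onto sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb

# 2. Create a RAID-1 with one member deliberately missing:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 3. Copy the live root filesystem onto the degraded array, then
#    point fstab and the GRUB/initrd config at /dev/md0 and reboot.

# 4. Once booted from md0, add the original disk; the kernel
#    rebuilds the mirror while the system stays online:
mdadm --add /dev/md0 /dev/sda1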
I'm not quite sure how much RAM I need for my VPS, but I'm going to get a 1GB VPS from wiredtree.com.
Can anyone tell me what kind of website I would be able to run on such a VPS? It's just a WordPress-driven website...
Maybe someone can share how much traffic your site gets and how much RAM it uses?
At the moment I have a website with about 40k uniques/day and ~100k pageloads/day hosted on shared hosting, but they have given me 3 days to find another host because they say I use too much of their traffic...
I'm just posting a little topic here to see what kind of demand numbers and opinions I might get for the hybrid VPS industry. Hybrid as in semi-dedicated: packages like 2GB RAM, 25% guaranteed processor, etc. "Semi-dedicated", if you will. I've been considering opening a sub-company of GeekLayer for a few weeks now that would sell fully managed Xen virtual private server "hybrids", albeit with a slight twist.
I want to launch a site for MP3s and pictures. Which kind of hosting is better for this kind of site? I am expecting most traffic from India, so hosting from which country would be better for me? And one more thing: are there any restrictions on serving MP3 files on a site?
I don't know what kind of attack this is. DDoS or SYN attack? Every time it happens, my Apache server goes down and CPU load gets very high because of it. Please help me figure out how to defend against an attacker like this, because when I block the IP in iptables, after 3-4 hours it comes back with another IP. What should I do?
There are many connections like the example below from IP 60.50.80.51. It's just one IP per attack, but you can see many connections from that same IP 60.50.80.51 when you type netstat -anp... I'm just pasting a few here
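For context, one source IP opening piles of connections looks more like an HTTP/connection flood than a pure SYN attack, and per-source rate limiting scales better than blocking each IP by hand. A hedged sketch; the thresholds are purely illustrative and need tuning:

Code:
# Enable SYN cookies in case it is a SYN flood:
sysctl -w net.ipv4.tcp_syncookies=1

# Cap simultaneous connections per source IP on port 80:
iptables -A INPUT -p tcp --dport 80 -m connlimit \
         --connlimit-above 30 -j REJECT

# Drop sources opening more than 20 new connections per minute:
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent \
         --update --seconds 60 --hitcount 20 -j DROP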
I am planning to set up a VPN server for a group of about 4 people who will use it as an "internet proxy" for *all* their internet connectivity. (They live in a country where internet activity is severely restricted (and monitored), and I think a VPN would be a good solution for them.)
However, I have no idea what kind of processing power or system resources are needed for this kind of situation. Is a simple VPS ok? How much memory would be good? How about server speed?
The server will be linux (probably CentOS) and will use OpenVPN.
I'm thinking that there probably doesn't need to be a whole lot of raw CPU power or RAM, but maybe I'm wrong?
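For reference, OpenVPN's load is mostly crypto throughput rather than RAM, so a small VPS generally suffices for four users. A minimal server.conf sketch that pushes all client traffic through the tunnel; the paths, addresses, and DNS choice are placeholders:

Code:
port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
server 10.8.0.0 255.255.255.0          # VPN subnet handed to clients
push "redirect-gateway def1"           # route *all* client traffic via the VPN
push "dhcp-option DNS 208.67.222.222"  # a resolver the clients can reach
keepalive 10 120
persist-key
persist-tun

The server also needs IP forwarding enabled and a NAT rule (e.g. iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE) so client traffic can leave via its public interface.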
I need to set up a private proxy which I want to share with 2 of my friends. What kind of hosting do I need? Also, does anybody have experience with setting up private proxies? I only know how to set up a web-based proxy.
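For illustration, an HTTP proxy like Squid restricted to a few source IPs is a common setup for this. A minimal squid.conf sketch; the port and friend IPs are placeholders:

Code:
http_port 3128
# Only these source addresses may use the proxy:
acl friends src 203.0.113.10 203.0.113.11 203.0.113.12
http_access allow friends
http_access deny all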
vpsland.com: what kind of passive support do they have? 48 hours without any progress... I bought a VPS three months back and have been avoiding disturbing their sleeping support, but this is the first issue I've had with them: my VPS went offline without any known reason, no billing issues, no abuse emails, etc...