I have a server with a Dell PERC 4e hardware RAID controller and 2 Maxtor Atlas 15K2 U320 SCSI drives. I'm currently having a problem with one of the drives: for the second time in about 2-3 months, it has failed. I rebuilt the array and everything came back online and healthy. Does this mean this specific drive may not be in the best condition? Shouldn't it fail and never come back online, in that case? Is there any chance the problem is the SCSI cable (I'm not using a hot-swap cradle, but cable + adapter)?
What kind of RAID array can stay online while it's being rebuilt?
For instance, if a HDD fails on my current servers, I need to take the server down in order to rebuild the array. I'd like to be able to rebuild the array without the users noticing it.
I have probably what is the most simple thing I'm going to have to do soon, but I have never done it, so I'm looking for tips on how to do it because I don't want to screw it up.
I want to copy a CentOS install from one RAID array to another. I have a drive image (27GB) of the old RAID array, but the new RAID array will be larger.
How can I copy the old CentOS install to the new array?
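The rough plan I have in mind is to restore the image onto the new array and then grow the last partition and filesystem into the extra space. This is untested, and the device name /dev/sdb, the partition number and the ext3 filesystem below are just placeholders for my setup:

Code:
# write the 27GB image onto the new, larger array (/dev/sdb is a placeholder)
dd if=/path/to/old-array.img of=/dev/sdb bs=1M
# grow the last partition to fill the disk, e.g. delete and recreate it with
# fdisk/parted at the same starting sector, then grow the filesystem (ext3 shown)
e2fsck -f /dev/sdb2
resize2fs /dev/sdb2

Alternatively, I could do a minimal CentOS install on the new array, rsync the old system over it, and reinstall the bootloader.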
We use 3Ware's 8006-2LP SATA RAID controller in each of our servers for RAID 1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the 2 RAIDed drives sit in them).
One of the drives appears to be starting to fail: smartd is reporting issues, including multi-zone errors (although tw_cli c0 shows the RAID to be OK).
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (a 74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier, and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive, I know, but I've never had to do it before so I never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3DM) to first 'remove' the failing drive from the RAID, then add the new drive, and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert new drive in carrier and re-insert into server
4) using tw_cli tell the controller to add the new drive to the array and to rebuild the array
Am I anywhere close to being correct? I'm sure there are some of you out there who've done this countless times before with 3ware controllers and hot-swap drive carriers.
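From the generic 3ware CLI docs, I gather the sequence looks roughly like the sketch below. This is untested, c0/u0/p0 are my guesses for controller 0, unit 0 and the failing drive's port 0, and the exact syntax differs between tw_cli versions, so it needs checking against tw_cli help before running anything:

Code:
tw_cli info c0                 # confirm which unit/port the suspect drive is on
tw_cli maint remove c0 p0      # tell the controller to drop the drive on port 0
# ...physically swap the drive in its hot-swap carrier...
tw_cli maint rescan c0         # have the controller pick up the new drive
tw_cli maint rebuild c0 u0 p0  # rebuild unit 0 onto the new drive on port 0
tw_cli info c0                 # check rebuild progress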
I have a couple of Dell 1950s. In one of them I have 2x Seagate 15K.5s that I purchased through Dell, and I also have a spare (also from Dell) sitting in my rack in case one goes bad.
I was going to repurpose one of my other 1950s and get two more 15K.5s for it, but I wasn't planning on getting them through Dell (rip off?). This way, I could still keep the same spare drive around in case a drive went bad in that system as well.
When I was talking to my Dell rep recently while purchasing another system, their hardware tech said you can't use non-Dell drives with Dell drives in the same RAID array because of the different firmware between them.
Does anyone know if that is true? Does anyone have experience using drives from Dell in conjunction with the same model drives from a third-party retailer?
Today we are going to take our study of the RAIDability of contemporary 400GB hard drives to a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Yesterday I went to run an upgrade on MySQL from 3.23 to 4.1.
The upgrade went fine, but after it completed, I could no longer access Plesk. After a lot of hassle I finally went to reinstall Plesk, and somewhere down the line everything crumbled horribly.
I can no longer access any of my sites on port 80; httpd fails to start. When I type
Quote:
/etc/init.d/httpd start
I get
Code:
[rtusrgd@nuinspiration rtusrgd]$ sudo su
[root@nuinspiration rtusrgd]# /etc/init.d/httpd start
Starting httpd: (98)Address already in use: make_sock: could not bind to address [::]:80
no listening sockets available, shutting down
Unable to open logs
[FAILED]
[root@nuinspiration rtusrgd]#

And when I try to run
Quote:
/etc/init.d/psa start
I get
Code:
[root@nuinspiration /]# /etc/init.d/psa restart
PSA is down, performing full restart.
Starting psa-spamassassin service: [ OK ]
websrvmng: Service /etc/init.d/httpd failed to start
websrvmng: Service /etc/init.d/httpd failed to start
/usr/local/psa/admin/bin/httpsdctl start: httpd started
Starting Plesk: [ OK ]
Starting up drwebd: Dr.Web (R) daemon for Linux/Plesk Edition v4.33 (4.33.0.09211)
Copyright (c) Igor Daniloff, 1992-2005
Doctor Web, Ltd., Moscow, Russia
Support service: http://support.drweb.com
To purchase: http://buy.drweb.com
I can still access my sites via ftp, just not via web browsing.
GoDaddy is my server host, and after speaking to them, the only advice they were able to give me was that the only solution they see is to format the drive and start fresh. The only issue is I have about 93 sites on the server and not all of them have hard backups.
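Before wiping anything, there are a few generic things worth checking: "(98)Address already in use" usually just means something else is already bound to port 80, often a stray httpd left over from the broken Plesk reinstall or a duplicate Listen directive. A rough sketch of the checks (not a Plesk-specific fix; the config paths assume a RHEL-style Apache layout):

Code:
# see what is currently listening on port 80
netstat -tlnp | grep ':80 '
# look for stray httpd processes, stop them, then try starting again
ps aux | grep httpd
killall httpd
/etc/init.d/httpd start
# also check for duplicate Listen directives in the Apache/Plesk config
grep -ri '^Listen' /etc/httpd/conf /etc/httpd/conf.d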
Can anyone recommend a good case/enclosure for a SATA RAID array? I would like to build an array using 500GB SATA hard drives. Will the server need much processing power and RAM if I am going to use a decent hardware RAID card? What card would you recommend? Are there any premade SATA arrays that let you just pop in your own hard drives and don't cost thousands of dollars?
Also, can anyone recommend an enclosure for a server that has built-in RAID with 8 SATA ports but only two hard drive bays, if you wanted to use all 8 ports?
I am trying to figure out what file system to use for my server. It has 24 hard drives: 2 run the OS in RAID 1, and the other 22 are in RAID 10. When I was installing the OS (Ubuntu 8), I kept running into problems when I tried to partition and format the second array (the one with the 22 disks in RAID 10); it kept failing on me. I then changed the file system type from ext3 to XFS and it worked fine. I also gave it another try without partitioning/formatting the second array, planning to do it manually once the OS was installed. When I did, it told me that the file system was too large for ext3. So it seems ext3 does have a maximum file system size (on the order of 8-16TB depending on block size and tool versions), and my array exceeds it.
Anyway, I am wondering: is there any other file system that will get me the best performance, mainly I/O performance? I would like to stick with Ubuntu. This server will mainly serve large files for download over HTTP.
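For reference, this is roughly what formatting the big array as XFS after install looks like. It's only a sketch: I'm assuming the RAID 10 unit shows up as /dev/sdb and gets mounted at /data, and both of those names are placeholders for my actual setup:

Code:
# make the filesystem directly on the device (or on a GPT partition for arrays over 2TB)
mkfs.xfs /dev/sdb
mkdir -p /data
mount -o noatime /dev/sdb /data
# make the mount permanent
echo '/dev/sdb  /data  xfs  noatime  0  2' >> /etc/fstab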
I've almost got my server ready to get shipped out to colo.
It's an HP ProLiant ML570 G2, 4x 2.0GHz Xeon, 4GB PC1600 DDR ECC RAM, and a Smart Array 6402 Ultra320 SCSI RAID controller card. The way I currently have the HDs configured is as follows:
I will add one or two more 73.5GB 10k Ultra320 drives to run in RAID 5 or 6.
The 2 36.4GB 15k Ultra320 drives are running in RAID 1. This is the array that I am performing these tests on, -not- the 73.5GB drives.
Anyway, I was a little curious about my performance, so I ran the following tests.
I know hdparm is not really meant for testing SCSI disks, but here is the output:
Code:
# hdparm -tT /dev/cciss/c0d1

/dev/cciss/c0d1:
 Timing cached reads:   1732 MB in  2.00 seconds = 865.91 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
 Timing buffered disk reads:  320 MB in  3.01 seconds = 106.45 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device

Here is the IOZone output:
Code:
	File size set to 4096 KB
	Command line used: /opt/iozone/bin/iozone -s 4096
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                            random  random    bkwd  record  stride
       KB  reclen   write rewrite    read  reread    read   write    read rewrite    read  fwrite frewrite   fread  freread
     4096       4  188852  523178  658943  688853  610887  484275  572466  539010  580498  182914   471075  644937   671916

Then, just for the heck of it, I ran the tests on my little home Dell server with a ~80GB 7.2k RPM 1.5Gb/s SATA drive:
Code:
/dev/sda:
 Timing cached reads:   1680 MB in  2.00 seconds = 839.29 MB/sec
 Timing buffered disk reads:  136 MB in  3.02 seconds = 45.08 MB/sec
Code:
	File size set to 4096 KB
	Command line used: ./iozone -s 4096
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                            random  random    bkwd  record  stride
       KB  reclen   write rewrite    read  reread    read   write    read rewrite    read  fwrite frewrite   fread  freread
     4096       4  152972 1135245 1358510 2132128 1163636 1103448 1184499 1829350 1588205  490774  1006882 1378196  2104830

Can you give me a little insight on my results?
NOTE: The batteries on my Smart Array 6402 controller are dead and will not charge, so they need to be replaced. Since they are dead, the 128MB cache will not enable itself. If the cache were enabled, would I see much different results from these tests?
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 under Windows, does that suggest the controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just keep the controller's memory alive long enough to preserve the drive data it was holding if the power goes out during a write or rebuild?
We damaged a file on a Windows Server 2003 system, which caused us to get a grey screen where the login window should be (it was set up as a domain controller).
We tried booting it in safe mode and all the other modes, etc., but to no avail; we couldn't get it to work.
So we wanted to replace the file manually using another PC (by inserting one of the hard drives into that PC and copying the file to it).
We used a Windows XP machine, imported the hard drive using Disk Management, and had to un-mirror the drive to be able to access it.
Once the mirror was broken I gave it a drive letter, copied the new file to it, removed the drive letter I had assigned, and tried booting with *just* that drive in the Windows 2003 server.
Now it won't boot; it just reboots every time it tries to start up, probably because we broke the mirror on a different machine.
It still boots from the other, untouched drive that was in the mirror, but we have no way to edit the files on there.
So is there any way to actually get this once-mirrored drive to boot now that it's a single disk?
I am in a somewhat complicated situation. I wanted to order a custom server with a 3Ware hardware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS saying the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about software RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many; I am awaiting an answer from them. They don't have any other drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10 array, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
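What I have in mind is something like the sketch below: a small RAID-1 /boot spanning every disk (each member looks like a plain partition, so GRUB can boot from any of them), with the remainder of each disk going into the RAID-10 array. This is untested, and the 8-disk sda-sdh layout with a ~200MB first partition and a large second partition on each drive is just my assumption:

Code:
# /boot: RAID-1 mirrored across all eight small partitions
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
mkfs.ext3 /dev/md0
# everything else: native md RAID-10 over the large partitions
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2
mkfs.ext3 /dev/md1
# install GRUB into the MBR of every disk so the box still boots if the first one dies
for d in a b c d e f g h; do grub-install /dev/sd$d; done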
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though it supports it natively (without having to create RAID-0 on top of RAID-1 pairs) as long as the support is in the kernel, from what I know.
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted on my system, and in the event of a drive failure just pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am considering either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
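The procedure usually described for migrating a live system to md RAID-1 goes roughly like the sketch below. It's only an outline: /dev/sda as the current OS disk and /dev/sdb as a newly added disk are assumptions, and it glosses over the initrd, multiple partitions and bootloader details, so it would need a careful dry run first:

Code:
# 1. copy sda's partition layout onto the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# 2. create a degraded RAID-1 with only the new disk as a member
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
# 3. copy the running system onto the array
mount /dev/md0 /mnt
rsync -aHx / /mnt/
# 4. point /etc/fstab and grub.conf at /dev/md0, rebuild the initrd with RAID
#    support, install GRUB on both disks, and reboot into the array
# 5. add the original disk as the second mirror member and let it resync
mdadm --add /dev/md0 /dev/sda1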
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump up the ram to 2 gb for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But, the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks. But, the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas in general on the server?
When I try to install BotNET 1.0 on my dedicated server, I get this error:
Code:
root@leet [~/botnet/BotNET-1.0]# . install.sh
Compiling source code . . .
In file included from src/main.c:9:
src/../include/bot.h:43: error: array type has incomplete element type
src/../include/bot.h:57: error: array type has incomplete element type
src/../include/bot.h:89: error: array type has incomplete element type
src/main.c: In function:
src/main.c:146: error: type of formal parameter 1 is incomplete

Here is my install.sh file:

Code:
#!/bin/bash
# BotNET installation script.
# If this script causes problems, try "make all" instead.
# Usage: . install.sh

if [ "$bot" != "1" ]; then
    echo "Installation complete."
    echo "Executables will be found in bin/"
else
    echo "Errors encountered during compilation!"
fi
My OS is CentOS 5.x, kernel: Linux 2.6.18-53.el5 #1 SMP Mon Nov 12 02:22:48 EST 2007 i686 i686 i386 GNU/Linux. * I have tried the other ways to install (make all and others) as well. *
Are there any significant differences between 4x 15K SAS HDs in RAID 10 versus 8x 7.2K SATA II HDs in RAID 10? I have the same question for 2x 15K SAS HDs in RAID 1 versus 4x 7.2K SATA II HDs in RAID 10.
I have room for 4 more hard drives in my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can also use 4 drives and gives more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might actually perform worse as software RAID (versus hardware) compared to RAID 5. Would RAID 5 with 4 drives be better for my case?