Not sure if this is too specific for this forum or not, but since I've gotten great advice here in the past I'll give it a shot.
I have a colo'd production server with a 3ware 9500S-12 RAID card and 12 400GB drives attached. The drives form 3 arrays:
1) 2 drive RAID 1 (400GB)
2) 2 drive RAID 1 (400GB)
3) 6 drive RAID 5 (2TB)
plus 2 global hot spares.
For a variety of reasons I need to change this setup so that arrays 1) and 2) remain as-is, and array 3) is removed and those 6 drives replaced with 6 new 750GB drives in JBOD mode. I've copied all the data from RAID 5 array 3) onto 3 of the new 750GB drives (the 2TB array wasn't completely full), and I have 3 other blank 750GB drives.
What's the best / safest way to do this? Ideally I'd like to remove the 6 old 400GB drives and retain the ability to plug them all back in and get my old array back (if something goes horribly wrong doing the switch.)
Do I need to reboot into 3BM (3ware Bios Manager) to do this, or can I do it from the command line?
Is there any problem with having a drive that already contains data written to it by another system, and bringing it up on the 3ware card in JBOD mode with the data intact? (All filesystems are ext3.) I'm not going to have to reformat the drive, am I?
Is there any problem with the new drives being SATAII (Seagate Barracuda ES 750GB) but the old drives (and I think the 3ware card, and certainly my motherboard) being SATAI? I've read that this should "just work" but of course I am nervous! There are no jumpers I can see on the 750GB drives.
Will it be possible to remove the RAID 5 in such a way that I could plug the drives back in and get the array back?
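For reference, the unit housekeeping can be done from tw_cli without rebooting into 3BM. Below is a rough sketch; the controller and unit numbers (c0, u2) are assumptions, so check the `show` output first, and the `del` step removes the unit definition, so only run it once your copies are verified, since getting the old array back after that is not something to count on:

tw_cli /c0 show           # list units, ports and hot spares on the controller
tw_cli /c0/u2 show all    # details of the unit being retired (assumed u2)
tw_cli /c0/u2 del         # delete the RAID 5 unit definition (destructive)
tw_cli /c0 rescan         # after inserting the 750GB drives, detect them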
I have installed 3dm for checking the status of my 3ware 8086 card, but when going to [url] it doesn't show anything. It seems it cannot connect to port 1080, even though I have turned off the firewall. I have already checked its config file to make sure the port is 1080.
Does anyone have experience with the 3dm software?
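In case it helps, here are the sanity checks I'd run first; the config path and service name below are the usual 3DM2 locations on Linux, so treat them as assumptions, and note that 3DM2 normally serves HTTPS, so the URL needs to be https://host:port:

ps aux | grep -i 3dm                 # is the daemon actually running?
netstat -tlnp | grep 3dm             # which port is it really listening on, if any?
grep -i port /etc/3dm2/3dm2.conf     # confirm the configured port (usual config location)
/etc/init.d/3dm2 restart             # restart after any config change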
I'm probably going to give this a shot in the near future anyway, but I just wanted to check: has anyone tried (and had success) putting either 3Ware 8006-2LP or 9550SX-4LP cards in Dell PowerEdge 860 systems with a couple of SATA drives, instead of using the Dell PERC controllers?
We use 3Ware's 8006-2LP SATA RAID controller in each of our servers for RAID 1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the 2 mirrored drives sit in them).
One of the drives appears to be starting to fail: smartd is reporting issues, including multi-zone errors, although tw_cli shows the /c0 RAID to be OK.
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (a 74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive I know, but I've never had to do it before so never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3DM) to first 'remove' the failing drive from the RAID, then add the new drive, and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert new drive in carrier and re-insert into server
4) using tw_cli tell the controller to add the new drive to the array and to rebuild the array
Am I anywhere close to being correct? I'm sure there are some of you out there who've done this countless times before with the 3ware controllers and hotswap drive carriers ..
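For what it's worth, a sketch of that sequence using the 9000-series tw_cli object syntax is below. Port and unit numbers are assumptions, and older CLI builds for the 8006 use the `maint` form instead (tw_cli maint remove c0 p0 / maint rescan c0 / maint rebuild c0 u0 p0), so check `tw_cli help` for the version you actually have:

tw_cli /c0 show                      # confirm which unit and port the failing drive is on
tw_cli /c0/p0 remove                 # release the drive on port 0 from the controller
# ...pull the carrier, swap in the new drive, slide it back in...
tw_cli /c0 rescan                    # make the controller pick up the new drive
tw_cli /c0/u0 start rebuild disk=0   # rebuild unit u0 onto the drive on port 0
tw_cli /c0/u0 show                   # watch the rebuild percentage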
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I see the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test
may disable the write cache on the controller /c0 for up to 24 hours.
Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually done a 3ware battery test on a production system before? Is it OK to do? I'm looking for someone who has actually performed the test, not someone who just assumes it will be fine.
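For reference, the relevant tw_cli bits are below; controller/unit numbers are assumptions, and `quiet` skips the interactive prompt. The practical risk is exactly what the prompt says: with storsave set to protect, the unit write cache can stay disabled for the duration of the test, which is a real performance hit on a loaded VPS node:

tw_cli /c0/bbu show all        # current BBU status, temperature, last capacity test
tw_cli /c0/u0 show storsave    # how each unit reacts to the BBU being unavailable
tw_cli /c0/bbu test quiet      # start the capacity test without the Y/N prompt
tw_cli /c0 show                # afterwards, check whether unit caches were turned off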
I have a 3ware 9650SE-24M8 RAID Controller. It was working fine for a few days and today while I was changing the RAID configs and installing different OSs, it just stopped working. Now when I boot my machine up it does not even detect any hard drives or RAID controller. I looked inside the box and the LED light on the RAID controller that is usually solid green is now blinking red. I googled for solutions but all searches lead me to useless information such as blinking red lights on the server case.
After seeing a topic a week or so ago discussing RAID cards, I decided to give a hardware RAID card a go to see if performance would increase in one of our boxes.
Just for the simplicity of the test I have put the drives into a RAID 0 configuration, purely for performance testing and with no redundancy. I chose a 3ware RAID card and went for the 2-port 8006-2LP option rather than the 9600 (they had the 8006-2LP and risers in stock, and what I've always been told is that SATA I vs SATA II is really a selling point rather than any real performance increase, but we will leave that argument there). Because we run mainly Windows systems, I have put on Windows Server 2003 x64 R2. What I am finding after installing it all is that it seems pretty "slow".
The rest of the hardware is a dual quad-core Xeon (2x E5410) with 8GB RAM on a Tyan motherboard. The hard drives are 160GB Western Digital 7200 RPM, so I can't quite see why it feels like it's not running at peak level.
Does anyone have any applications or software to give this RAID array a proper test? I really don't want to order any more, or roll them out onto the network, only to find that software RAID would have been the better choice. I did try a burn-in app which tests everything, but according to the 20 seconds I ran it, it only averaged about 2 MB/s... that can't be right.
I think one possibility is that the RAID drivers aren't installed correctly, as the card is still showing as "Unknown Device" in Device Manager, and it won't let me manually install the drivers for the 3ware device because it doesn't like the OS, even though I have the correct ones and Windows installed with them fine (it took a bit longer than normal, anyway).
I am in a somewhat complicated situation... I wanted to order a custom server with hardware 3Ware RAID controller but after over a month of waiting I was told the HW RAID controller, as well as any other 3Ware controller they tried, does not work with the motherboard used in the server from Fujitsu-Siemens and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer a HW raid, I am forced to either choose a different webhost or setup a software RAID. The problem is, I haven't done that before and am somewhat moderately...scared
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4Ghz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSe, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6 but I am not sure if their server chassis can hold that many HDDs, I am awaiting answer from them. They don't have any other drives beside the 250GB ones so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple...if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
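For what it's worth, a minimal sketch of how that layout is usually created with mdadm, assuming 8 data drives sda-sdh plus a 9th spare sdi, each partitioned with a small first partition for /boot and a second partition for the main array (device names and sizes here are assumptions):

# small RAID-1 across all drives for /boot, so any drive can hold a bootable copy
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# everything else as RAID-10, with the spare drive's partition as a hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 \
      /dev/sd[a-h]2 /dev/sdi2

mkfs.ext3 /dev/md0                        # /boot
mkfs.ext3 /dev/md1                        # /
mdadm --detail --scan >> /etc/mdadm.conf  # so the arrays assemble at boot

One caveat: GRUB only ever reads one member of the /boot mirror, so it still needs to be installed to the MBR of every drive you want the box to be able to boot from.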
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID-1 swap partition on each of the disks, or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively (without having to build RAID-0 on top of RAID-1 pairs) if the support is in the kernel.
We have a client that has a mail server with two drives. One hard disk is devoted to the OS/applications (C:) and one is devoted to mail storage only (D:).
The goal is to mirror the D: drive, which is a 320GB SATA drive, i.e. add another drive and a RAID card and make D: a RAID 1 mirror.
My understanding is that when a RAID array is configured on drives, the drives lose whatever data is on them? Is there no other way to convert a single-drive setup into a RAID mirror (by adding a drive and a card) without losing the data on the existing drive?
I have 2x250gb drives, and this is my output of fdisk -l:
Quote:
Disk /dev/hda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14       30401   244091610   8e  Linux LVM

Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1       30401   244196001   8e  Linux LVM
I see that LVM is some sort of volume manager, and it looks like the 2 drives are indeed set up with LVM, but I can't really tell what that means or how they are set up.
I'm looking to have them in a RAID-1 setup - a copy across drives so that it will continue working even if a drive fails.
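Before deciding anything, it's worth seeing exactly how the existing LVM is laid out; note that LVM on its own is not redundancy, so if both drives are simply physical volumes in one volume group, losing either drive loses data. A few read-only commands to check (nothing here changes anything):

cat /proc/mdstat    # any existing md software RAID?
pvdisplay           # which partitions are LVM physical volumes
vgdisplay           # volume groups and their total sizes
lvdisplay           # logical volumes carved out of them
df -h               # what is actually mounted where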
We've got our new machine up and running smoothly. It's a Core2Duo running CentOS with Plesk 8.3. We have 2 drives in a RAID 1 configuration for reliability's sake, and I'm looking for something to help me monitor the status of the disks in the array, since RAID is pretty useless unless you know when one of the disks has died.
Is there some open source utility that will monitor this and email me if something is wrong? Perhaps something with a nice web interface?
Even better... is there some addon to plesk that will help with this?
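If the mirror is Linux software RAID (md), mdadm itself can do the monitoring and mailing; if it's a hardware card, the vendor tool (e.g. 3DM2 for 3ware) has its own email alerts instead. A minimal sketch for the md case, with the address obviously being a placeholder:

cat /proc/mdstat                   # quick manual check of array health

# daemonised monitor that mails on fail/degraded events; on CentOS the
# mdmonitor service does the same thing once MAILADDR is set in /etc/mdadm.conf
mdadm --monitor --scan --daemonise --mail=you@example.com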
I have a question regarding hard drives and performance. I only use the server for forums, and currently it's only one site (hopefully a couple more in no time).
Currently I have 2x 36GB SAS drives in RAID 1, obviously containing everything including the DBs and /home, and a third 250GB drive for backups only ^^ Ronny did an excellent job setting this up.
Anyway, my problem is that I want to allow some attachments on my forums, and this would take a significant amount of space: over 1GB easily, and then it would keep increasing (that's gonna suck for bandwidth). I know it will fit on the SAS drives no problem; the DBs are rather small at the moment (2.5GB in total) but the logs are quite big, 5-10GB in total.
I thought it might be a good idea to purchase another drive. This 4th drive would be a 750GB; backups would move there, and the 250GB would be used for the /home directory. This would give a lot of room for uploads and backups, and keep the fast drives for the OS and DBs.
I was told, however, and understandably, that a lot of performance would be lost by moving /home to a SATA drive. I know SATA is nowhere near as fast, but vBulletin can't upload attachment files to a folder outside its hierarchy (without complicated modifications). (Note: I didn't specify my reasons for wanting such a setup.)
So I'm in a bit of a pickle. Having the bigger drive would allow me to have the attachments, and should eventually result in more traffic to my site. /home is currently only 150MB, but then performance is also an issue. Pity I couldn't afford the bigger drives at the time (I see the point of renting over buying now).
Is there a way to have /var/log/httpd save those massive logs on another drive? It would free up 5-10GB.
In short: is moving /home from the RAID 1 SAS to a SATA drive a bad idea (considering space and purpose)?
Could httpd logs or /var/log in general be moved to the backup/another drive?
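Moving the logs is straightforward. A rough sketch, assuming the backup drive is already mounted at /backup (the paths are assumptions, adjust to taste); alternatively, just point CustomLog/ErrorLog in httpd.conf at the new location and skip the symlink:

/etc/init.d/httpd stop
mkdir -p /backup/httpd-logs
rsync -a /var/log/httpd/ /backup/httpd-logs/   # copy existing logs across
mv /var/log/httpd /var/log/httpd.old           # keep the originals until verified
ln -s /backup/httpd-logs /var/log/httpd        # Apache keeps writing to the same path
/etc/init.d/httpd start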
I have a Red Hat 8 server with RAID 1 setup and now I need more HD space.
I managed to replace both 40 gb disks on the raid array with 200 gb disks and the system stayed perfectly alive, BUT I still have only 40 gb of space available (and 160 gb of space somewhere hiding).
Is there a method on Red Hat 8 to get the rest of the disk in use?
At first I replaced one 40 gb disk with a new (empty) 200 gb disk and then, in the RAID setup, mirrored the old disk onto the new one. After that I replaced the other old 40 gb disk with a new 200 gb disk and did the mirroring again. I got both new 200 gb disks working in the RAID 1 array, except for that little problem with the space available on Red Hat 8..
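For the record, the usual sequence on a modern distro is sketched below. On Red Hat 8 it only applies if mdadm (rather than the old raidtools) is installed, if the new partitions were actually created at the full 200 GB (if the old 40 GB partition table was cloned, the partitions have to be enlarged first), and since a kernel that old likely lacks online ext3 resize, the filesystem step probably has to be done unmounted or from rescue media. Device and mount point names are assumptions:

cat /proc/mdstat                   # confirm the array name and current size
mdadm --grow /dev/md0 --size=max   # grow the mirror to fill the larger partitions
umount /data                       # then grow the filesystem offline
e2fsck -f /dev/md0
resize2fs /dev/md0
mount /data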
I am currently in the process of upgrading my web/mysql server due to heavy loads and io waits and have some questions. I am trying to be cost efficient but at the same time do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a Centos Linux box, running both apache and mysql. The current usage on the box is:
Mysql Stats:
50 MySQL queries per second, with a read-to-write ratio of 2:1.
Reads are about 65 MB per hour and writes are around 32 MB per hour.
Apache stats:
35 requests per sec
The two issues that I am unsure of are:
- Whether or not I should go with RAID-1 or RAID-5
- Whether or not I should use SATA Raptor drives or SAS drives.
In either configuration I will use a dedicated Raid controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though read is definitely more important based on my stats, I don't want to lose performance in my writes either. With this in mind, I looked into doing RAID-1 instead.
I came up with these choices:
- Raid-1 - 2 x Seagate ST373455SS Seagate Cheetah 15K.5 SAS. HDs & controller costs are $940.
- Raid-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs & controller costs are $652.
- Raid-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB. HDs & controller costs are $869.
- Raid-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs & controller costs are $631.
As you can see we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a 4th hotspare drive, but will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the 4th immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can enable database applications to perform better due to a lot of small and random reads and writes?
I recently built a server with an Asus M2N-MX SE motherboard and a SuperMicro 14" mini 1U chassis. On the back of the Asus M2N-MX SE manual, it says that for the RAID driver I need to create it from the included CD and use a floppy disk. My question is: how can I do this without a floppy disk? I have an external DVD burner that I hook up over USB to install the OS. Is it possible to use a CD to install the driver when I press F6 during the Windows 2003 installation?
Also, is it worth the effort to set up RAID 1? I have two Maxtor 500GB SATA disks, and using RAID 1 seems to cost me one disk and leave me with 500GB worth of space. And is the onboard Nvidia RAID trustworthy? The manual says that due to a chipset limitation, the SATA ports supported by the Nvidia chipset don't support serial optical disk drives (Serial ODD).
I've taken the scalable approach when it comes to servers for my various sites. With shared servers, I never really worried about backup or even hard drives going down. Same goes for VPS. For some reason, when I moved to dedicated servers, I outfitted them with 74GB SATA drives in a RAID setup. My understanding is that it protects me if one drive happens to fail. I've been lucky and haven't had that problem.
I'm at the point now where I'm looking to upgrade from a VPS paying around $75 per month to a dedicated server. I can stand to be down a day if a hard drive goes, if it means $75 a month in savings. My biggest concern would be suggestions on the best way to protect myself in the event of a catastrophe.
Contacted SoftLayer about possibly adding a second server for me and honoring the price I'm paying on my old server.
Finally, both the old and new site are seeing roughly 3,000 visits per day. The server I'm considering is a Clovertown 5320 1.86 dual quadcore, 4GB RAM, RAID, 2 74GB Cheetah drives,100mbps, 2000GB bandwidth. Is this overkill or the right server for the job?
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is would this RAID controller be as fast or reliable compared to a dedicated PCI-E card? If it can only use RAID 5 in windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Are Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6, would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also it seems to have a battery module available for it, what does this achieve? Cos surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough with some hard drive information in its memory if the power goes out during a rebuild?
When trying to load an ovzkernel, the system loses support for the 3ware 8006 controller. The suggestion on the OpenVZ forums is to load the new 3dm2 tools. I tried this, but when I try to install it, it says it can't find the 3dm2 binary.
However, after extracting it, when I run the setup (./setupLinux_x64.bin -console) I get "Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."
Can anyone give me the steps for installing 3dm2 on a centos/WHM box?
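In case it helps, the "Bundled JRE is not binary compatible" error usually points to an architecture mismatch between the installer and the OS. The checks below are the usual workaround; the 32-bit installer name is an assumption based on what normally ships in the 3DM2 tarball:

uname -m                        # x86_64 or i686? match the installer to this
./setupLinux_x86.bin -console   # if the x64 bundled JRE refuses to run, the 32-bit installer sometimes works
ps aux | grep -i 3dm2           # afterwards, confirm the daemon is running
netstat -tlnp | grep 888        # 3DM2 defaults to HTTPS on port 888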
I have a bunch of 3Ware 95** RAID arrays with BBUs. Lately the BBUs have been reporting "high" and "too high" temperatures a lot.
The DC reports that the intake temp is 74 degrees and the exhaust is 91 degrees. Since the RAID cards and the BBUs are at the back of the machine, they're getting more hot air than cool.
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted to my system, and in the event of a system failure just pop in a new hard drive, reload the OS, and then reload all my backups.