I have installed 3DM to check the status of a 3ware 8086 card, but when I go to [url] it doesn't show anything. It seems it can't connect to port 1080, even though I have turned off the firewall. I have already checked its config file to make sure the port is 1080.
Does anyone have experience with the 3DM software?
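For what it's worth, a quick way to see whether the daemon is even up and listening (the init script name below is an assumption - older installs ship 3dmd, newer ones 3dm2) would be something like:
Code:
ps ax | grep -i 3dm          # is the 3DM daemon running at all?
netstat -tlnp | grep 1080    # is anything actually listening on port 1080?
/etc/init.d/3dm2 restart     # or "3dmd" on older installs - then re-check the port
tail /var/log/messages       # any errors logged when it starts?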
Not sure if this is too specific for this forum or not, but since I've gotten great advice here in the past I'll give it a shot.
I have a colo'd production server with a 3ware 9500S-12 RAID card and 12 400GB drives attached. The drives form 3 arrays:
1) 2 drive RAID 1 (400GB)
2) 2 drive RAID 1 (400GB)
3) 6 drive RAID 5 (2TB)
plus 2 global hot spares.
For a variety of reasons I need to change this setup so that arrays 1) and 2) remain as is, and array 3) is removed and those 6 drives replaced with 6 new 750GB drives in JBOD mode. I've copied all the data from RAID 5 array 3) onto 3 of the new 750GB drives (the 2TB array wasn't completely full), and I have 3 other blank 750GB drives.
What's the best / safest way to do this? Ideally I'd like to remove the 6 old 400GB drives and retain the ability to plug them all back in and get my old array back (if something goes horribly wrong during the switch).
Do I need to reboot into 3BM (3ware Bios Manager) to do this, or can I do it from the command line?
Is there any problem with having a drive that already contains data written to it by another system, and bringing it up on the 3ware card in JBOD mode with the data intact? (All filesystems are ext3.) I'm not going to have to reformat the drive, am I?
Is there any problem with the new drives being SATAII (Seagate Barracuda ES 750GB) but the old drives (and I think the 3ware card, and certainly my motherboard) being SATAI? I've read that this should "just work" but of course I am nervous! There are no jumpers I can see on the 750GB drives.
Will it be possible to remove the RAID 5 in such a way that I could plug the drives back in and get the array back?
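For the command-line side, a rough (and very much unverified - check it against `tw_cli /c0 show` and the 9500S manual before touching production) sequence would look something like this, where u2 stands in for whatever unit number your 6-drive RAID 5 actually is:
Code:
tw_cli /c0 show                       # confirm which unit is the 6-drive RAID 5
tw_cli /c0/u2 remove                  # "remove" exports the unit (array roaming) rather than deleting it,
                                      # so the old 400GB drives should be re-importable after a rescan
                                      # (pull the six 400GB drives, insert the six 750GB drives)
tw_cli /c0 rescan                     # make the controller pick up the new drives
tw_cli /c0 add type=single disk=<port>   # one single-disk unit per new drive (repeat for each port) -
                                         # the 9000-series' usual stand-in for JBOD
Whether the ext3 data you pre-loaded onto the 750s stays readable depends on whether the card exposes them as true JBOD or as single-disk units with its own metadata, so I'd verify that on one scratch drive before committing the whole set.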
Probably going to give this a shot in the near future anyway, but just wanted to check whether anyone has tried and had success putting either 3Ware 8006-2LP or 9550SX-4LP cards in Dell PowerEdge 860 systems with a couple of SATA drives instead of using the Dell PERC controllers?
We use 3ware's 8006-2LP SATA RAID controller in each of our servers for RAID 1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the 2 RAIDed drives sit in them).
One of the drives appears to be starting to crap itself, as smartd is reporting issues (although tw_cli c0 shows the RAID to be OK), including multi-zone errors.
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive I know, but I've never had to do it before so never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3DM) to first 'remove' the failing drive from the RAID, and then add a new drive and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert new drive in carrier and re-insert into server
4) using tw_cli tell the controller to add the new drive to the array and to rebuild the array
Am I anywhere close to being correct? I'm sure there are some of you out there who've done this countless times before with the 3ware controllers and hotswap drive carriers ..
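Your four steps look about right to me. As a sketch only - the exact syntax differs between the older 7000/8000-series "maint" style and the newer /cx/px style, so check `tw_cli help` on your box first - it would be roughly:
Code:
tw_cli info c0                  # note the unit (u0) and the port of the failing drive (p0)
tw_cli maint remove c0 p0       # tell the controller to drop the failing drive from the array
                                # (pull the carrier, fit the new 74GB Raptor, slide it back in)
tw_cli maint rescan c0          # make the controller see the new drive
tw_cli maint rebuild c0 u0 p0   # start the rebuild onto the new drive
tw_cli info c0                  # run again periodically to watch rebuild progress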
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I see the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test may disable the write cache on the controller /c0 for up to 24 hours. Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually run a 3ware battery test on a production system before? Is it OK to do this? I'm looking for someone who has actually performed the test operation, not someone who just assumes it will be OK.
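I haven't run it on a box full of live VPSs myself, but before deciding you can at least look at what the warning is referring to; these are standard 9000-series tw_cli queries (double-check against your CLI version's help):
Code:
tw_cli /c0/bbu show all    # BBU status, capacity estimate, last capacity test date, temperature
tw_cli /c0/u0 show all     # per-unit settings, including the write cache state (and, on newer
                           # firmware, the storsave policy the warning mentions)
The key question is what happens to your workload if the write cache is forced off for up to 24 hours, which only you can judge for that server.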
I have a 3ware 9650SE-24M8 RAID controller. It was working fine for a few days, and today, while I was changing the RAID configs and installing different OSes, it just stopped working. Now when I boot my machine up it does not even detect any hard drives or the RAID controller. I looked inside the box and the LED on the RAID controller that is usually solid green is now blinking red. I googled for solutions but all searches led me to useless information, such as blinking red lights on the server case.
After seeing a topic a week or so ago discussing RAID cards, I decided to give a hardware RAID card a go to see if the performance would increase in one of our boxes.
Just for the simplicity of the test, I have put the drives into a RAID 0 formation purely for performance tests, with no redundancy. I chose a 3ware RAID card and went for the 2-port 8006-2LP option rather than the 9600 (they had the 8006-2LP and risers in stock, and what I've always been told is that SATA I vs. SATA II is really a selling point rather than any performance increase, but we will leave that argument there). Because we run mainly Windows systems, I have put on Windows Server 2003 x64 R2. What I am finding after installing it all is that it seems pretty "slow".
The rest of the hardware is dual quad-core Xeons (2x E5410) with 8GB RAM on a Tyan motherboard. The hard drives are 160GB Western Digital 7200 RPM, so I can't quite see why it feels like it's not running at peak level.
Does anyone have any applications or software to give this RAID array a proper test? I really don't want to order any more, or roll them out onto the network, only to find that software RAID would have been the better option. I did try a burn-in app which tests everything, but according to the 20 seconds I ran it for, on average it only transferred at 2mbs... That can't be right...
I think one possibility is that the RAID drivers aren't installed correctly, as it's still coming up as "Unknown Device" in Device Manager, and it won't let me manually install the drivers for the 3ware device as it doesn't like the OS, even though I have the correct ones and Windows installed with them fine (it took a bit longer than normal, anyway).
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
I know absolutely nothing about RAID cards.. trying to learn some here. I know 3ware cards have a command line, does this card? What about other adaptecs?
I've got a bunch of machines running Adaptec 2015S cards in RAID-1. I cannot seem to get it to work; I get the same error on every command I run, for example:
raidutil -L all
Engine connect failed: Open
They all run CentOS with kernels such as 2.6.15.1 #1 SMP PREEMPT (They are not public facing so don't bother discussing how old the kernel is) x86_64 x86_64 x86_64 GNU/Linux
So does anyone have any suggestions on this? I've tried everything I could find and continue to receive this error.
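"Engine connect failed" usually means raidutil can't reach the management interface rather than anything being wrong with the array itself. A few generic sanity checks, assuming the 2015S is driven by the dpt_i2o module (adjust the names if your setup differs):
Code:
lsmod | grep -i -e dpt -e i2o       # is the dpt_i2o / i2o driver actually loaded?
dmesg | grep -i -e dpt -e adaptec   # did the kernel see the card at boot?
ls -l /dev/dpti*                    # does the management device node raidutil opens exist?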
***NOTICE: /boot/vmlinuz-2.6.18-8.1.14.el5.028stab045.1 is not a kernel. Skipping.
There is also another issue: not being able to disable SELinux. I have tried the normal routes and even attempted disabling it in rc.sysinit... still, this "security framework" is able to load and cause problems.
OpenVZ and SELinux don't get along... even a little bit.
So those are the two, probably separate, issues that prevent the poor server from booting.
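In case it's useful, the two standard knobs for disabling SELinux on a RHEL/CentOS 5 box are the config file and the kernel command line; the boot parameter is the heavier hammer and tends to win even when the config file seems to be ignored:
Code:
# /etc/selinux/config (aka /etc/sysconfig/selinux)
SELINUX=disabled

# /boot/grub/grub.conf - append selinux=0 to the kernel line of the OpenVZ entry,
# keeping whatever root= and other options are already there
kernel /vmlinuz-2.6.18-8.1.14.el5.028stab045.1 ro root=... selinux=0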
On the board, there is an SCSI adapter which is an Adaptec 7899.
This configuration is working perfectly under Windows 2003. However, as per customer request, I have to install CentOS, RedHat or Fedora. Even Debian is OK.
However, during the install the OS finds NO hard drives and the installation is aborted.
I googled for some time and it looks like there are a million people looking for a solution on how to install Linux on a machine with an AIC-7899.
The installer loads the aic7xxx driver but doesn't find the device anyway.
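One thing worth trying from the installer's shell (Alt+F2 in anaconda) or a rescue CD before giving up, just to see whether the kernel can be coaxed into seeing the controller at all - nothing here is AIC-7899 specific:
Code:
lspci | grep -i adaptec   # is the 7899 visible on the PCI bus at all?
modprobe aic7xxx          # load the driver by hand
dmesg | tail -30          # look for the driver claiming (or complaining about) the device
It's also worth checking in the board's BIOS/SCSI setup that the onboard channels are actually enabled.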
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is would this RAID controller be as fast or reliable compared to a dedicated PCI-E card? If it can only use RAID 5 in windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6; would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it - what does this achieve? Because surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough, with some hard drive information in its memory, if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3ware RAID controller, but after over a month of waiting I was told that the HW RAID controller, as well as any other 3ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different web host or set up a software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4Ghz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSe, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6 but I am not sure if their server chassis can hold that many HDDs, I am awaiting answer from them. They don't have any other drives beside the 250GB ones so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
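For what it's worth, that is the usual trick: a small RAID-1 /boot mirrored across the drives (GRUB is happy with that because each member still looks like a plain ext3 partition), with the rest of each disk going into the RAID-10. A rough mdadm sketch for a four-drive subset, with made-up partition names (sdX1 for /boot, sdX3 for the main array):
Code:
mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # /boot, ~100-200MB per disk
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]3   # everything else
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
grub-install /dev/sda   # repeat for sdb, sdc, sdd so the box still boots if the first disk dies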
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID-1 swap partition on each of the disks, or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't really even mention RAID-10, even though, from what I know, it does support it natively (without having to create RAID-0 on top of RAID-1 pairs) if the support is in the kernel.
When trying to load an OpenVZ kernel, it loses support for the 3ware 8006 controller. As suggested on the OpenVZ forums, the fix is to load the new 3dm2 tools. I tried this, but when I try to install it, it says it can't find the 3dm2 binary.
However, after extracting it, when I run the setup (./setupLinux_x64.bin -console) I get "Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."
Can anyone give me the steps for installing 3dm2 on a centos/WHM box?
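"Bundled JRE is not binary compatible" usually points at an architecture mismatch between the installer and the OS, so the first thing I'd confirm is what you actually have (plain generic commands, nothing 3ware-specific):
Code:
uname -m                    # x86_64 or i686 - what the kernel/userland really is
file ./setupLinux_x64.bin   # does the installer binary's architecture match?
If they don't match, grabbing the matching installer from 3ware's download page is the obvious next step.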
I have a bunch of 3ware 95** RAID arrays with BBUs. Lately the BBUs have been reporting high and too-high temperatures a lot.
The DC reports that the intake temp is 74 degrees and the exhaust is 91 degrees. Since the RAID cards and the BBUs are at the back of the machine, they're getting more hot air than cool air.
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted to my system and, in the event of a system failure, pop in a new hard drive, reload the OS, and then reload all my backups?
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
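It can be done remotely, but it's one of the riskier things to do without console access. The usual approach (very rough sketch below, with hypothetical device names and assuming a second identical disk at /dev/sdb) is to build a degraded RAID-1 on the spare disk, copy the live system onto it, boot from the degraded array, and only then add the original disk as the second mirror member:
Code:
# partition sdb to match sda (partition type fd, Linux raid autodetect), then:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1   # degraded mirror, one slot "missing"
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/               # copy the running system across
# edit /mnt/etc/fstab and grub.conf to boot from /dev/md0, reinstall GRUB on both disks, reboot
mdadm /dev/md0 --add /dev/sda1   # finally pull the original disk into the mirror and let it resync
Have a rescue plan (KVM or a DC tech) lined up before the reboot step, because that's where remote conversions tend to go wrong.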