Areca Vs Adaptec Vs 3ware
Nov 9, 2009

Of the three RAID card brands, which do you think is best and why?
I'm currently looking for a card that can do both SAS and SATA, and have four internal ports.
I want to use a Q9550 with 8 GB RAM,
installed on an X7SBL-LN2 board,
with 500 GB SATA II drives in RAID 10.
Between the Adaptec 2405SAS and the 3ware 9650SE-4LPML,
which one will give better performance?
Has anyone used this card?
If so, is it a decent card? Perform OK?
I know absolutely nothing about RAID cards... I'm trying to learn some here. I know 3ware cards have a command line; does this card have one? What about other Adaptecs?
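For what it's worth, Adaptec's RAID cards do ship a command-line tool, arcconf. A dry-run sketch of the usual first commands; the controller number "1" is an assumption, and the `run` helper only echoes, so nothing touches hardware:

```shell
# Dry-run helper: print commands instead of executing them.
run() { echo "would run: $*"; }

# arcconf is Adaptec's CLI; controller id "1" is an assumption.
run arcconf GETCONFIG 1 AL     # dump controller, array and drive status
run arcconf GETLOGS 1 DEVICE   # pull per-device error logs
```

Replace `run` with direct invocation once the controller number is confirmed.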
I've got a bunch of machines running Adaptec 2015S cards in RAID 1. I cannot seem to get raidutil to work; I get the same error on every command I run. For example:
raidutil -L all
Engine connect failed: Open
They all run CentOS with kernels such as
2.6.15.1 #1 SMP PREEMPT (They are not public facing so don't bother discussing how old the kernel is)
x86_64 x86_64 x86_64 GNU/Linux
So does anyone have any suggestions on this? I've tried everything from what I can find and continue to receive this error.
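A hedged diagnostic sketch: "Engine connect failed" usually means raidutil's management layer cannot reach the controller driver at all. The module name below (dpt_i2o, the Linux driver for I2O-family Adaptec cards like the 2015S) is my assumption, not something stated in the thread:

```shell
# Check whether the dpt_i2o driver is loaded; raidutil cannot talk to the
# card without it. (Module name is an assumption for the 2015S.)
check_dpt() {
    if lsmod 2>/dev/null | grep -q '^dpt_i2o'; then
        echo "dpt_i2o driver loaded - check the /dev nodes and raidutil version next"
    else
        echo "dpt_i2o driver not loaded - try: modprobe dpt_i2o (as root)"
    fi
}
check_dpt
```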
Does anyone know how to fix this error? I have tried upgrading to the latest firmware, but the error won't go away.
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3556 MB in 2.00 seconds = 1778.27 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
Timing buffered disk reads: 206 MB in 3.00 seconds = 68.59 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
[root@cpe-76-173-239-109 ~]# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3520 MB in 2.00 seconds = 1759.39 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
Timing buffered disk reads: 220 MB in 3.01 seconds = 73.00 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
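For context, those HDIO_DRIVE_CMD failures are typically harmless on hardware RAID: the array presents itself as a SCSI disk, so hdparm's ATA-specific ioctls get rejected while the timing numbers themselves remain valid. If the ioctl noise bothers you, a plain dd read is a controller-agnostic alternative for a rough sequential-read check (a sketch; the device path is an assumption and the read is non-destructive):

```shell
# Rough sequential-read check that works on any block device (read-only).
seq_read_test() {
    dev=${1:-/dev/sda}    # device path is an assumption; pass your own
    dd if="$dev" of=/dev/null bs=1M count=64 2>&1 | tail -n 1
}
# example: seq_read_test /dev/sda
```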
I am having quite a challenge getting OpenVZ to work on CentOS 5 with an Adaptec RAID card.
The driver likes the plain-jane kernel... but then:
Configuration [OpenVZ (2.6.18-8.1.14.el5.028stab045.1)]
***NOTICE: /boot/vmlinuz-2.6.18-8.1.14.el5.028stab045.1 is not a kernel. Skipping.
There is also another issue: not being able to disable SELinux.
I have tried the normal routes and even attempted disabling it in rc.sysinit; still, this "security framework" manages to load itself and cause problems.
OpenVZ and SELinux don't get along, even a little bit.
So those are the two (probably separate) issues that prevent the poor server from booting.
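On the SELinux half: a sketch of the two standard CentOS 5 approaches, assuming the stock file paths. If /etc/selinux/config is being ignored, booting with selinux=0 on the kernel line in grub.conf prevents the kernel from loading any policy at all, which sidesteps the userspace fight entirely:

```shell
# Permanent disable via the config file (needs a reboot to take effect).
disable_selinux_cfg() {
    cfg=${1:-/etc/selinux/config}    # stock path is an assumption
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
# Alternative: append selinux=0 to the kernel line in /boot/grub/grub.conf,
# which stops policy loading at boot regardless of userspace settings.
```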
Does anyone know the CLI command for rebuilding a mirrored (RAID 1) array on an Adaptec 2410?
I have got a server with the following specs:
Dual Xeon 2.4 Ghz
6 Gb DDR ECC REG
1 HD 147 Gb 15K
On the board, there is an SCSI adapter which is an Adaptec 7899.
This configuration works perfectly under Windows 2003. However, as per customer request, I have to install CentOS, RedHat or Fedora. Even Debian is OK.
However, during the install the OS finds no hard drives and the installation is aborted.
I googled for some time, and it looks like a million people are searching for a solution for installing Linux on a machine with an AIC-7899.
The installer loads the aic7xxx driver but still doesn't find the device.
I have installed 3dm for checking the status of a 3ware 8086 card, but when going to [url] it doesn't show anything. It seems it cannot connect to port 1080, even though I have turned off the firewall. I have already checked its config file to make sure the port is 1080.
Is there anyone having experience with 3dm software?
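A quick sketch for checking whether the daemon is actually listening before blaming the firewall. Worth noting that 3dm2's stock port is 888, so 1080 only applies if the config change really took effect:

```shell
# Is anything listening on the given TCP port? (netstat availability assumed)
port_listening() {
    if netstat -tln 2>/dev/null | grep -q ":$1 "; then
        echo "something is listening on port $1"
    else
        echo "nothing listening on port $1 - is the 3dm daemon actually running?"
    fi
}
# example: port_listening 1080
```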
When loading an OpenVZ kernel, it loses support for the 3ware 8006 controller. A post on the OpenVZ forums suggests loading the new 3dm2 tools. I tried this, but when I try to install it, it says it can't find the 3dm2 binary.
I downloaded the latest 3dm2 software:
wget [url]
However, after extracting it, when I run the setup (./setupLinux_x64.bin -console) I get "Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."
Can anyone give me the steps for installing 3dm2 on a centos/WHM box?
The card is a 3ware 9690SA.
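That JRE message usually indicates an architecture mismatch between the bundled installer and the host OS. A sketch for confirming both sides before retrying (the installer filename is the one from the post):

```shell
# Show the host architecture and what the installer binary actually is.
show_arch() {
    uname -m                                     # e.g. x86_64 vs i686
    file setupLinux_x64.bin 2>/dev/null || true  # skip quietly if absent
}
show_arch
```

If the two disagree (say, an i686 userland with an x64 installer), grab the matching 3dm2 package instead.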
Not sure if this is too specific for this forum or not, but since I've gotten great advice here in the past I'll give it a shot.
I have a colo'd production server with a 3ware 9500S-12 RAID card and 12 400GB drives attached. The drives form 3 arrays:
1) 2 drive RAID 1 (400GB)
2) 2 drive RAID 1 (400GB)
3) 6 drive RAID 5 (2TB)
plus 2 global hot spares.
For a variety of reasons I need to change this setup so that arrays 1) and 2) remain as-is, while array 3) is removed and those 6 drives are replaced with 6 new 750GB drives in JBOD mode. I've copied all the data from RAID 5 array 3) onto 3 of the new 750GB drives (the 2TB array wasn't completely full), and I have 3 other blank 750GB drives.
What's the best / safest way to do this? Ideally I'd like to remove the 6 old 400GB drives and retain the ability to plug them all back in and get my old array back (in case something goes horribly wrong during the switch).
Do I need to reboot into 3BM (3ware Bios Manager) to do this, or can I do it from the command line?
Is there any problem with having a drive that already contains data written to it by another system, and bringing it up on the 3ware card in JBOD mode with the data intact? (All filesystems are ext3.) I'm not going to have to reformat the drive, am I?
Is there any problem with the new drives being SATAII (Seagate Barracuda ES 750GB) but the old drives (and I think the 3ware card, and certainly my motherboard) being SATAI? I've read that this should "just work" but of course I am nervous! There are no jumpers I can see on the 750GB drives.
Will it be possible to remove the RAID 5 in such a way that I could plug the drives back in and get the array back?
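A hedged dry-run of the tw_cli side of the swap. The unit and port numbers are assumptions; confirm them with `tw_cli /c0 show` first. Deleting a unit removes it from the controller's configuration, and whether the old drives would be recognized again on re-insertion is controller- and firmware-dependent, so treat the old RAID 5 drives as a best-effort fallback, not a backup:

```shell
# Dry-run helper: print commands instead of executing them.
run() { echo "would run: $*"; }

run tw_cli /c0 show         # confirm which unit number is the RAID 5
run tw_cli /c0/u2 del       # delete the RAID 5 unit (u2 is an assumption)
# power down, swap the six 400GB drives for the 750GB drives, power up
run tw_cli /c0 rescan       # let the controller detect the new drives
```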
I have a bunch of 3ware 95** RAID arrays with BBUs. Lately the BBUs are reporting "high" and "too high" temperature alarms a lot.
The DC reports that intake temp is 74 degrees and exhaust is 91 degrees. Since the RAID cards and the BBUs are at the back of the machine, they're getting more hot air than cool.
Probably going to give this a shot in the near future anyway, but just wanted to check whether anyone has tried and had success putting either 3Ware 8006-2LP or 9550SX-4LP cards in Dell PowerEdge 860 systems with a couple of SATA drives instead of using the Dell PERC controllers?
We use 3ware's 8006-2LP SATA RAID controller in each of our servers for RAID 1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the two RAIDed drives are in them).
One of the drives appears to be starting to crap itself, as smartd is reporting issues (although tw_cli shows the RAID on c0 to be OK), including multi-zone errors.
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (a 74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier, and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive, I know, but I've never had to do it before, so I never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3dm) to first 'remove' the failing drive from the RAID, then add the new drive, and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert new drive in carrier and re-insert into server
4) using tw_cli tell the controller to add the new drive to the array and to rebuild the array
Am I anywhere close to being correct? I'm sure there are some of you out there who've done this countless times before with the 3ware controllers and hotswap drive carriers ..
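That sequence looks about right. A hedged sketch of the corresponding tw_cli commands, as a dry run: controller c0, port p0 and unit u0 are assumptions, so check yours with `tw_cli show` first, and the `run` helper only echoes until you swap it out:

```shell
# Dry-run helper: print commands instead of executing them.
run() { echo "would run: $*"; }

run tw_cli /c0/p0 remove                 # 1) drop the failing drive from the unit
# 2) + 3) pull the carrier, fit the new drive, slide it back in
run tw_cli /c0 rescan                    # make the controller detect the new drive
run tw_cli /c0/u0 start rebuild disk=0   # 4) rebuild the mirror onto port 0
```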
Just bought a new OEM 3ware 9500S-12 SATA RAID controller from eBay for $250 - is it a good deal? Does it work with SATA II HDDs?
I'm seeing the following from dmesg:
Quote:
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I see the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test may
disable the write cache on the controller /c0 for up to 24 hours.
Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually done a 3ware battery test on a production system before? Is it OK to do this? I'm looking for someone who has actually performed the test operation, not someone who just assumes it will be OK.
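Before committing to the capacity test, it may help to see what the cache would actually do: the unit's storsave policy determines whether losing BBU protection disables the write cache. A dry-run sketch (controller and unit numbers are assumptions):

```shell
# Dry-run helper: print commands instead of executing them.
run() { echo "would run: $*"; }

run tw_cli /c0/bbu show all      # BBU status, temperature, last test date
run tw_cli /c0/u0 show storsave  # policy deciding if cache is disabled sans BBU
```

If the policy keeps write cache on regardless, the 24-hour warning matters less; otherwise, schedule the test for the lowest-traffic window you have.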
I have a 3ware 9650SE-24M8 RAID Controller. It was working fine for a few days and today while I was changing the RAID configs and installing different OSs, it just stopped working. Now when I boot my machine up it does not even detect any hard drives or RAID controller. I looked inside the box and the LED light on the RAID controller that is usually solid green is now blinking red. I googled for solutions but all searches lead me to useless information such as blinking red lights on the server case.
After seeing a topic a week or so ago discussing RAID cards, I decided to give a hardware RAID card a go to see if performance would increase in one of our boxes.
Just for the simplicity of the test, I have put them into a RAID 0 formation, purely for performance testing and no redundancy. I chose a 3ware RAID card and went for the 2-port 8006-2LP option rather than the 9600 (as they had the 8006-2LP and risers in stock, and what I've always been told is that SATA I vs SATA II is really a selling point rather than any performance increase, but we'll leave that argument there). Because we run mainly Windows systems, I have put on Windows Server 2003 x64 R2. What I am finding after installing it all is that it seems pretty "slow".
The rest of the hardware is a Dual, Quad Xeon (E5410x2), 8GB ram on a Tyan motherboard. Hard drives are 160GB Western Digital 7200 RPM so I can't see quite why it feels like its not running at a peak level.
Does anyone have any applications or software to give this RAID array a proper test? I really don't want to order any more, or roll them out onto the network, only to find that software RAID would have been a better improvement. I did try a burn-in app which tests everything, but according to the 20 seconds I ran it, on average it only transferred at 2 MB/s. That can't be right...
I think one possibility is that the RAID drivers aren't installed correctly, as it's still showing as "Unknown Device" in Device Manager, and it won't let me manually install the drivers for the 3ware device because it doesn't like the OS, even though I have the correct ones and Windows installed fine with them (a bit longer than normal, anyway).
I'm planning to run RAID 1 on my PowerEdge 860 server running CentOS 4.4 (320gb x 2 SATA II).
After doing much research, I've decided to go with 3ware 9550sx.
I have some questions regarding installing the 3ware on an existing system with an OS already installed - How do I go about doing it?
I've read that CentOS 4.4 already has the drivers for it.. Do I just configure the RAID config during the boot up?
Is this overkill for my PE with a Pentium D 2.8GHz?
I have configured a Xen setup on a dual Xeon system with a 3ware 8506 2-port SATA controller.
The array is configured in RAID 1 and I am using LVM.
I get really slow access on the virtual machines; when I create a big ext3 filesystem, the system nearly freezes.
I am not sure exactly which model is in my server, but I know it is in the 3ware 8xxx line.
What is my best option for being alerted to a drive failure? I am running a RAID 1 setup.
I went to 3ware's site and wasn't sure exactly which software to download/install.
I am on Centos 5.2 32bit and have Cpanel/WHM installed.
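For the 3ware 8xxx line, the pieces are the kernel driver, the tw_cli command-line tool, and 3dm2 (which can send e-mail alerts itself). If 3dm2 won't cooperate, here is a minimal cron-able sketch: feed it `tw_cli /c0 show unitstatus`-style output and it prints only unhealthy units, so cron's default mail-on-output behavior becomes the alert. The exact output format varies by firmware, so the patterns below are assumptions:

```shell
# Print only unit lines that are not OK/VERIFYING; silence means healthy.
unhealthy_units() {
    grep -E '^u[0-9]' | grep -vE 'OK|VERIFYING' || true
}
# cron example (controller c0 is an assumption):
#   */15 * * * * tw_cli /c0 show unitstatus | unhealthy_units
```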
I am configuring a new file server, with 11 drives and two 3ware 8506-8 RAID cards.
I would like to have one hot spare shared between the two cards.
I have read in the docs that 3ware has "multiple card support", but I can't find anything in the documentation stating how to have two cards manage one big array.
Is it actually supported, or is "multiple card support" just marketing-speak for "you can put two cards in a system without conflict"?
My server's drives are configured with RAID 1.
How can I check whether my server is configured with a 3ware card or software RAID?
Also, please advise how I can monitor the RAID configuration to confirm that the array is working fine.
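A sketch for telling the two apart: a 3ware controller shows up on the PCI bus, while Linux software RAID shows up in /proc/mdstat:

```shell
# Classify the RAID setup and suggest the matching monitoring tool.
raid_kind() {
    if lspci 2>/dev/null | grep -qi '3ware'; then
        echo "hardware RAID (3ware) - monitor with tw_cli or 3dm2"
    elif grep -qs '^md' /proc/mdstat; then
        echo "software RAID (md) - monitor with /proc/mdstat or mdadm --detail"
    else
        echo "no RAID found by this check"
    fi
}
raid_kind
```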