Use SATA-II HDDs With 3ware 9500S
Aug 21, 2007
Just bought a new OEM 3ware 9500S-12 SATA RAID controller from eBay for $250. Is it a good deal? Does it work with SATA-II HDDs?
About the HDDs, there are two options: the first is four 7200 RPM SATA drives in RAID 10,
the second is two 10,000 RPM SATA drives in RAID 1. Performance-wise, which one will be better?
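If you get the chance to test either build, a rough comparison is possible straight from the shell; this is only a sketch, and the device name and file path are assumptions:
hdparm -Tt /dev/sda                                            # rough sequential read throughput of the array
dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct   # rough sequential write throughput
For database-style workloads, random I/O matters more than these sequential numbers, so treat them only as a sanity check.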
I'm currently in the process of ordering a new server and would like to throw another $50-$70 at the default SATA II 7k 250 GB to increase performance. The server will host a site similar to WHT (PHP, MySQL, and some forum traffic).
There are three options I can get for the price:
1. Add another SATA II 7k 250 GB and set up RAID 1
2. Add a 73GB 15k RPM SA-SCSI and put MySQL on it. No RAID.
3. Toss out the SATA II 7k and take two SATA 10k 150 GB instead. Put MySQL on one of them (see the datadir sketch below). No RAID.
Please keep in mind that the question is budget-related (I know I can get more if I spend an extra $200, but that's not what I want). Which of the above will make me happiest?
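For options 2 or 3, moving the MySQL data directory onto the separate disk would look roughly like this; it is only a sketch, the mount point /mnt/dbdisk is an assumption, and the service name may be mysqld depending on the distro:
/etc/init.d/mysql stop
cp -a /var/lib/mysql /mnt/dbdisk/mysql    # copy the existing data onto the faster disk
# then point datadir at the new location in /etc/my.cnf:
#   datadir=/mnt/dbdisk/mysql
/etc/init.d/mysql start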
I am curious whether there is an improvement going from a 4x HDD RAID 10 array to an 8x HDD RAID 10 array.
If so, is it a GREAT improvement?
With 1.5TB drives out now and 500GB and 1TB being dirt cheap, I still see that 80-120GB is standard on dedicated servers. If you want a 500GB drive you will pay up the ***. Why is this? It's actually EASIER to buy a 500GB drive than a 120GB one nowadays.
I'm guessing it's because they still have tons of the lower-capacity drives on site, so it stops them from needing to buy new ones.
I have 2 questions:
1) I have one 80GB HDD and one 250GB HDD (both are SATA).
What is a practical way to partition them? I thought of setting up the OS on the 80GB HDD and /home on the 250GB HDD (see the layout sketch below).
2) Which version of CentOS is currently the most stable for cPanel? I had an experience with CentOS 5 where there were various problems with mail and MySQL, and now I want to install CentOS 4.6.
The server will have CentOS and cPanel installed.
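On the partitioning question, here is a minimal sketch of one such layout; device names, sizes and mount points are assumptions, so adjust for your cPanel setup:
/dev/sda (80GB):  /boot 200MB, swap 2-4GB, / on the rest    # OS, /usr, /var
/dev/sdb (250GB): /home on the whole drive                  # accounts and site data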
All I could find is the type of RAID controller installed: an Adaptec 4800SAS SA-SCSI in RAID 10. There is nothing in /proc about the types of HDDs.
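Behind a hardware RAID controller the OS usually only sees the logical volume, so /proc won't list the member disks; you normally have to ask the controller's own management tool. If Adaptec's arcconf utility happens to be installed (an assumption on my part), something along these lines may list the physical drives:
cat /proc/scsi/scsi        # typically shows only the logical RAID volume
arcconf getconfig 1 pd     # Adaptec CLI: physical devices on controller 1, if arcconf is available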
Of the three RAID card brands, which do you think is best and why?
I'm currently looking for a card that can do both SAS and SATA and has four internal ports.
I have installed 3DM to check the status of a 3ware 8086 card, but when going to [url] it doesn't show anything. It seems it cannot connect to port 1080, even though I have turned off the firewall. I have already checked its config file to make sure the port is 1080.
Does anyone have experience with the 3DM software?
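A hedged first check would be to confirm the 3DM daemon is actually running and listening on that port; the process name and config path below are assumptions and differ between the old 3dmd and the newer 3dm2:
ps aux | grep -i 3dm               # is the daemon running at all?
netstat -tlnp | grep 1080          # is anything listening on port 1080?
grep -i port /etc/3dm2/3dm2.conf   # confirm the configured port (path may differ for 3dmd)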
When loading an ovzkernel, it loses support for the 3ware 8006 controller. As suggested on the OpenVZ forums, I should load the new 3dm2 tools. I tried this, but when I try to install it, it says it can't find the 3dm2 binary.
I downloaded the latest 3dm2 software:
wget [url]
However, after extracting it, when I run the setup (./setupLinux_x64.bin -console) I get "Bundled JRE is not binary compatible with host OS/Arch or it is corrupt. Testing bundled JRE failed."
Can anyone give me the steps for installing 3dm2 on a CentOS/WHM box?
The card is a 3ware 9690SA.
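That JRE message usually points to an installer/OS architecture mismatch rather than a genuinely corrupt download; a hedged first check:
uname -m    # x86_64 means the _x64 installer is right; i686/i386 means you need the 32-bit 3DM2 package instead
If the architectures already match, re-downloading the package is worth a try, since the same error is printed when the bundled JRE really is damaged.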
Not sure if this is too specific for this forum or not, but since I've gotten great advice here in the past I'll give it a shot.
I have a colo'd production server with a 3ware 9500S-12 RAID card and 12 400GB drives attached. The drives form 3 arrays:
1) 2 drive RAID 1 (400GB)
2) 2 drive RAID 1 (400GB)
3) 6 drive RAID 5 (2TB)
plus 2 global hot spares.
For a variety of reasons I need to change this setup so that arrays 1) and 2) remain as-is, and array 3) is removed and those 6 drives are replaced with 6 new 750GB drives in JBOD mode. I've copied all the data from RAID 5 array number 3) onto 3 of the new 750GB drives (the 2TB array wasn't completely full), and I have 3 other blank 750GB drives.
What's the best / safest way to do this? Ideally I'd like to remove the 6 old 400GB drives and retain the ability to plug them all back in and get my old array back (if something goes horribly wrong during the switch).
Do I need to reboot into 3BM (3ware Bios Manager) to do this, or can I do it from the command line?
Is there any problem with having a drive that already contains data written to it by another system, and bringing it up on the 3ware card in JBOD mode with the data intact? (All filesystems are ext3.) I'm not going to have to reformat the drive, am I?
Is there any problem with the new drives being SATAII (Seagate Barracuda ES 750GB) but the old drives (and I think the 3ware card, and certainly my motherboard) being SATAI? I've read that this should "just work" but of course I am nervous! There are no jumpers I can see on the 750GB drives.
Will it be possible to remove the RAID 5 in such a way that I could plug the drives back in and get the array back?
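Not a definitive procedure, but from the command line the rough tw_cli sequence might look like the sketch below. The unit and port numbers are assumptions, some 9500S firmware wants JBOD export enabled rather than "single" units, and the exact syntax should be verified against the CLI guide (and your backups) before touching anything:
tw_cli /c0 show                    # confirm which unit is the RAID 5 (u2 assumed below) and which ports it uses
tw_cli /c0/u2 del                  # delete the RAID 5 unit - as far as the card is concerned that data is gone
# swap the six 400GB drives for the six 750GB drives (power down or hot-swap per the 3ware docs), then:
tw_cli /c0 rescan                  # let the controller detect the new drives
tw_cli /c0 add type=single disk=4  # one single-drive unit per new disk; repeat for each of the six ports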
I have a bunch of 3ware 95** RAID arrays with BBUs. Lately the BBUs are frequently reporting high and too-high temperatures.
The DC reports that the intake temp is 74 degrees and the exhaust is 91 degrees. Since the RAID cards and the BBUs are at the back of the machine, they're getting more hot air than cool air.
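For what it's worth, the BBU readings can also be watched from the CLI, which makes it easier to log how bad the swings really are; a minimal sketch, controller number assumed:
tw_cli /c0/bbu show all    # status, temperature, voltage, capacity, and last test date for the BBU on controller 0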
Probably going to give this a shot in the near future anyway, but just wanted to check whether anyone has tried and had success putting either 3Ware 8006-2LP or 9550SX-4LP cards in Dell PowerEdge 860 systems with a couple of SATA drives instead of using the Dell PERC controllers?
We use 3ware's 8006-2LP SATA RAID controller in each of our servers for RAID 1. Our servers are all Supermicro boxes with hot-swap drive carriers (i.e. the 2 RAIDed drives are in them).
One of the drives appears to be starting to crap itself, as smartd is reporting issues including multi-zone errors (although tw_cli shows the RAID to be OK).
Anyway, I'd like to replace the failing drive before it becomes a real issue, so I've bought a replacement drive (74GB Raptor, just like the original).
Now, I've never had to replace a failing drive in any of our servers before, and I used to think it would be a simple matter of popping out the failing drive's carrier, putting the new drive in the carrier, and sticking it back in the server... and the RAID controller would do the rest.
Yes, a little naive I know, but I've never had to do it before so never paid much attention. Anyway, I've just read and re-read the 3ware docs for my controller and their instructions are VERY VAGUE... however, I do get the feeling that the process is more involved, i.e. I need to tell the controller (via the CLI or 3DM) to first 'remove' the failing drive from the RAID, then add a new drive, and then rebuild.
However, there is one catch: 3dmd/3dm2 has NEVER worked on our (CentOS 4) servers - 3dmd crashes regularly and 3dm2 never worked. So yes, I am stuck with the 3ware CLI... which I don't mind, as long as someone can tell me the sequence of commands I need to issue.
At this point I'm thinking what I need to do via the CLI is:
1) tell the RAID controller to remove the failing drive on port 0
2) eject the drive carrier with the drive in question
3) insert new drive in carrier and re-insert into server
4) using tw_cli tell the controller to add the new drive to the array and to rebuild the array
Am I anywhere close to being correct? I'm sure there are some of you out there who've done this countless times before with the 3ware controllers and hotswap drive carriers ..
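Your four steps look about right in outline. A hedged sketch of the corresponding CLI calls follows; the port and unit numbers come from your description, and the syntax differs between tw_cli releases (older ones use the "maint" form shown here, newer ones the /cx/px style), so check tw_cli's help first:
tw_cli info c0                   # confirm the controller, unit and port numbering before touching anything
tw_cli maint remove c0 p0        # tell the controller to drop the failing drive on port 0
# pull that carrier, fit the new Raptor, slide it back in, then:
tw_cli maint rescan c0           # make the controller detect the new drive
tw_cli maint rebuild c0 u0 p0    # add it back to unit 0 and start the rebuild; watch progress with: tw_cli info c0 u0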
I'm seeing the following from dmesg:
Quote:
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I'm seeing the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test may
disable the write cache on the controller /c0 for up to 24 hours.
Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually done a 3ware battery test on a production system before? Is it OK to do this? I'm looking for someone who has actually performed the test operation, not someone who just assumes it will be OK.
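I can't speak for your workload, but a hedged way to reduce the risk is to check what the test will actually do to the write cache first, and then run it in your quietest window; the unit number is an assumption:
tw_cli /c0/u0 show storsave    # the storsave policy decides whether the write cache is disabled during the test
tw_cli /c0/u0 show             # confirm whether the unit's write cache is currently enabled
tw_cli /c0/bbu test            # then run the capacity test off-peak; as the prompt says, it can take up to 24 hours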
I have a 3ware 9650SE-24M8 RAID controller. It was working fine for a few days, and today, while I was changing the RAID configs and installing different OSes, it just stopped working. Now when I boot my machine up it does not even detect any hard drives or the RAID controller. I looked inside the box and the LED on the RAID controller that is usually solid green is now blinking red. I googled for solutions, but all searches led me to useless information such as blinking red lights on the server case.
After seeing a topic a week or so ago discussing RAID cards, I decided to give a hardware RAID card a go to see if the performance would increase in one of our boxes.
Just for the simplicity of the test, I have put the drives into a RAID 0 formation purely for performance testing, with no redundancy. I chose a 3ware RAID card and went for the 2-port 8006-2LP option rather than the 9600 (they had the 8006-2LP and risers in stock, and what I've always been told is that SATA I vs SATA II is really a selling point rather than any real performance increase, but we'll leave that argument there). Because we run mainly Windows systems, I have put on Windows Server 2003 x64 R2. What I am finding after installing it all is that it seems pretty "slow".
The rest of the hardware is a dual quad-core Xeon (2x E5410) with 8GB RAM on a Tyan motherboard. The hard drives are 160GB Western Digital 7200 RPM, so I can't quite see why it feels like it's not running at peak level.
Does anyone have any applications or software to give this RAID array a proper test? I really don't want to order any more or roll them out onto the network only to find that software RAID would have been the better improvement. I did try a burn-in app which tests everything, but according to the 20 seconds I ran it, on average it only transferred at 2 MB/s... That can't be right.
I think one possibility is that the RAID drivers aren't installed correctly, as it still shows up under "Unknown Devices" in Device Manager, and it won't let me manually install the drivers for the 3ware device because it doesn't like the OS, even though I have the correct ones and Windows installed with them fine (it just took a bit longer than normal).
I'm planning to run RAID 1 on my PowerEdge 860 server running CentOS 4.4 (2 x 320GB SATA II).
After doing much research, I've decided to go with 3ware 9550sx.
I have some questions regarding installing the 3ware on an existing system with an OS already installed - How do I go about doing it?
I've read that CentOS 4.4 already has the drivers for it. Do I just configure the RAID during boot-up?
Is this overkill for my PE's Pentium D 2.8GHz?
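Roughly, yes: you build the array in the card's boot-time BIOS (Alt-3 at POST on most 3ware cards, if memory serves), and on the OS side it's just the 3w-9xxx driver, which CentOS 4.4 ships. A quick sketch to confirm from the running system:
modinfo 3w-9xxx        # is the driver present for the installed kernel?
dmesg | grep -i 3w-9   # was the card detected on the last boot?
One hedged caveat: creating the unit on the card may initialize the member drives, so if the OS already lives on one of them, plan for a backup/reinstall unless you confirm the card can migrate an existing single disk into a mirror.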
I have configured a Xen setup on a dual Xeon system with a 3ware 8506 two-port SATA controller.
Array is configured in raid1 and I am using LVM.
I get really slow access on the virtual machines; when I create a big ext3 filesystem, the system nearly freezes.
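One hedged way to narrow it down is to run the same write test against the array from dom0 and from inside a guest, and to check whether the unit's write cache is enabled at all, since a disabled write cache on these cards makes RAID 1 writes painfully slow; the file path below is arbitrary:
dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct   # run once in dom0, once inside a domU, and compare
tw_cli info c0 u0                                               # does the unit report its write cache as on?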
I want to use a Q9550 with 8GB RAM, installed on an X7SBL-LN2, with four 500GB SATA II drives in RAID 10.
Between the Adaptec 2405SAS and the 3ware 9650SE-4LPML, which one will give better performance?
I am not sure exactly which model is in my server, but I know it is in the 3ware 8xxx line.
What is my best option to be alerted to a drive failure. I am running a Raid 1 setup.
I went to 3ware's site and wasn't sure exactly which software to download/install.
I am on Centos 5.2 32bit and have Cpanel/WHM installed.
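The usual answer is 3DM2, which can send email alerts, but if you'd rather not run it, a small cron job around tw_cli works as a fallback. This is only a sketch; the tw_cli path, controller number, status strings, and email address are assumptions:
#!/bin/sh
# hypothetical /etc/cron.hourly/raidcheck - mail if the 3ware array is not healthy
OUT=$(/usr/sbin/tw_cli info c0)
echo "$OUT" | grep -Eq "DEGRADED|INOPERABLE|REBUILDING" && \
    echo "$OUT" | mail -s "3ware RAID alert on $(hostname)" admin@example.com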
I rented a new server. I checked the WD6400AAKS and it's indeed a SATA drive. However, why does the label say hda and hdc? Isn't it supposed to be sda and sdc?
Is it true that DMA is not needed for my SATA drives?
Is my disk performance too slow? Does the performance suggest it's an IDE disk?
Here is what's in my WHM:
hda: WDC WD6400AAKS-65A7B0, ATA DISK drive
hdc: WDC WD6400AAKS-65A7B0, ATA DISK drive
hda: max request size: 512KiB
hda: 1250263728 sectors (640135 MB) w/16384KiB Cache, CHS=65535/255/63
hda: cache flushes supported
hdc: max request size: 512KiB
hdc: 1250263728 sectors (640135 MB) w/16384KiB Cache, CHS=65535/255/63
hdc: cache flushes supported
root@abc [~]# hdparm /dev/hda
/dev/hda:
multcount = 16 (on)
IO_support = 1 (32-bit)
unmaskirq = 1 (on)
using_dma = 0 (off)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 256 (on)
geometry = 65535/255/63, sectors = 1250263728, start = 0
root@abc [~]# hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 12040 MB in 1.99 seconds = 6036.46 MB/sec
Timing buffered disk reads: 24 MB in 3.20 seconds = 7.49 MB/sec
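For what it's worth: 7.49 MB/sec buffered reads is far below what a WD6400AAKS should manage, and with using_dma = 0 the drive is effectively running in PIO mode, which would explain it. The hda/hdc names also suggest the SATA controller is being handled by the legacy IDE driver rather than libata (which would name the disks sda/sdc). A hedged first step is simply to try switching DMA on and re-testing:
hdparm -d1 /dev/hda     # try to enable DMA; this can fail or be refused under the legacy IDE driver
hdparm -Tt /dev/hda     # re-run the timing test - buffered reads should be well above 7 MB/sec with DMA on
If the setting won't stick, it may be worth asking the provider whether the controller can be switched to AHCI / the libata driver in the BIOS or kernel configuration.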
Maybe someone could explain the advantages/disadvantages of each with regard to a web hosting configuration?
What about SSD vs SCSI vs SATA HDDs?
I heard that SSDs are slow at writing but fast at reading.
Is this true?
The 80GB Intel X25-M SSD is the model I am looking at.
Is this drive recommended for servers? Will it perform better than SCSI or SATA RAID 10?
Tell me how this works, and more about writing and reading speeds (on SSDs), etc.
What is the benefit of SCSI over a SATA hard disk?
I ordered a dedicated SATA Xeon server but got an IDE Xeon. Should I contact my datacenter to change the server, or are IDE and SATA the same thing so it does not make a real difference?
SAS/SATA compatibility
I am looking to buy this barebones system:[url]
On a normal shared hosting server, what kind of performance gains can you see using a SAS drive instead of a SATA II in raid-1?
I wonder which drive gives the best performance? It looks like they all have the same 15,000 RPM. :d
Any experience?
I currently have a Dell Poweredge 2650 from a few years back, it is running...
2x Xeon 2.4ghz 512K
3GB DDR266 RAM
1x73GB SCSI
Back in the day this system cost $2000, now it's not worth close to that.
So my plans were to dump this bad boy as an SQL server, seeing it has the SCSI backplane and 3GB of RAM, and SQL usually doesn't need as much CPU as a web server.
Now my question: would it be better to use this server, or to build a cheap Core 2 Duo with a RAID 0 array of a few SATA drives?
Before you start going off on RAID 0, it doesn't matter to me, because I am using clustering/failover, so no data will be lost and no downtime will be incurred if the array fails.
Basically, what I want to know is whether it's worth keeping this server and building upon it, or whether it would be better to sell it and spend an extra few hundred to build a new system with SATA RAID.
I'm going by price/performance rather than reliability since, as I said, I am using failover.