I have a couple of Dell 1950s. In one of them I have 2x Seagate 15K.5s that I purchased through Dell, and I also have a spare (also from Dell) sitting in my rack in case one goes bad.
I was going to repurpose one of my other 1950s and get two more 15K.5s for it, but I wasn't planning on getting them through Dell (rip off?). That way I could still keep the same spare drive around in case a drive went bad in that system as well.
When I was talking to my Dell rep recently about purchasing another system, their hardware tech said you can't use non-Dell drives alongside Dell drives in the same RAID array because of the different firmware between them.
Anyone know if this is true? Anyone have experience using drives from Dell in conjunction with the same model of drive from a third-party retailer?
Is there any way to use SATA hard drives instead of SCSI hard drives with a Dell PowerEdge 2650? I purchased the Dell on the cheap. It came with one 36GB SCSI drive and has four more hot-swap slots, but no sleds/caddies/trays for additional drives, so I need those as well.
Looking at the market (eBay mainly) for used SCSI hard drives, it's pretty pricey to get drives of a decent size (73GB+, $50+ each). I would like to use SATA drives instead (brand new vs. used SCSI). The problem is that this server does not have an embedded SATA controller, just the SCSI PERC 3/DI controller. So maybe get a 3ware 8000 series SATA controller + SATA drives + trays? Is that possible? There are no Molex connectors inside the server; power to the hard drives comes from the SCSI backplane.
The server will be just for my personal stuff (personal web site, DNS, etc.), nothing mission critical. I'm just trying to save a few bucks buying cheap large-capacity SATA drives vs. used smaller-capacity SCSI drives.
Will a PERC 5 RAID card work in a non-Dell Linux server? These cards can be found for about $100 on eBay and are much cheaper than Adaptec cards with similar features and port counts.
Today we take our study of the RAIDability of contemporary 400GB hard drives to a new level. For this investigation we use two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
I have a Dell PowerVault 725N with a four-HDD RAID 5 setup.
The server has died and the BIOS error message shows that two hard drives have failed. I cannot boot into Windows.
The data is crucial; what are my options for data recovery?
I really hope I can recover the data; I doubt that two HDDs actually failed at the same time without giving any warning. I hope it's the RAID controller.
I would like to hear pointers from the community on how to recover the important data from the RAID.
Are there any companies or software that would help with this, assuming it is an HDD failure and not a controller issue?
I've just bought myself a Linux-based NAS for storage/backups at home and a couple of WD GreenPower (non-RAID edition) HDDs.
For those who don't know what TLER (Time Limited Error Recovery) is: without it enabled, the HDD does its own error recovery, which may take longer than a RAID controller is willing to wait, in which case the drive gets kicked out of the array. With TLER on, the idea is that the drive limits its own recovery time and keeps the controller informed, so the controller can handle the error.
So, my actual question is: does Linux software RAID benefit from TLER being enabled? Or is it best to let the drive do its own thing?
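For what it's worth, on drives that expose this behaviour through the standard SCT Error Recovery Control interface you can query or cap the recovery timeout from Linux with smartctl. Below is a minimal sketch, assuming smartmontools is installed, the drive actually supports SCT ERC (many desktop drives, including some Greens, do not), and /dev/sda is the right device:

#!/usr/bin/env python3
"""Sketch: query, and optionally cap, a drive's SCT Error Recovery Control
timeouts via smartctl. The device name and the 7.0 s value are assumptions."""
import subprocess
import sys

def show_erc(device: str) -> None:
    # Prints the current read/write recovery timeouts (or "not supported").
    result = subprocess.run(["smartctl", "-l", "scterc", device],
                            capture_output=True, text=True)
    print(result.stdout)

def limit_erc(device: str, deciseconds: int = 70) -> None:
    # 70 deciseconds = 7.0 s, the value usually quoted for drives behind RAID.
    # The setting is typically lost on power-cycle, so reapply it at boot.
    subprocess.run(["smartctl", "-l", f"scterc,{deciseconds},{deciseconds}", device],
                   check=True)

if __name__ == "__main__":
    show_erc(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")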
We've got a couple of Dell SC440s that we use for low-end stuff. We need one with RAID 1, so after talking with our Dell rep we ordered a 5iR card and the corresponding SATA cables. The cables are kind of funky in that the drive end of the cable has both the SATA and power connector in one "thing"; you then plug the SATA power into the back of this (it piggybacks on). Once you have done all that, the cover won't fit back on because the power connectors stick out about 1/2" beyond the case.
Anyone out there have a SC440 with RAID so we can compare notes?
- 2x 36GB in a RAID 1 with LD 0 of 36GB (the HDs have since been changed to 1x 72GB & 1x 144GB and it is working fine)
- 3x 72GB in a RAID 5 with LD 1 of 144GB
- 1x 72GB as a hot spare
The 'problem/challenge' is as follows:
The LD 0 RAID 1 is getting too small (2x 36GB seen as 36GB) for my Windows 2008 server C: drive. So I swapped one 36GB disk for a 72GB one and, after a couple of days, the other 36GB for a 144GB disk (because those are the two I had, not 2x 72GB or 2x 144GB). Everything is working fine in this RAID 1. But now I want to enlarge/expand the logical drive (LD no. 0) from 36GB to 72GB.
So far I have only managed to create an extra (third) logical drive (LD no. 2, of 36GB).
Now I do not know how to delete this extra LD 2 on the RAID 1, nor how to expand the RAID 1 LD 0 into an LD of 72GB.
The problem is that I do not know which option to choose in the RAID controller panel (Ctrl+M during the boot progress screen).
Does anyone know which buttons to press, or a Windows-based program to configure the RAID?
If there is a failed drive in a RAID 1 running on a Dell 2850 with FreeBSD 5.4, can I just take out the failed drive and replace it with a new one while the server is running? Will FreeBSD cope and rebuild the drive on the fly?
I have room for 4 more hard drives in my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can also use 4 drives and gives more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might perform worse as software RAID than it does on hardware, relative to RAID 5. Would RAID 5 with 4 drives be better for my case?
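Just as a back-of-the-envelope capacity comparison (the 1.0 TB drive size below is a placeholder, and this says nothing about real-world md throughput), and with the usual mdadm invocations noted in the comments:

#!/usr/bin/env python3
"""Rough usable-capacity comparison for a 4-drive array -- plain arithmetic only."""

def usable_capacity(level: str, n_drives: int, drive_tb: float) -> float:
    if level == "raid10":
        return n_drives / 2 * drive_tb      # mirrored pairs: half the raw space
    if level == "raid5":
        return (n_drives - 1) * drive_tb    # one drive's worth lost to parity
    raise ValueError(f"unknown level: {level}")

for level in ("raid10", "raid5"):
    print(f"{level}: {usable_capacity(level, n_drives=4, drive_tb=1.0):.1f} TB usable")

# The arrays themselves would be created with something along these lines:
#   mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
#   mdadm --create /dev/md0 --level=5  --raid-devices=4 /dev/sd[b-e]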
I know there are a lot of experienced hardware guys on here, so I wanted some input on 1.5TB drives. Are they reliable enough to be used in non-mission critical storage servers? 99% of what we do is OEM (Dell) equipment, so I don't test raw hardware much these days.
I've read a lot of negative things about Seagate lately. Can anyone chime in with specific models they've had positive or negative experiences with, from any vendor? Reading some reviews of the WD 1.5TB Caviar Black drives, there seem to be some weird issues with them going into a recovery cycle.
What is the best way to find out which filesystems and hard drive drivers you can remove? Obviously I need ext2/ext3, but how do you find out which hard drive drivers you actually need?
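One starting point is to ask the running system what it is actually using; a read-only sketch along these lines (it only reads /proc) lists the known filesystems, loaded modules, and mounted filesystem types, and `lspci -k` will additionally show which kernel driver is bound to the disk controller:

#!/usr/bin/env python3
"""Sketch: list what the running kernel is actually using, as a starting point
for deciding which filesystem and disk drivers can be dropped. Read-only."""

# Filesystems the kernel currently knows about ('nodev' marks virtual ones).
with open("/proc/filesystems") as f:
    print("Known filesystems:", [line.split()[-1] for line in f])

# Loaded modules -- anything not listed here (and not built in) is a candidate to drop.
with open("/proc/modules") as f:
    print("Loaded modules:", [line.split()[0] for line in f])

# Filesystem types that are actually mounted right now.
with open("/proc/mounts") as f:
    print("Mounted types:", sorted({line.split()[2] for line in f}))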
I was just wondering what the real-life experience or difference is between using the Western Digital Caviar series (Green, Blue or Black) in a datacenter environment vs. the RE series, which is supposed to be for enterprise use.
On the WDC website the Caviar series is targeted at desktops, not servers, but a lot of servers and providers use them. If you have servers, you're supposed to use the RE series. I exclude Raptors as I only want to compare medium-performance disks here.
I'm building a couple of VPS host servers for a client.
Each server has to host 20 VPSes, and each server will have 4 cores and 32GB of RAM. So CPU and RAM should be just fine; my question now is the hard drives. The company owns the machines, but not the drives yet.
I searched a lot on your forums but found nothing relating to VPS. I'm basically a DBA IRL, so I have experience with hard drives when it comes to databases, but it's completely different for VPS.
According to my boss, each VPS will run a LAMP stack (having a separate DB cluster is out of the question for some reason).
First, RAID 1 is indeed a must. There is room for 2x 3.5" drives. I might be able to change the backplane for 4x 2.5", but I'm not sure...
I've come up with several options:
- 2x SATA 7.2k => about $140
- 2x SATA 10k (VelociRaptor) => about $500
- 2x SAS 10k with PCIe controller => about $850
- 2x SAS 15k with PCIe controller => about $1000
They need at least 300GB of storage.
But my problem is that the servers do not have onboard SAS, so I would need a controller, and in my case the cheapest solution is best.
I'm also not sure that SATA 7.2k drives will handle the load of 20 full VPSes.
Is it worth going with SAS anyway, or should SATA be just fine? And with SATA, is it better to use plain old 7.2k drives or 10k drives?
That's a lot of text for not much: What is best for VPS: SATA 7.2k, SATA 10k or SAS 10k?
Do the old RLX Blade servers use 'mini' hard drives? I can't find an answer anywhere. I seem to recall that they use smaller 2.5" drives. Is this the case?
And if so, are there "good" drives in that size worthy of being in a server, or are they essentially just laptop drives?
My server has been reformatted; it has two drives, and I have my backup on the second drive. What is the command to list the drives, and how do I mount the second one?
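In case it helps frame the answers, the usual sequence is something like the sketch below; the device name (/dev/sdb1) and mount point are assumptions, so check the fdisk output first:

#!/usr/bin/env python3
"""Sketch of the usual steps: list all disks/partitions, then mount the second
drive's first partition. /dev/sdb1 and /mnt/backup are assumptions -- adjust
to whatever 'fdisk -l' actually shows. Needs root."""
import subprocess

# Show every disk and partition the kernel can see.
subprocess.run(["fdisk", "-l"], check=True)

# Create a mount point and mount the backup partition there.
subprocess.run(["mkdir", "-p", "/mnt/backup"], check=True)
subprocess.run(["mount", "/dev/sdb1", "/mnt/backup"], check=True)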
Does anyone have experience with SSDs in a server environment? I've now seen a few offers with SSDs (Intel) and I'm wondering if the speed difference is noticeable.
Are they worth it? From what I have been reading, they are superior in reliability but have issues with limited write cycles.
Anybody know the best place to get a really cheap server with at least 250GB drives? I'm assuming most providers offer HDDs of that size on relatively cheap systems now if we're just looking at SATA.
The machine doesn't need to be anything special, I don't need a ton of bandwidth.
Basically this will be an extra backup machine to pull backups from servers instead of my usual "pushing" of backup data.
So to clarify, I'm looking for a simple machine pretty much anywhere with some drive space! A VPS just won't cut it because the drive space they provide is too expensive (yes, I understand they have nice drive setups though).
RAID etc. is not needed; I'm not running anything mission critical, but I would like to have more locations in place to hold backups for me. WHT worries me a lot.
I can't find offshore providers with 10K rpm (or faster) HDs, and they have to have good support. Also, I need at least 2TB over 100Mbit.
The reason I need it to be offshore is that my client wants to run a subtitles site, and I'm not exactly sure whether that's legal in America or the UK. The Netherlands or Germany is preferred. I looked at SwiftNOC but I'm not sure if they have 10K rpm hard drives.
I'm sure we have all noticed the Liquid Web ads for solid state drives by now. These offerings would make for incredible database servers, among other things. My question is:
How many of you are going to run out and get a solution like this, from Liquid Web or anyone else? Why or why not?
I am thinking of purchasing Samsung Spinpoint F1 drives, either the 750GB or 1000GB model. The purpose would be to put them in a large RAID array (e.g. 14 drives in RAID 10/RAID 50). The price and performance look good. However, I have read many mixed reviews of the drives. Does anybody have any experience with them? Again, this will not be used in a desktop environment, but a server environment. The OS would be Win2K3 or CentOS.