I have an Adaptec 2105S card with 3 x SCSI HDDs attached in a RAID 5 setup. When I boot the CentOS Server 4.4 CD, it says I don't have any hard drives attached. I am then prompted to select a device driver, but the card does not show up in the list.
The options in the list are:
- Adaptec AACRAID (aacraid)
- Adaptec AHA-152x (aha152x)
- Adaptec AHA-2740, 28xx, 29xx, 39xx (aic7xxx)
- Adaptec Aic79xx SCSI Host Bus Adapter driver (aic79xx)
- Adaptec SAS/SATA Host Bus Adapter driver (adp94xx)
It says if I have a driver disk, hit F2. The problem is that I don't have a driver disk.
The manual tells me to visit [url] for drivers, but that page is now offline. When I go to the download section for the card at [url], the only *nix option is SuSE.
I've just bought a PowerEdge 2600 and decided to set up the three disks that came with it as RAID 5 (within the PERC) and installed Win2003 Std. Works great.
But while browsing one of my MCSA books, I read that using a RAID 5 configuration on the system/boot volume is next to useless, as it offers no redundancy should a volume fail (at least with respect to easily booting the system back up after a failure).
I'm going to use RAID 1 (again with the PERC) across two other drives for my Exchange data, but I'm worried about my system volume. Is my MS book talking about the OS not being able to rebuild data should a volume fail, or is this a general rule for RAID 5 setups?
I haven't yet broken into the realms of dedicateds, although I have a decent VPS and am anticipating the need to get a dedi in the future.
Hence I'm briefly wondering why exactly RAID (insert some random number?) is recommended. I know it has something to do with protecting against hard drive failure, but would an efficient backup system be a decent alternative with regard to cost?
This problem has happened to me twice on the very same server. I can't seem to figure out what's wrong with it.
My RAID controller is an Areca 1160 + 1GB RAM + 16x WD4000YS. Initially I had the problem of disks randomly dropping from the array. I suppose that problem has been fixed, since I upgraded the disks' firmware according to WDC's suggestion.
But there's another problem now. From my observation, the controller randomly drops off the system. When this happens, I cannot read/write anything from the disks at all. I can SSH in and run the Areca CLI tool to see what's going on, only to get a message saying that it cannot find any Areca controller.
My only option is to restart the server and fsck all 16 disks (not fun). I checked the log on the RAID controller itself, and there was no sign of a problem at all. No alerts, no disks dropped from the array, no events logged at all.
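Next time it happens, I plan to capture a bit more information from the OS side before rebooting. These are just generic Linux commands, nothing Areca-specific, so treat this as a sketch of what I intend to check:

    # is the controller still visible on the PCI bus?
    lspci | grep -i areca

    # any SCSI/driver errors logged around the time of the drop?
    dmesg | tail -50

    # does the kernel still list the logical drives?
    cat /proc/scsi/scsi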
I'm using CentOS 4.3. Firmware on the RAID controller is 1.41; the latest is 1.42. (Stupid me for not upgrading the firmware when the server crashed.) I plan to take the server down for a RAID firmware upgrade soon. I just hope it's a problem with the firmware, not the controller or backplane itself.
I am having issues setting up CloudFlare on our PPA system. I have it set up under Hosted Applications. I have set up a Resource for CloudFlare and added it to the service templates. When I go into a client's Control Panel to install the app, I get a couple of different errors. When I try to use a previously created administrator for the subscription, which is listed under "Grant administrative access to existing user," I get the following error:
"Error: Installation of CloudFlare For PA failed. invalid values passed for settings of application resource; details for setting with id 'admin_username': The "CloudFlare username" setting value 'xxxxx' is already in use. Please provide unique value." Is this because I am logged in as the user?
When I create a login under "Use administrative credentials not connected to any particular user," I get the following error: "Error: Installation of CloudFlare For PA failed. Invalid structured output returned by script: Failed to parse structured output. Structured output: '<output xmlns= URL.... WARNING: A connectivity error occurred while contacting the service. WARNING: A connectivity error occurred while contacting the service. WARNING: A connectivity error occurred while contacting the service. WARNING: A connectivity error occurred while contacting the service.'. "
I do get an email from CloudFlare with the setup instructions for the new account. Under the PPA control panel, I am using the IP address of the Management Node in the "POA API IP" field. Is this correct, or should I have entered localhost? I am running 11.5 Update 8 and CloudFlare For PA (3.0.2-18). The setup instructions for Plesk or POA do not quite cover everything for a PPA installation.
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
This card features RAID 5 and 6; would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does this achieve? Surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just keep the controller alive long enough to hold some hard drive information in its memory if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the HW RAID controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up a software RAID. The problem is, I haven't done that before and am somewhat... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any other drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID 1 (or no RAID), plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so I'll have a functional system that then needs to be migrated to RAID 10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from software RAID 5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url]), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID 1 partition spread across all of the drives in the otherwise RAID 10 array?
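Here's roughly what I've pieced together from those articles, as a sketch only (untested, and all the device names and sizes below are made up by me): give every disk a small first partition for /boot and a big second partition for data, then something like:

    # /boot as a small RAID 1 mirror across the first partitions
    # (the old 0.90 metadata that mdadm defaults to here is what GRUB legacy can boot from)
    mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

    # everything else as RAID 10 across the second partitions, with the 9th disk as hot spare
    mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[a-h]2 /dev/sdi2

    mkfs.ext3 /dev/md0
    mkfs.ext3 /dev/md1

and then install GRUB to the MBR of every disk so the box can still boot if the first drive dies. Does that look about right?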
What about swap? Should I create a 4-8GB RAID 1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID 10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID 10 array a bad idea, performance-wise?
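For reference, both variants are only a couple of commands (again just a sketch, partition numbers made up):

    # option A: swap on its own small RAID 1, so losing a disk can't take out swap
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mkswap /dev/md2
    swapon /dev/md2

    # option B: a swap file on the RAID 10 filesystem
    dd if=/dev/zero of=/swapfile bs=1M count=4096
    mkswap /swapfile
    swapon /swapfile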
Is it possible to grow a RAID 10 array in a way similar to growing a RAID 5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't actually even mention RAID 10, even though it does support it natively (without having to create RAID 0 on top of RAID 1 pairs) if the support is in the kernel, from what I know.
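Growing a RAID 5 is, as far as I understand it, along these lines (whether the reshape works depends on having a new enough kernel and mdadm, which I'm not sure the stock CentOS 4 versions are):

    # add the new disk and reshape the RAID 5 onto it
    mdadm --add /dev/md1 /dev/sdj1
    mdadm --grow /dev/md1 --raid-devices=9

    # once the reshape finishes, grow the filesystem
    resize2fs /dev/md1

From what I've read, md can't reshape a RAID 10 at all with the versions from that era, so growing one would mean backing up, recreating the array with more disks, and restoring. Happy to be corrected on that.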
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it might be a better option to just have a backup drive mounted on my system, and in the event of a system failure simply pop in a new hard drive, reload the OS, and then restore all my backups.
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, even if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
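From what I've read, the usual remote approach is to build a degraded RAID 1 on the second disk, copy the running system onto it, boot from the degraded array, and then add the original disk back in. A very rough sketch of the idea (disk names made up, and the fstab/GRUB fix-ups are glossed over; I wouldn't try this without KVM or a rescue system as a fallback):

    # copy the partition table from the live disk (sda) to the new disk (sdb),
    # then build a one-disk (degraded) mirror on the new disk
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

    # put a filesystem on it and copy the running system over
    mkfs.ext3 /dev/md0
    mkdir /mnt/md0 && mount /dev/md0 /mnt/md0
    rsync -aHx / /mnt/md0/

    # point fstab and GRUB at /dev/md0, reboot onto the array,
    # then pull the original disk into the mirror
    mdadm --add /dev/md0 /dev/sda1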
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump up the RAM to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas on the server in general?
I have room for 4 more hard drives in my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can support 4 drives and give more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might actually perform worse as software RAID than on hardware, compared to RAID 5. Would RAID 5 with 4 drives be better in my case?
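For what it's worth, both are a one-liner with md (drive names are just examples): with four equal drives, RAID 10 gives you the capacity of 2 of them, while RAID 5 gives you the capacity of 3 at the cost of parity calculation on every write.

    # RAID 10 across 4 drives: capacity of 2 drives, survives 1 failure (2 if they're in different mirrors)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1

    # RAID 5 across 4 drives: capacity of 3 drives, survives 1 failure, parity overhead on writes
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1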
We are looking to build our first server and colocate it. It will be a higher investment than just renting a server, but it will be worth it in the long term, and we have already decided we are going to support the hosting business for a minimum of 3 years - so we might as well invest in a server from the outset to benefit from lower data center charges and higher redundancy and performance.
We are currently looking at Supermicro for servers, as they offer 1U barebones systems with dual hot-swappable PSUs and up to 4 hot-swappable drives. This would be ideal for redundancy, and also for taking advantage of the speed and redundancy that a RAID 10 array would give you. These two factors combined are very appealing, as they would reduce the possibility of downtime and data loss. Obviously we will be backing up daily, but it's good for peace of mind to know that you could potentially blow a PSU and 2 hard drives, and your server would still be up long enough for a data centre technician to replace the parts.
Now then, my business partner and I are currently deciding what the best all round hard drive configuration would be. He has decided that we should opt for SAS instead of SATA to have lower latency seek times, which would give us better performance. I agree, though this does increase costs considerably.
He is then arguing that we use RAID 5 on cost grounds. He says we should only use 3 of the slots to begin with, save money on one drive by not having a spare, and hope we don't have a drive failure - which, sod's law, will happen. I'm not happy about us cutting corners to save money, because if we gamble and lose, that's a hell of a mess we'd have ourselves in, and it will cost us a load more time, reputation, and data center charges to get ourselves out of it.
I say we might as well go for RAID 10 for the extra performance and redundancy: you can potentially lose 2 drives, so long as they aren't from the same mirrored pair. With RAID 5 you can only lose one drive, it takes longer to rebuild onto a spare, and performance takes a hit during the rebuild. Also, RAID 10 is much faster than RAID 5, and only at the expense of the cost of one more drive.
Now the question we should be asking is... would a SATA2 RAID 10 array provide better performance than a SAS RAID 5 array?
So I think the choice we have to make is either go for RAID 5 and run with a hot spare, and stock a cold spare, or go with RAID 10 and stock 2 cold spares.
We are considering going with Seagate drives because they are high performance and have 5-year warranties. I have had to RMA two Western Digital drives in the past 12 months, a Raptor and a MyBook, and both failures involved data loss.
The server is going to be a Linux web, email, DNS, and MySQL box. It will likely feature a single dual/quad core processor and 4-8GB of unbuffered DDR2 RAM.
A question, though, on RAID choices... I'm considering getting 3 x 250GB SATA drives. Would it be better to make two of them a RAID-1 mirrored pair for my OS and home directories, and use the 3rd drive separately for backups, swap, and perhaps some logs... OR should I put all three drives into a RAID-5 set and treat it as a single logical drive?
My math says usable space would actually be identical... with roughly 465GB usable in either setup. RAID-1 would be faster for I/O with no parity overhead, but the third drive would not be redundant. On the other hand, RAID-5 would be fully redundant but has parity overhead for writes.