This problem has happened to me twice on the very same server. I can't seem to figure out what's wrong with it.
My RAID controller is an Areca 1160 + 1GB RAM + 16x WD4000YS. Initially I had a problem with disks randomly dropping from the array. I believe that problem has been fixed, since I upgraded the disks' firmware as WDC suggested.
But there's another problem now. From my observation, the controller randomly drops off the system. When this happens, I cannot read/write anything from the disks at all. I can SSH in and run the Areca CLI tool to see what's going on, only to get a message saying it cannot find any Areca controller.
My only option is to restart the server and fsck all 16 disks (not fun). I checked the log on the RAID controller itself, and there was no sign of a problem at all: no alert, no disk dropped from the array, no events logged.
I'm using CentOS 4.3. The firmware on the RAID controller is 1.41; the latest is 1.42. (Stupid me for not upgrading the firmware when the server crashed.) I plan to take the server down for a RAID firmware upgrade soon. I just hope it's a problem with the firmware, not the controller or backplane itself.
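When the controller drops out like this, it can be worth checking whether the card is even still visible on the PCI bus before rebooting. A minimal sketch, assuming the card identifies itself with "Areca" in `lspci` output (the device string below is illustrative, not captured from this server):

```shell
# Check a (saved or live) lspci listing for the Areca controller.
check_controller() {
    grep -qi 'areca' && echo present || echo missing
}

# A live check would be: lspci | check_controller
printf '02:00.0 RAID bus controller: Areca Technology Corp. ARC-1160\n' \
    | check_controller    # prints: present
```

If the card is missing from the bus entirely, that points more toward the controller or the slot/backplane than toward the disks.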
We had one of the RAID controllers fail on our IBM RS/6000 server. There are two RAID controllers in this server: one holds the OS (AIX) and the other holds our database, and that is the one that failed.
Anyway, I've always thought that once a RAID controller fails and a replacement controller is put in, it will reformat all the hard drives that were connected to the failed controller, which means we would have to restore the data from backup once the new controller is in place. However, the IBM technician who was dispatched was able to set up the new controller and connect all the drives to it without reformatting them. I think he copied the RAID controller's configuration using SMIT. That was amazing; it saved us a lot of time.
My question is: is this something unique to IBM hardware/AIX, or do other hardware and OSes (Linux, Windows, etc.) have a similar capability?
3w-9xxx: scsi0: AEN: INFO (0x04:0x0053): Battery capacity test is overdue:.
When I'm in the CLI console (tw_cli) and try to test the battery, I see the following:
Quote:
//vpsXX1> /c0/bbu test
Depending on the Storsave setting, performing the battery capacity test may disable the write cache on the controller /c0 for up to 24 hours.
Do you want to continue ? Y|N [N]:
This is a live production server with client VPSs on it. Has anyone here actually done a 3ware battery test on a production system? Is it OK to do? I'm looking for someone who has actually performed the test operation, not someone who just assumes it will be fine.
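For reference, `/c0/bbu show` reports the current battery state before you commit to the capacity test. A sketch of pulling the status field out of that output; the sample line is illustrative and the exact field layout may differ between tw_cli versions:

```shell
# Extract the battery status from (illustrative) "tw_cli /c0/bbu show" output.
bbu_status() {
    awk '/BatteryStatus/ {print $3}'
}

# Real usage would be: tw_cli /c0/bbu show | bbu_status
printf 'BatteryStatus = OK\n' | bbu_status    # prints: OK
```

Checking the status first at least tells you whether the overdue capacity test is the only complaint, or the battery is already flagged as bad.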
I have a 3ware 9650SE-24M8 RAID controller. It was working fine for a few days, but today, while I was changing the RAID configs and installing different OSes, it just stopped working. Now when I boot my machine it does not even detect any hard drives or the RAID controller. I looked inside the box, and the LED on the RAID controller that is usually solid green is now blinking red. I googled for solutions, but all searches lead me to useless information such as blinking red lights on the server case.
I have an Adaptec 2105S card with 3 x SCSI HDDs attached, in a RAID 5 setup. When I boot from the CentOS Server 4.4 CD, it says I don't have any hard drives attached. I am then prompted to select a device driver, but the card does not show up in the list.
The options in the list are:
- Adaptec AACRAID (aacraid)
- Adaptec AHA-152x (aha152x)
- Adaptec AHA-2740, 28xx, 29xx, 39xx (aic7xxx)
- Adaptec Aic79xx SCSI Host Bus Adapter driver (aic79xx)
- Adaptec SAS/SATA Host Bus Adapter driver (adp94xx)
It says if I have a driver disk, hit F2. The problem is that I don't have a driver disk.
The manual tells me to visit [url] for drivers, but the site is offline. When I go to the download section for the card, [url] the only *nix option is SuSE.
I've just bought a PowerEdge 2600, and I decided to set up the three disks that came with it as RAID 5 (within PERC) and installed Win2003 Std. Works great.
But while browsing one of my MCSA books, I read that using a RAID 5 configuration on the system/boot volume is next to useless, as it offers no redundancy should a volume fail (with respect to easily booting back up after a failure).
I'm going to use RAID 1 (again with PERC) across two other drives for my Exchange data, but I'm worried about my system volume. Is my MS book talking about the OS not being able to rebuild data should a volume fail, or is this a general rule for RAID 5 setups?
I haven't yet broken into the realms of dedicateds, although I have a decent VPS and am anticipating the need to get a dedi in the future.
Hence I'm briefly wondering why exactly RAID (insert some random number?) is recommended. I know it has something to do with protecting against hard drive failure, but would an efficient backup system be a decent alternative with regard to cost?
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
I have been hosting boxcheats.com with HostGator for a few months, and two days ago I ordered a dedicated IP address to see if it would help with SEO. Traffic and AdSense revenue dropped significantly immediately after the dedicated IP was assigned to my account.
Is this normal when switching to a dedicated IP address and what is the cause for the instant drop in traffic?
I have most of my domains with GoDaddy and use their included hosting ("Economy Hosting") for a couple of them. Previously, the included hosting supported apps ...
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only use RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does this achieve? Because surely if the power dies, the hard drives and motherboard can't run off this little battery. Or does it just help the controller keep the hard drive information in its memory alive long enough if the power goes out during a rebuild?
I am in a somewhat complicated situation. I wanted to order a custom server with a hardware 3ware RAID controller, but after over a month of waiting I was told that the controller, as well as any other 3ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server; they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem.
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I'd prefer 8 HDDs (or actually 9) over 6, but I am not sure their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which has to be on RAID 1 or no RAID, I believe, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me KVM-over-IP access and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID 10.
How do I do that? The big problem I see is that LILO and GRUB can't boot from a software RAID 5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID 1 partition spread over all of the drives in the otherwise RAID 10 array?
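The layout described above (a small RAID 1 /boot mirrored across the drives, everything else RAID 10) can be sketched with mdadm. This is a dry run: `run` echoes each command instead of executing it, and the device/partition names are assumptions for a 4-drive example:

```shell
# Dry-run sketch: RAID 1 /boot plus RAID 10 for the rest, 4 drives assumed.
# Each drive is pre-partitioned: sdX1 is a small ~200MB /boot slice,
# sdX2 is the remainder of the disk.
run() { echo "$@"; }   # drop the echo to actually execute (as root)

# /dev/md0: RAID 1 /boot mirrored across all four drives
run mdadm --create /dev/md0 --level=1 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# /dev/md1: RAID 10 for everything else
run mdadm --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

The reason a RAID 1 /boot works with a bootloader of that era is that each mirror member is a plain, readable copy of the filesystem; installing the bootloader to the MBR of every drive means the box can still boot if the first disk dies.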
What about swap? Should I create a 4-8GB RAID 1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID 10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID 10 array a bad idea, performance-wise?
Is it possible to grow a RAID 10 array in a way similar to growing a RAID 5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't even mention RAID 10, even though mdadm does support it, without having to create RAID 0 on top of RAID 1 pairs, if the support is in the kernel, from what I know.
One of my servers has 2x 250GB hard drives in hardware RAID 0 using a 3ware controller, but I've now gotten the following error message three times, prior to three server crashes. My server supplier believes one of the drives is failing (I have backups on a non-RAID drive, so that's fine), but I suspect the controller. What does everyone else think?
Code:
Mar 28 22:41:46 server kernel: sd 2:0:0:0: WARNING: Command (0x2a) timed out, resetting card.
Mar 28 22:42:39 server kernel: 3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x1b, unit #0.
Mar 28 22:43:08 server kernel: 3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x1b, unit #0.
Mar 28 22:43:12 server kernel: 3w-xxxx: scsi2: AEN: WARNING: ATA port timeout: Port #1.
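To help split "drive" from "controller" evidence in logs like the excerpt above: repeated `Command failed ... unit #0` lines point at a single unit, while card-wide resets implicate the controller. A quick tally, using the lines quoted above as sample input (real usage would grep /var/log/messages):

```shell
# Tally 3ware command failures from a log excerpt (sample lines from above).
log='Mar 28 22:42:39 server kernel: 3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x1b, unit #0.
Mar 28 22:43:08 server kernel: 3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x1b, unit #0.
Mar 28 22:43:12 server kernel: 3w-xxxx: scsi2: AEN: WARNING: ATA port timeout: Port #1.'

# Real usage: grep '3w-xxxx' /var/log/messages | grep -c 'Command failed'
printf '%s\n' "$log" | grep -c 'Command failed'    # prints: 2
```

If every failure names the same unit and the same port, that leans toward one drive (or its cable/slot) rather than the card itself.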
I installed HC7, software that manages hosting, to serve my company's customers. I have a problem with the mail server, MDaemon: it cannot connect to HC7, while the database server, DNS server and web server all connect fine.
Everything is OK except MDaemon. Specifically, I cannot create/add a mail domain in HC7 for a website that has been created. However, when I opened the MDaemon interface to check, I noticed a mail domain with the same name as the website I had created in HC7. I went back to HC7 and added the mail domain for that website again; the result is the error: Unable to create user.
The MDaemon tray icon is blue; is that the correct configuration?
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted to my system, and in the event of a failure just pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
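One thing worth checking with arithmetic: once one of the four trays is given to a hot spare, RAID 5 loses its usual capacity edge. A sketch assuming 250GB drives (the drive size is an assumption; the formulas, usable = (n-1) x size for RAID 5 and (n/2) x size for RAID 10, are standard):

```shell
# Usable capacity on a 4-tray chassis, 250GB drives assumed.
size=250
raid5=$(( (3 - 1) * size ))    # 3-drive RAID 5 + 1 hot spare
raid10=$(( (4 / 2) * size ))   # 4-drive RAID 10
echo "RAID 5 (+spare): ${raid5}GB   RAID 10: ${raid10}GB"
# prints: RAID 5 (+spare): 500GB   RAID 10: 500GB
```

So on four trays with a dedicated spare, the choice comes down to performance and rebuild behaviour rather than capacity.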
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, even if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
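The usual approach (assuming a second, empty disk is available) is to build a degraded RAID 1 on the new disk, copy the live system over, switch the bootloader, then add the original disk into the mirror. A dry-run sketch: `run` echoes rather than executes, and the device names /dev/sda (live OS disk) and /dev/sdb (new disk) are assumptions:

```shell
# Dry-run sketch of migrating a live OS disk into Linux software RAID 1.
run() { echo "$@"; }   # drop the echo to actually execute (as root)

# 1. Degraded RAID 1 holding only the new disk ("missing" reserves a slot)
run mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# 2. Filesystem plus a copy of the running system
run mkfs.ext3 /dev/md0
run mount /dev/md0 /mnt/md0
run rsync -aHx / /mnt/md0/
# 3. Point fstab/GRUB at /dev/md0 and reboot into it, then complete the
#    mirror with the original disk (this triggers a resync)
run mdadm --add /dev/md0 /dev/sda1
```

Doing this remotely is possible but risky: a mistake in the bootloader step leaves the box unbootable, so KVM-over-IP or console access is strongly advisable.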
I am looking for a cheap SATA controller that can do JBOD, with a PCI Express interface (preferably low profile, for a 2U server) and proper driver support for Linux and Solaris. The cheapest one I found is from Supermicro, but I don't think they can be used in non-Supermicro servers. Any suggestions?