CLI For Rebuilding Raid Mirror In Adaptec 2410
May 17, 2007
Does anyone know the CLI command for rebuilding a mirrored RAID on an Adaptec 2410?
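For what it's worth, on Adaptec cards that are managed with the arcconf utility a rebuild normally starts by itself once the failed drive is replaced or a hot spare is assigned. The sketch below assumes controller 1 and is only a starting point; the 2410SA may instead ship with the older aaccli/afacli tool.

arcconf getconfig 1 ld            # show logical devices and their status (Optimal/Degraded/Rebuilding)
arcconf getconfig 1 pd            # show physical drives
arcconf rescan 1                  # detect a newly inserted replacement drive
arcconf setstate 1 device 0 1 hsp # mark channel 0, device 1 as hot spare; the mirror then rebuilds automatically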
I want to use a Q9550 with 8GB of RAM,
install it on an X7SBL-LN2,
and use four 500GB SATA II drives in RAID 10.
Between the Adaptec 2405 SAS and the 3ware 9650SE-4LPML,
which one will get better performance?
I was wondering if anyone knows of a provider that offers mirrored RAID VPS hosting.
So if one drive goes down, the site will immediately switch over to the next one in the cluster.
Quote:
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
[url]
If I rebuild Apache through EasyApache, does httpd.conf reset to default?
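As far as I know, EasyApache regenerates httpd.conf from cPanel's templates, so manual edits in the main file can be lost. A cautious approach is to back the file up first and keep custom directives in an include file; the paths below are the standard cPanel locations.

cp -a /usr/local/apache/conf/httpd.conf /usr/local/apache/conf/httpd.conf.pre-easyapache
# custom directives are safer in an include file that survives rebuilds, e.g.:
#   /usr/local/apache/conf/includes/pre_main_global.conf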
All I have are the .frm, .MYD, and .MYI files; how do I restore/rebuild the database?
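Those are MyISAM table files, so one common approach is to copy them straight into the data directory; a sketch, assuming the datadir is /var/lib/mysql and the database is called mydb (both placeholders):

/etc/init.d/mysql stop                      # or "service mysqld stop", depending on the distro
mkdir -p /var/lib/mysql/mydb
cp *.frm *.MYD *.MYI /var/lib/mysql/mydb/   # copy the table files into the database's directory
chown -R mysql:mysql /var/lib/mysql/mydb
myisamchk -r /var/lib/mysql/mydb/*.MYI      # repair/rebuild indexes while MySQL is stopped
/etc/init.d/mysql start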
Has anyone used this card?
If so, is it a decent card? Perform OK?
I know absolutely nothing about RAID cards... trying to learn some here. I know 3ware cards have a command line; does this card? What about other Adaptecs?
I've got a bunch of machines running Adaptec 2015S cards in RAID-1. I cannot seem to get raidutil to work; I get the same error on every command I run, for example:
raidutil -L all
Engine connect failed: Open
They all run CentOS with kernels such as
2.6.15.1 #1 SMP PREEMPT (They are not public facing so don't bother discussing how old the kernel is)
x86_64 x86_64 x86_64 GNU/Linux
So does anyone have any suggestions on this? I've tried everything from what I can find and continue to receive this error.
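I'm not certain this is your problem, but "Engine connect failed" from raidutil is usually about the management interface rather than the array itself. A quick check worth doing, assuming the 2015S uses the dpt_i2o driver (which I believe it does):

lsmod | grep -e dpt_i2o -e i2o   # is the DPT/I2O driver loaded?
ls -l /dev/dpti*                 # the management node raidutil talks through
modprobe dpt_i2o                 # try loading the driver if it is missing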
Does anyone know how to fix this error? I have tried upgrading to the latest firmware, but the error won't go away.
hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3556 MB in 2.00 seconds = 1778.27 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
Timing buffered disk reads: 206 MB in 3.00 seconds = 68.59 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
[root@cpe-76-173-239-109 ~]# hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 3520 MB in 2.00 seconds = 1759.39 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
Timing buffered disk reads: 220 MB in 3.01 seconds = 73.00 MB/sec
HDIO_DRIVE_CMD(null) (wait for flush complete) failed: Inappropriate ioctl for device
I created this topic to share some security tips about the support center at any web hosting company, whether for VPS or dedicated servers.
Some of my web servers were hacked via the support center.
The attacker had an exploit script that let him log in as users who work in the support center,
and then go on to rebuild the OS on my servers (VPS or dedicated),
or manage my DNS templates.
Those are the two ways the attacker was able to compromise the servers.
So this is my message to all security researchers and people who work at or own web hosting companies.
Today I noticed that my server became unavailable.
I did a remote reboot, but the server did not come back. I asked my host to connect a remote console to the server. After that I got a message that fsck had failed to complete and that I needed to run it manually.
I entered my root password and then noticed that the RAID-1 array is rebuilding.
Do I have to wait until the RAID-1 (hardware) array completes its rebuild and then run
fsck -r /dev/sda1
?
or should I stop the rebuild process and run fsck now?
Of the three raid card brands, which do you think is best and why?
I'm currently looking for a card that can do both SAS and SATA, and have four internal ports.
I am having quite a challenge getting OpenVZ to work on CentOS 5 with an Adaptec RAID card.
The driver likes the plain-jane kernel... but then:
Configuration [OpenVZ (2.6.18-8.1.14.el5.028stab045.1)]
***NOTICE: /boot/vmlinuz-2.6.18-8.1.14.el5.028stab045.1 is not a kernel. Skipping.
There is also another issue: not being able to disable SELinux.
I have tried the normal routes and even attempted disabling it in rc.sysinit... still, this "security framework" manages to load and cause problems.
OpenVZ and SELinux don't get along... even a little bit.
So those are the two, probably separate, issues that prevent the poor server from booting.
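On the SELinux side, rc.sysinit is not where it gets decided; the two places that normally stick are /etc/selinux/config and the kernel command line. A minimal sketch (the kernel version is copied from your notice, otherwise adjust to taste):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # takes effect on the next boot
# or force it off from the boot loader: append selinux=0 to the kernel line in /boot/grub/grub.conf
#   kernel /vmlinuz-2.6.18-8.1.14.el5.028stab045.1 ro root=/dev/... selinux=0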
I have got a server with the following specs:
Dual Xeon 2.4 GHz
6 GB DDR ECC REG
1x 147 GB 15K HD
On the board there is a SCSI adapter, an Adaptec 7899.
This configuration is working perfectly under Windows 2003. However, as per customer request, I have to install CentOS, RedHat or Fedora. Even Debian is OK.
However, during the install the OS finds NO hard drives and the installation aborts.
I googled for some time and it looks like there are a million people looking for a way to install Linux on a machine with an AIC-7899.
The installer loads the aic7xxx driver but doesn't find the device anyway.
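I don't have a definite fix, but before fighting the installer it is worth confirming whether the controller and its disks are visible at all. From the installer's shell (Alt+F2 on the Red Hat/CentOS installers), something like this gives a starting point:

lspci | grep -i adaptec          # is the AIC-7899 on the PCI bus?
modprobe aic7xxx                 # make sure the driver is actually loaded
dmesg | grep -i -e aic7 -e scsi  # did the driver attach and find any targets?
cat /proc/scsi/scsi              # disks the SCSI layer currently knows about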
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is would this RAID controller be as fast or reliable compared to a dedicated PCI-E card? If it can only use RAID 5 in windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6, would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also it seems to have a battery module available for it, what does this achieve? Cos surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough with some hard drive information in its memory if the power goes out during a rebuild?
I could try the software RAID 5 on Hetzner's EQ9 server.
Does anyone here have experience with how fast hardware RAID 5 is compared to software RAID 5?
The i7-975 should have enough power to compute the redundancy on the fly, so there should be minimal impact on performance, but I have no idea.
I want to run the server under Ubuntu 8.04 LTS x64.
With virtualization like VMware running on it, the I/O load could get really high.
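I can't give numbers for the EQ9 specifically, but a rough way to compare the two setups yourself is a simple sequential test on the assembled array; the mount point /mnt/raid below is just a placeholder:

dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct   # sequential write, bypassing the page cache
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct              # sequential read back
cat /proc/mdstat                                                      # for software RAID: array state and any resync activity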
So I've just got a server with 2x SATA RAID 1 (OS, cPanel and everything on here) and 4x SCSI RAID 10 (clean).
Which one do you guys think will give the best performance:
1. Move mysql only to 4xSCSI raid 10
2. Move mysql and home folder to 4xSCSI raid 10
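Whichever option you pick, the mechanics of moving MySQL are the same; a rough sketch, assuming the RAID 10 array is mounted at /scsi (a placeholder) and a stock /etc/my.cnf:

/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ /scsi/mysql/   # copy the datadir onto the RAID 10 array
chown -R mysql:mysql /scsi/mysql
# point MySQL at the new location in /etc/my.cnf:
#   datadir=/scsi/mysql
/etc/init.d/mysql start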
I am in a somewhat complicated situation... I wanted to order a custom server with hardware 3Ware RAID controller but after over a month of waiting I was told the HW RAID controller, as well as any other 3Ware controller they tried, does not work with the motherboard used in the server from Fujitsu-Siemens and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer a HW raid, I am forced to either choose a different webhost or setup a software RAID. The problem is, I haven't done that before and am somewhat moderately...scared
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4Ghz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSe, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6 but I am not sure if their server chassis can hold that many HDDs, I am awaiting answer from them. They don't have any other drives beside the 250GB ones so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10 so I will have to mount the /boot partition elsewhere. It's probably terribly simple...if you have done it before which I have not. I have read some articles on how to setup a RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to setup the boot partition. Should it be setup as a small sized (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in near future) RAID-1 swap partition on each of the disks or swap to a file on the main RAID-10 partitions. The second sounds simpler but what about performance? Is swapping to a file on RAID-10 array a bad idea, performance wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it does support it directly, without having to create RAID-0 on top of RAID-1 pairs, as long as the support is in the kernel.
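As a sketch of the layout described (eight data disks plus a ninth as hot spare, a small RAID-1 for /boot across all drives, a small RAID-1 for swap, and the rest as RAID-10). The device names and partition scheme are assumptions, and the boot loader has to be installed on every drive's MBR so the box can boot off any surviving disk:

# /boot: small RAID-1 mirror across every drive (with the default 0.90 metadata, GRUB sees each member as a plain partition)
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
# swap: a small RAID-1 so a dead disk can't take down swap
mdadm --create /dev/md1 --level=1 --raid-devices=8 /dev/sd[a-h]2
# everything else: RAID-10 with the ninth drive as hot spare
mdadm --create /dev/md2 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[a-h]3 /dev/sdi3
mdadm --detail --scan >> /etc/mdadm.conf
for d in a b c d e f g h; do grub-install /dev/sd$d; done   # boot loader on every drive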
How often do RAID arrays break? Is it worth having RAID in case a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted on my system and, in the event of a failure, just pop in a new hard drive, reload the OS, and then reload all my backups.
I have a new server and it is rather slow during RAID 1 recovery after the system was installed.
CPU: Intel Core2Duo E5200 Dual Core, 2.5GHz, 2MB Cache, 800MHz FSB
Memory: 4GB DDR RAM
Hard Disk 1: 500GB SATA-2 16MB Cache
Hard Disk 2: 500GB SATA-2 16MB Cache
root@server [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
256896 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
md2 : active raid1 sdb4[2] sda4[0]
480608448 blocks [2/1] [U_]
[=======>.............] recovery = 36.7% (176477376/480608448) finish=1437.6min speed=3445K/sec
The sync speed is just 3.4MB/second, and the total time needed comes to more than 40 hours.
Also the server load is very high (nobody uses it)
root@server [~]# top
top - 07:00:14 up 16:55, 1 user, load average: 1.88, 1.41, 1.34
Tasks: 120 total, 1 running, 119 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4148632k total, 747768k used, 3400864k free, 17508k buffers
Swap: 5421928k total, 0k used, 5421928k free, 569252k cached
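A resync crawling along at a few MB/s is often just md's conservative throttling rather than a hardware problem, and the load average of ~1.9 on an otherwise idle box is most likely just the resync threads being counted. The throttle floor can be raised on the fly (values are in KB/s per device):

cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max   # defaults are 1000 / 200000
echo 50000 > /proc/sys/dev/raid/speed_limit_min                             # let the resync use up to ~50MB/s even under load
cat /proc/mdstat                                                            # the speed= figure should climb within seconds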
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (1 hot spare) vs RAID 10 as my 1U server has 4 HDD tray.
RAID 5 would have better capacity but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non raided setup into Linux software raid, while it is live, and if it's the OS drive? Can you even software raid the OS drive remotely? I've been thinking about doing it for the redundancy (and possible slight performance boost for reads, but doing it more for redundancy). I'm using CentOS.
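It can be done remotely, though it takes care; the usual trick is to build a degraded RAID-1 on the new disk, copy the live system over, boot from the array, and only then add the original disk. A compressed sketch, assuming the OS lives on /dev/sda1 and the new empty disk is /dev/sdb (both placeholders):

sfdisk -d /dev/sda | sfdisk /dev/sdb                                    # clone the partition layout
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1    # degraded mirror with one member
mkfs -t ext3 /dev/md0
mount /dev/md0 /mnt && rsync -ax / /mnt/                                # copy the running system
# edit /mnt/etc/fstab and the boot loader to use /dev/md0, reboot into the array, then:
mdadm /dev/md0 --add /dev/sda1                                          # pull the original disk into the mirror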
I want to take some data from a RAID disk (taken from a RAID-1 system). I've already put it into a new system, but this system doesn't have any RAID.
When viewing "fdisk -l", it says /dev/sdb doesn't contain a valid partition table. Is there any way I can mount it now? I am on a CentOS 4 box.
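Most likely yes: a RAID-1 member often doesn't show a normal partition table, but mdadm can start it on its own in degraded mode. A sketch, assuming the member is /dev/sdb1; if the array was built on the whole disk, try /dev/sdb instead:

mdadm --examine /dev/sdb1                   # confirm it carries an md superblock and see the old array details
mdadm --assemble --run /dev/md0 /dev/sdb1   # start the mirror with just this one half
mount /dev/md0 /mnt                         # then copy the data off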
Ok here's a potentially dumb newbie question, but I have to know. How many of you mirror your sites, and how do you set it up?
Is it as simple as getting 2 different webhosts, and setting your DNS settings to
primary: DNS.HOST1.COM
secondary: DNS.HOST2.COM
And, in the event of HOST1 going down, will the magical internet genie know to direct all your traffic to HOST2?
Also, what's this I've heard something about a "round-robin DNS" setup? I read it in a thread in response to someone who was trying to manage bandwidth. The suggestion was to set up a bunch of mirrors with a round-robin DNS so that traffic gets split equally.
This is all very interesting & new to me. How do I do it?
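Round-robin DNS is just multiple A records for the same name: resolvers rotate through the answers, which spreads traffic across hosts but does not detect a dead one, so it is load distribution rather than automatic failover. A minimal BIND-style zone snippet with placeholder addresses:

; both records answer for www; clients receive them in rotating order
www   IN  A   192.0.2.10      ; server at HOST1
www   IN  A   198.51.100.20   ; server at HOST2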
My server's drives are configured with RAID-1.
How can I check whether my server is configured with a 3ware (hardware) controller or with software RAID?
Also, please advise how I can monitor the RAID configuration to see whether the RAID is working fine or not.
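A couple of quick checks should settle it; the 3ware CLI only applies if a 3ware card actually turns up:

cat /proc/mdstat                   # md0/md1 entries here mean Linux software RAID
lspci | grep -i -e 3ware -e raid   # a 3ware (or other) hardware controller shows up on the PCI bus
tw_cli info                        # 3ware's CLI, if installed: lists controllers and unit status (OK/DEGRADED/REBUILDING)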
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump up the ram to 2 gb for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But, the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks. But, the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?) Better ideas in general on the server?
Just curious what your thoughts are on performance:
2 SCSI Drives 10k w/RAID 1
or
4 SATA 10k w/RAID 10
Prices are not too different with 4 drives just being a tad more.
I'm wondering how well software RAID can perform and how it compares to hardware RAID. How does software RAID actually work, and is it worth it?
How should I go about setting up software RAID if I were going to? Or would you recommend just using hardware RAID instead?
Which do you guys recommend of the following?
4x 73GB 15,000rpm SAS drives in a RAID 10
or
4x 73GB 15,000rpm SAS drives in a RAID 5 w/ online backup