Linux & Hardware RAID Configurations
Aug 14, 2007 - I am looking at implementing RAID 5 on my RHEL4 box, and am wondering what the best configuration would be. I'm not very familiar with LVM, but I've heard great things about it.
I recently reinstalled my Plesk Panel 12.0.18 after several failures which I wasn't able to repair (not even bootstrapper.sh would work). None of my websites were accessible, the connection between the websites and the database server didn't work, and the Plesk backend was unavailable too. I used the autoinstaller via the command line to make a new Plesk installation.
Now I have a clean panel, the websites are available again, the MySQL database works again, but I don't have all my settings and websites in the Plesk backend anymore.
My question is: how can I get the old settings/configurations, which are still on the server, back into Plesk?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer hardware RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat...scared.
I have read a lot of the information about software RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many drives; I am awaiting an answer from them. They don't have any other drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10 array, so I will have to put the /boot partition elsewhere. It's probably terribly simple...if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ), but they usually don't talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID-1 swap partition across the disks, or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array the way you can grow a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it does support it natively (without having to create RAID-0 on top of RAID-1 pairs) if the support is in the kernel.
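For what it's worth, the rough shape of the mdadm commands for that kind of layout would be something like the sketch below, assuming eight data drives already partitioned with a small first partition for /boot and a large second partition for the array, plus a ninth drive as hot spare; the device names and counts are placeholders, not a tested recipe.
Code:
# /boot as RAID-1 across the small first partitions
# (GRUB/LILO can boot this, because each member looks like a plain filesystem)
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1

# main array as native RAID-10 across the large second partitions,
# with the ninth drive's partition as a hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 \
    /dev/sd[a-h]2 /dev/sdi2

# watch the initial build
cat /proc/mdstat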
Quote:
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
[url]
I have been reading various articles on these forums regarding the issue of installing a VNC server on my VPS using GNOME.
I have been following a guide produced by a member called TouchVPS and so far I have had no problems; I have successfully completed all of the stages below, although when I connect to my server via UltraVNC Viewer I am presented with a grey screen and a black cross.
Further looking into the issue, I decided to run gnome-session to see whether the application would execute in memory (via SSH) and encountered the following message:
Code:
Gtk-WARNING **: cannot open display
I realise this might be because I am trying to launch it from the shell, but I was wondering whether it had anything to do with the problem.
Anyhow, here is the instructional guide I have followed:
Quote:
for centos/fedora:
1. yum -y update
2. yum -y install gnome*
3. yum -y install vnc-server vnc nano
now use:
4. vncserver (set your VNC password); the results will be:
====
xauth: creating new authority file /root/.Xauthority
New 'desktop:1 (root)' desktop is desktop:1
Creating default startup script /root/.vnc/xstartup
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/desktop:1.log
=== that means VNC is up and running on port 5901. Now use:
5. killall -9 Xvnc
6. nano .vnc/xstartup
delete "twm &" and replace it with "gnome-session &", then save.
7. vncserver (the results will be like this):
====
Warning: desktop:1 is taken because of /tmp/.X1-lock
Remove this file if there is no X server desktop:1
New 'desktop:2 (root)' desktop is desktop:2
Starting applications specified in /root/.vnc/xstartup
Log file is /root/.vnc/desktop:2.log
===== that means your VNC is up and running on port 5902 and you are ready to go.
These are easy steps; I use them on many virtual servers with CentOS and Fedora and they always work without a problem.
Or contact me and I will be more than happy to install it for you for free.
I have completed all of the steps in this tutorial and I am running CentOS 5 if this helps.
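Two hedged observations that may or may not apply here: a grey screen with an X-shaped cursor is what a bare X server looks like when no window manager or session is started from ~/.vnc/xstartup, and the "cannot open display" warning is expected when running gnome-session from a plain SSH shell (no DISPLAY set), so it doesn't prove much either way. After step 6, the xstartup the guide describes would end up looking roughly like this (a sketch; the stock file varies by distribution, the important part is the last line, and the file must stay executable):
Code:
#!/bin/sh
# ~/.vnc/xstartup - run by Xvnc for each new VNC desktop
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
# "twm &" used to be here; start a full GNOME session instead
gnome-session &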
When buying a dedicated server, what configuration would you expect as standard for memory, hard drives, and RAID?
And what would your budget be for this configuration? What do you expect to pay for it?
The CPU would be one of the following:
Single CPU Quad Core Xeon 3220
Single CPU Quad Core Xeon 5430
Dual CPU Quad Core Xeon 5430
What standard configuration would you expect on these?
I have a technical question that I cannot figure out.
Right my system spec is:
CentOS 4.4 Server (VPS OpenVZ)
cPanel and WHM
IMAP Server = uwimap
Mail Server = Exim
POP3 Server = cppop
OK, I have a small business with 20 users, 4 of whom are partners. I want all 20 users to have their own mailbox and address.
However, I want all email sent and received copied to the 4 partners' mailboxes.
So, for example.
Employee 1 (joe.bloggs@domain.com) gets an email into his mailbox, and then he replies to it.
Ideally, all emails to and from joe.bloggs@domain.com would get copied into a folder within each of the partners' mailboxes, but one step at a time, eh.
What I want is the incoming email and the reply copied to all of the partners' mailboxes as well, by default.
Is there a way to set this up on the server, instead of in the email client, as the client could be tampered with?
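One server-side approach, offered only as a hedged sketch (this is not cPanel's built-in way of doing it, the file path and partner addresses are made up, and on cPanel you would extend its existing system filter rather than replace it), is an Exim system filter that delivers an unseen copy of every message for the domain to the partners:
Code:
cat <<'EOF' > /etc/exim_partner_copies
# Exim filter
# Copy all mail to or from domain.com addresses to the partners,
# without affecting normal delivery ("unseen").
if "$header_to:, $header_cc:, $sender_address" contains "domain.com"
then
    unseen deliver partner1@domain.com
    unseen deliver partner2@domain.com
endif
EOF
# then reference it from the Exim configuration, e.g.
#   system_filter = /etc/exim_partner_copies
# (on cPanel the active system filter is /etc/cpanel_exim_system_filter)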
I have a server hosting 275-300 sites,
and I want to limit resource usage for all sites using PAM limits.
What would be a good configuration for the limits.conf file?
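As a rough starting point only (the numbers below are illustrative and need tuning for the workload, and pam_limits only applies to PAM sessions such as SSH, cron and FTP logins, not to processes Apache itself spawns), the /etc/security/limits.conf format looks like this:
Code:
cat <<'EOF' >> /etc/security/limits.conf
# domain   type   item     value
*          soft   nproc    30
*          hard   nproc    45
*          soft   nofile   1024
*          hard   nofile   2048
root       -      nproc    unlimited
EOF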
I have two problems regarding Mailman Mailing Lists.
1) The Mailman interface is usually located under lists.domain.tld/mailman (I changed the config from /cgi-bin/mailman/ to just /mailman/), but it seems the Apache configs for this subdomain are not applied; I only see the server's default page when visiting this URL. domain.tld/mailman works, though. It would be great if lists.domain.tld/mailman actually worked and domain.tld/mailman didn't.
How do I change/repair the configurations properly? I've installed the newest MU and already tried to reinstall Mailman.
2) I get an Internal Server Error when visiting the domain.tld/mailman interface. The reason is mod_suexec, which I need to disable for domain.tld/mailman, but where and how? All the vhost configurations are created automatically.
I could imagine this is related to my first problem, and that mod_suexec is properly handled in the lists-subdomain config.
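Only as a hedged sketch, since Plesk generates most of this itself and the right place may be a per-domain vhost.conf include rather than a global file: Mailman's web interface is a set of setgid CGI wrappers, and suexec refuses to execute setgid binaries, which would explain the Internal Server Error. The standard alias block, served from a context where SuexecUserGroup is not set, looks like this (the file name and paths assume a stock Mailman install and may differ on your system):
Code:
cat <<'EOF' > /etc/httpd/conf.d/mailman-lists.conf
# Mailman CGI and archive aliases; keep these where suexec does not apply,
# because suexec will not run the setgid mailman wrappers.
ScriptAlias /mailman/ /usr/lib/mailman/cgi-bin/
<Directory "/usr/lib/mailman/cgi-bin/">
    AllowOverride None
    Options ExecCGI
    Order allow,deny
    Allow from all
</Directory>
Alias /pipermail/ /var/lib/mailman/archives/public/
EOF
service httpd configtest && service httpd restart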
I'm having problems setting up email in Outlook with the cPanel configuration. I've exhausted Google trying to find an answer to this. Maybe someone here can help or has had a similar problem.
I've set up accounts in cPanel (e.g. info @ example.com), have the incoming and outgoing mail server set to mail.example.com, and I even checked "server requires authentication", but I still can't receive emails. I changed the outgoing server to smtp.internetprovider.com and it still doesn't work. I get a prompt asking for the network server password; I click OK with the login and password, but it keeps popping up.
I am putting together a 1U Linux server. It's for VPSes.
I intend to use software RAID 1 with 2x 500GB hard drives. I am wondering if I am better off buying two different makes of disk, to mitigate the risk of a bad batch of disks. Is there any disadvantage to this, i.e. is there a good reason to keep the disks identical?
The OS will be CentOS 5 x64.
I know that on Windows you just add a new HDD, right-click, and select drive mirror, on an already fully operational box.
On Linux, I know you can set up software RAID during the initial installation.
Is there any way to add mirroring of any kind to an already installed system?
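Converting a running single-disk system to RAID-1 is usually done by building a degraded mirror on the new disk, copying the system over, booting from the degraded array, and then adding the original disk. A sketch only (it assumes the running system is on /dev/sda and the new empty disk is /dev/sdb, and it omits the fstab and bootloader changes that are also needed):
Code:
# 1. copy the partition layout to the new disk (then set the types to "fd")
sfdisk -d /dev/sda | sfdisk /dev/sdb

# 2. create a RAID-1 with only the new disk; "missing" is the absent member
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1

# 3. filesystem plus a copy of the running system
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/

# 4. after fstab/grub are updated and the box reboots onto /dev/md0,
#    add the original disk so the mirror rebuilds
mdadm --add /dev/md0 /dev/sda1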
I've just bought myself a Linux-based NAS for storage/backups at home, along with a couple of WD GreenPower (non-RAID edition) HDDs.
For those who don't know what TLER (Time Limited Error Recovery) is: without it enabled, the HDD does its own error recovery, which may take longer than the time a RAID controller will accept, in which case the drive is kicked out of the array. With TLER on, the idea is that the drive keeps notifying the controller, or the controller handles the error.
So, my actual question is: does Linux software RAID benefit from TLER being enabled? Or is it best to let the drive do its own thing?
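For md specifically the benefit is debatable, since md is generally more patient with slow reads than hardware controllers, but it can still mark a member faulty after repeated timeouts. On drives that expose SCT Error Recovery Control you can inspect or set the timeout with smartctl (a sketch; many WD Green non-RAID drives refuse this command entirely, in which case the old wdtler tool or simply leaving it alone are the remaining options):
Code:
# show the drive's current error-recovery settings, if supported
smartctl -l scterc /dev/sda

# limit read/write recovery to 7.0 seconds (values are tenths of a second)
smartctl -l scterc,70,70 /dev/sda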
I've been using it a bit at home, and while I've always bashed software RAID, I have to say it's quite impressive and very manageable. I could see it becoming super easy to deploy cheap RAID without paying extra for a RAID controller, given the right custom software.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough, with hard drive information held in its memory, if the power goes out during a rebuild?
I have bought an Intel SR2500LX server chassis with an S5000PAL mainboard running CentOS. This system has an active backplane with an LSI MegaRAID chipset.
I'd like to be notified in the case of a drive failure but I'm totally stumped on how to get any monitoring working.
The only RAID management utility Intel supplies is a Java-based monstrosity which needs X Windows to run and has to be running continuously for the email notification to work.
Web-based management can only be used from the same subnet as the server, so that's not very useful either.
I've contacted Intel support, whose advice was to reboot the server and use the BIOS utility if I want to check the RAID array's consistency. Needless to say I'm very disappointed; this server has every redundancy feature you can think of, but it seems impossible to monitor the RAID under Linux.
Does anybody have experience with the Chassis and Raid monitoring under Linux?
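Depending on which LSI chip the backplane actually uses, LSI's command-line tools may work without X at all; for MegaRAID SAS controllers the usual tool is MegaCli. A crude hourly check might look like the sketch below (the MegaCli path and mail recipient are placeholders, and whether MegaCli supports this particular Intel/LSI combination is an assumption you would have to verify):
Code:
#!/bin/sh
# /etc/cron.hourly/check-raid - mail root if any logical drive is not Optimal
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64
if $MEGACLI -LDInfo -Lall -aALL | grep "^State" | grep -qv "Optimal"; then
    $MEGACLI -LDInfo -Lall -aALL | mail -s "RAID problem on $(hostname)" root
fi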
I could try the software RAID 5 on the EQ9 server from Hetzner.
Does anyone here have experience with how fast hardware RAID 5 is compared to software RAID 5?
The i7-975 should have enough power to compute the redundancy on the fly, so there should be minimal impact on performance. But I have no idea.
I want to run the server under Ubuntu 8.04 LTS x64.
With virtualisation like VMware on it, the I/O load could get really high.
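Raw throughput and parity overhead are easy to measure once the box exists; a crude sequential-write check plus a look at per-device load might be (a sketch only; the sizes and test path are arbitrary, direct I/O is used to bypass the page cache, and this says nothing about the random I/O a VM host mostly generates):
Code:
# sequential write test, bypassing the page cache
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct
rm /tmp/ddtest

# per-device throughput and CPU utilisation every 5 seconds (Ctrl-C to stop)
iostat -x 5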
So I've just got a server with 2x SATA in RAID 1 (OS, cPanel and everything else on here) and 4x SCSI in RAID 10 (clean).
Which one do you guys think will give the best performance:
1. Move mysql only to 4xSCSI raid 10
2. Move mysql and home folder to 4xSCSI raid 10
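Whichever option you pick, relocating MySQL is the same mechanical process; a sketch, assuming the SCSI RAID 10 is already mounted somewhere like /home2 (the mount point is a placeholder) and that cPanel's init script is in use:
Code:
# stop MySQL and copy its data directory onto the RAID 10 array
/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ /home2/mysql/

# keep the old path working via a symlink (or change datadir in /etc/my.cnf)
mv /var/lib/mysql /var/lib/mysql.old
ln -s /home2/mysql /var/lib/mysql
/etc/init.d/mysql start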
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted on my system and, in the event of a failure, just pop in a new hard drive, reload the OS, and then reload all my backups.
I have a new server and it is rather slow during RAID 1 recovery after the system was installed.
CPU: Intel Core2Duo E5200 Dual Core, 2.5GHz, 2MB Cache, 800MHz FSB
Memory: 4GB DDR RAM
Hard Disk 1: 500GB SATA-2 16MB Cache
Hard Disk 2: 500GB SATA-2 16MB Cache
root@server [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
256896 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
md2 : active raid1 sdb4[2] sda4[0]
480608448 blocks [2/1] [U_]
[=======>.............] recovery = 36.7% (176477376/480608448) finish=1437.6min speed=3445K/sec
The sync speed is just 3.4MB/second, and the total time needed is more than 40 hours.
Also, the server load is very high (and nobody is using it):
root@server [~]# top
top - 07:00:14 up 16:55, 1 user, load average: 1.88, 1.41, 1.34
Tasks: 120 total, 1 running, 119 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4148632k total, 747768k used, 3400864k free, 17508k buffers
Swap: 5421928k total, 0k used, 5421928k free, 569252k cached
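One thing worth checking: md throttles resync to dev.raid.speed_limit_min (1000 KB/s by default) whenever it thinks the array is busy, so a rebuild can crawl even on a lightly used box. Raising the limits often speeds it up dramatically if the disks are healthy (a sketch; if the speed stays low even with higher limits, suspect DMA settings or a failing drive instead):
Code:
# current throttle values, in KB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# raise them for the duration of the rebuild
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# watch the effect
watch -n 5 cat /proc/mdstat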
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
I want to take some data from a RAID disk (taken from a RAID-1 system). I've already put it into a new system, but this system doesn't have any RAID.
When running "fdisk -l", it says /dev/sdb doesn't contain a valid partition table. Is there any way I can mount it now? I am on a CentOS 4 box.
My server has our drives configured with RAID-1.
How can I check whether my server is configured with 3ware hardware RAID or software RAID?
Also, please advise how I can monitor the RAID configuration, so I know whether the RAID is working fine or not.
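A few quick checks from the shell will usually answer both questions (a sketch; tw_cli is 3ware's own CLI and only exists if a 3ware card and its tools are installed, and the mdadm monitor line only applies to software RAID):
Code:
# software RAID arrays, if any, show up here
cat /proc/mdstat

# a hardware RAID controller shows up on the PCI bus
lspci | grep -i raid

# if it is a 3ware card with tw_cli installed, list controllers and units
tw_cli show

# for software RAID, mdadm can run in the background and mail on failures
mdadm --monitor --scan --daemonise --mail=root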
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump the RAM up to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but the Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas in general on the server?
Just curious what your thoughts are on performance:
2 SCSI Drives 10k w/RAID 1
or
4 SATA 10k w/RAID 10
Prices are not too different with 4 drives just being a tad more.
I'm wondering how well software RAID can perform and how it compares to hardware RAID. How does software RAID actually work, and is it worth it?
How should I go about setting up software RAID if I were going to? Or would you recommend just using hardware RAID instead?
Which do you guys recommend of the following?
4x 73GB 15,000rpm SAS drives in a RAID 10
or
4x 73GB 15,000rpm SAS drives in a RAID 5 w/ online backup
Are there any significant differences between 4x 15K SAS HDs in RAID 10 versus 8x 7.2K SATA II HDs in RAID 10? I have the same question for 2x 15K SAS HDs in RAID 1 versus 4x 7.2K SATA II HDs in RAID 10.