Today we are going to take a detailed look at the RAIDability of contemporary 400GB hard drives. We take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The full performance analysis, along with some useful hints on building RAID arrays, is in our new article.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or as reliable as a dedicated PCI-E card. If it can only do RAID 5 under Windows, does that suggest the controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does that achieve? Surely if the power dies the hard drives and motherboard can't run off that little battery, so does it just keep the controller alive long enough to preserve the data held in its memory if the power goes out during a write or a rebuild?
I am in a somewhat complicated situation. I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server; they simply got a reply from FS that the controller is not certified to work with that motherboard.
So although I'd prefer a HW RAID, I am forced to either choose a different web host or set up a software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but some questions remain unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives other than the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID-10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID at all, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
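Not having seen your exact setup, here is roughly how such a layout is usually built with mdadm, assuming four drives sda-sdd, each with a small first partition for /boot and a big second partition for the main array (the device names and sizes are only placeholders):

    # /boot as RAID-1 across all drives; the 0.90 superblock sits at the end
    # of the partition, so GRUB legacy just sees an ordinary ext3 filesystem
    mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # everything else as native RAID-10
    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

    mkfs.ext3 /dev/md0        # becomes /boot
    mkfs.ext3 /dev/md1        # becomes /
    mdadm --detail --scan >> /etc/mdadm.conf

Installing GRUB to the MBR of every member disk (grub-install /dev/sda, then /dev/sdb, and so on) means the box can still boot if the first drive dies.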
What about swap? Should I create 4-8GB swap partitions on the disks and combine them into a RAID-1 (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
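For what it's worth, one common arrangement (again just a sketch with placeholder partition names) is to mirror the swap as well, so that a single failed disk can't take out pages that running processes depend on; plain per-disk swap partitions are marginally faster, but a drive failure then kills whatever happened to be swapped onto that disk:

    # small third partition on two of the disks, combined into mirrored swap
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    mkswap /dev/md2
    swapon /dev/md2
    # and in /etc/fstab:
    # /dev/md2   swap   swap   defaults   0 0

A swap file on the RAID-10 also works and is simpler to set up; on a box that only swaps occasionally the performance difference is unlikely to be noticeable.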
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't really mention RAID-10, even though, from what I know, it does support it natively (without having to create RAID-0 on top of RAID-1 pairs) as long as the support is in the kernel.
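In case it helps, this is roughly what growing would look like, assuming a kernel and mdadm recent enough to reshape the native raid10 personality (older versions could not grow RAID-10 at all, so check your versions before relying on this):

    mdadm --add /dev/md1 /dev/sde2 /dev/sdf2    # register the two new members as spares
    mdadm --grow /dev/md1 --raid-devices=6      # reshape the array from 4 to 6 devices
    resize2fs /dev/md1                          # then grow the filesystem on top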
How often do RAID arrays break? Is it worth having RAID in case a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted to my system and, in the event of a system failure, pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
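The usual trick, as I understand it, is the "degraded mirror" conversion; a rough sketch follows, assuming the OS lives on /dev/sda and an identical empty disk is /dev/sdb (all names are placeholders, and doing this over remote access with no console fallback is risky, so test it on something disposable first):

    sfdisk -d /dev/sda | sfdisk /dev/sdb             # copy the partition table
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mkfs.ext3 /dev/md0
    mount /dev/md0 /mnt
    rsync -aHx / /mnt/                               # copy the live root filesystem
    # edit /mnt/etc/fstab and the GRUB config to boot from /dev/md0,
    # install GRUB to both MBRs, reboot into the degraded array, then:
    mdadm --add /dev/md0 /dev/sda1                   # pull the original disk into the mirror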
I've been talking to The Planet about trading in my four-and-a-half-year-old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 GB of RAM, 2x 250GB hard disks, RHEL 5, cPanel + Fantastico, and 10 IPs for $162.
Not too bad. I could bump the RAM up to 2 GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks. But The Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that possible?). Better ideas in general on the server?
What would cause a Linux server to run out of swap? Would it be a memory leak? This happened today to my server and it had to be forcefully rebooted by the data center.
Opcode caches have stability problems with some PHP apps:
[url]
I've been trying to get XCache working with Gallery2. Right now Apache has to restart every 2-4 hours, but it's been worse at times, and it'd be nice to know what's going on.
I noticed that both XCache and eAccelerator use a small amount of swap space all the time; it grows slowly, and if I'm lucky enough to go without a segfault it will reach maybe 4MB. That's not a lot, but it doesn't happen at all without a PHP opcode cache (swap stays locked at 72kB unless RAM runs out and the server thrashes).
Here's the error and my settings:
[Tue Feb 19 06:19:55 2008] [error] PHP Warning: Invalid argument supplied for foreach() in /home/cityv4/public_html/gallery/modules/customfield/classes/CustomFieldHelper.class on line 233
[Tue Feb 19 06:19:55 2008] [error] PHP Warning: Invalid argument supplied for foreach() in /home/cityv4/public_html/gallery/modules/customfield/classes/CustomFieldHelper.class on line 233
[Tue Feb 19 06:19:56 2008] [notice] child pid 31028 exit signal Segmentation fault (11)
[Tue Feb 19 06:19:56 2008] [notice] child pid 10638 exit signal Segmentation fault (11)
I have purchased several VPSes from a provider and found they do not provide swap space with the VPS, and even with 256MB of RAM I get 'out of memory' trying to compile a Perl library... Creating a swap file myself doesn't work (operation is not permitted). The hosting provider runs HyperVM.
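For comparison, this is the standard swap-file recipe on an ordinary Linux box; inside an OpenVZ/HyperVM-style container the final swapon is normally refused, which would explain the 'operation is not permitted' error:

    dd if=/dev/zero of=/swapfile bs=1M count=512    # 512MB file of zeroes
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile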
So the question: is this common, or is it a misconfiguration? For now all I have got is 'checking on this for you' and three days of silence from their support.
I'm not naming the hosting provider, but I will if they tell me "You must pay for more RAM", simply because the other 5 VPS providers I use do give my VPS servers swap space.
I have talked with the moderators and they have agreed that I can start a new thread to clarify the issue of Xen and swapping, as long as the discussion remains technical.
For people who are curious, I would first like to explain why this is important. What we have here is someone making a specific technical accusation against Xen, and if it is indeed valid, it needs to be solved, or else people need to know about it before they get into Xen.
Claim Number 1: The original claim is that users can create arbitrarily large swap and this can lead to the equivalent of overselling.
Fact: Arbitrarily large swap has absolutely zero effect on a normal system, since Linux treats swap as auxiliary storage and will not use it unnecessarily. Linux will always use the RAM to the full, but swap is touched only once the buffers and cache have already been reclaimed. If you run free on dedicated hardware, you will see that swap usage is zero most of the time, even though you have assigned a very large swap to the system. In fact, you can try this by simply increasing the swap to a very large value; you will see that Linux ignores it completely.
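A quick way to check this on any box you have access to is sketched below: free shows how much swap is merely allocated versus actually used, and the si/so columns of vmstat show whether pages are really moving to and from swap right now.

    free -m          # the "Swap: used" figure stays near zero on a healthy box
    vmstat 5 5       # si/so stay at 0 when swap is allocated but idle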
For large swap to cause a problem, the user not only has to assign a large swap, he has to actually run a workload far bigger than his limited RAM, and doing that will cripple his VPS long before it has any serious impact on the host.
I think the above has been agreed to by the person who made the initial claim.
Claim Number 2: Merely having swap can lead to the VPS getting slowed down.
Fact: The use of swap does not mean thrashing.
Thrashing is a technical term: it means that the application is registering a swap hit every few instructions. This is rare on normal systems because of a property of programs known as locality of reference: at any particular time, a certain portion of the program is being executed continuously while other portions sit idle. Linux has algorithms that swap out the least recently used pages, which means the system will not run into too many swap hits.
Now again, as in the earlier case, there will be trouble if the VPS customer is trying to run a 1GB workload in 64MB of RAM. That will indeed cripple his VPS, which is the right thing to happen. So normal usage of swap will not lead to heavy disk I/O, since Linux has explicit algorithms to keep swap activity down.
So, for minor over-usage of memory:
For Virtuozzo: the application will crash.
For Xen: there will be a very minor degradation of quality. Sometimes it won't have any effect at all, since, as I said, just because the swap space is non-empty doesn't mean that Linux is constantly swapping out.
So summarizing:
a) Inordinately large swap has zero effect on Linux
b) Non-empty swap doesn't mean that the system is registering swap hits. For normal workloads, the swap hits will be very minimal.
And one extra piece of advice: if you are using Xen, don't ever use snapshotting, as it will double the disk I/O, and the worst part is that the VPS causing it will not even be penalized. The overhead will be completely borne by the system.
I've come across a Xen-based plan that offers an additional 256MB of swap space, and a few other Virtuozzo-based plans that offer bursting capability (some even up to 8GB).
Which would be the better option?
Also, in Virtuozzo, will SLM allocation be significantly more beneficial than UBC for a small VPS?
Another question... DirectAdmin officially claims to be able to run on a minimum of 64MB of RAM. How would it perform on a 128MB VPS? I'm not really looking to do much, only to host a few (5-6) small sites along with some other non-webhosting applications (which is the reason for getting a VPS instead of a reseller account).
I've seen most people suggest at least 256MB for DirectAdmin, but that is beyond my budget. (I'm also seriously considering the option of employing vi as a control panel to further conserve my limited resources.)
I have a handful of machines running Rails on CentOS 5, each with 8GB RAM. They're monitored with Nagios, so I get paged if we exceed 80% memory usage or 2% swap. FWIW, they're primarily dual dual-core Opterons (two sockets, two cores each), and we're using PAE.
The problem is that I keep getting paged in the middle of the night for situations where we're using 2-20% swap yet there are still several gigs of RAM free.
On some level, this makes sense -- if nothing is actively using those pages, we might as well move them out to disk. On the other hand, when there are still several gigs of RAM free, I don't see any point in bothering, especially when it causes me to get paged at 2am.
So my question is threefold:
Is there any easy way to see what's using swap? top can be made to show a SWAP column, but it's not what you'd think it is. (For example, most processes show as using lots of swap, and the sum vastly exceeds the actual size of our swap partition.) Can I get a list of which processes are actually swapped out to disk? (See the sketch after these questions.)
Is this a problem other people have? Are others just less aggressive about alerting on swap usage, or does this not happen to other people? Our setup is probably not too common (Opterons, PAE, and 8GB on an older CentOS release), which makes me think there's a small chance a bona fide bug plays into this somehow.
Is this a case for playing with the swappiness sysctl to make the machine less obsessed with swapping things out to disk? And if so, is there any good documentation on what the 0-100 value actually means, beyond the one sentence explaining that 100 is most likely to swap and 0 is least? Does the current setting of 60 "mean" anything, for example? 98% of the stuff online is just a mirror of the e-mail thread in which Andrew Morton and others go back and forth about what setting is ideal, which doesn't help and is tiring to read over and over.
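Since I had to dig for this myself, here is a rough sketch of both pieces. It assumes a kernel new enough to expose a Swap: field in /proc/<pid>/smaps; the SWAP column in older versions of top is just VIRT minus RES, which is why it looks so inflated.

    # sum the per-mapping Swap: lines for every process and list the top users
    for dir in /proc/[0-9]*; do
        swap=$(awk '/^Swap:/ {s += $2} END {print s + 0}' "$dir/smaps" 2>/dev/null)
        name=$(awk '/^Name:/ {print $2; exit}' "$dir/status" 2>/dev/null)
        [ "${swap:-0}" -gt 0 ] && printf '%8d kB  %s (pid %s)\n' "$swap" "$name" "${dir#/proc/}"
    done | sort -rn | head

    # lower swappiness so the kernel prefers dropping page cache over swapping
    # out anonymous pages; 60 is simply the long-standing default
    sysctl -w vm.swappiness=10
    echo 'vm.swappiness = 10' >> /etc/sysctl.conf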
Is there any specific way (maybe logs) to see what is using the swap memory of a dedicated server?
It's a server with 8GB of RAM; it has 60% of memory used and a constant swap usage of 30%. I thought that under normal conditions swap was not used.