It seems the HighPoint RocketRAID 3120 is the cheapest two-port hardware SATA2 RAID card out there with advertised Windows, Linux, FreeBSD and Mac support; it's based on a Marvell IOP and has 128MB of RAM. They say the drivers have already been backported to the CentOS/Red Hat kernels.
I've recently put together a server in a hurry and overlooked an important aspect - data integrity after power loss. I'm using Linux software RAID-1 with two 150GB WD Raptors but I'm worried that data could be lost due to having write-back cache enabled without a battery backup unit. I would rather not disable the write-back cache for performance reasons.
What is the cheapest way to get a battery backup solution for Linux software RAID? Do I have to use a hardware RAID card or do standalone battery backup units exist that can use existing motherboard SATA ports?
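For context, with md software RAID there is no controller cache to put a battery on; the volatile cache at risk is on the drives themselves. A hedged sketch of the usual workaround, toggling the drives' own write-back cache (the /dev/sda and /dev/sdb names are assumptions for the two Raptors; run as root):

```shell
# Query the current write-cache setting (hedged example)
hdparm -W /dev/sda
# Disable the on-drive write-back cache on both mirror members.
# Note: this costs write performance, which is the trade-off at issue.
hdparm -W0 /dev/sda
hdparm -W0 /dev/sdb
```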
Just bought 2 servers with the same specs from FDC, sold as hardware RAID1 with RocketRAID controllers,
but I doubt that's real hardware RAID, since performance is terrible on one of the servers.
Here are the fdisk/iostat/lspci/proc outputs for both servers.
First, the server which seems to be OK, with no lag or iowait even at peak:
Quote:
root@server1 [~]# fdisk -l
Disk /dev/sda: 500.0 GB, 500028145664 bytes
255 heads, 63 sectors/track, 60791 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26        1069     8385930   82  Linux swap / Solaris
/dev/sda3            1070       60791   479716965   83  Linux
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives, taken to a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The full performance analysis and some useful hints on building RAID arrays are in our new article.
Is Motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 under Windows, does that suggest the controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
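As a side note, one hedged way to tell host-RAID ("fakeRAID") from a real hardware controller, once you have shell access on any box with the same chip, is to check whether the OS needs dmraid to assemble the array:

```shell
# Identify the controller chip (a real HW controller usually presents
# one logical disk to the OS; fakeRAID exposes the raw member disks)
lspci | grep -i raid
# dmraid lists BIOS/fakeRAID member disks, if any are present
dmraid -r
```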
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6; would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it. What does this achieve? Surely if the power dies, the hard drives and motherboard can't run off this little battery; or does it just keep the controller's cache alive long enough to preserve in-flight writes if the power goes out, say during a rebuild?
I have been running my sites on dedicated servers from The Planet for 4 years now. I go through about 500,000 page views per day with a fairly intensive web app (1000+ queries/s). Mosso keeps coming up and it really interests me. The promised peace of mind would be a relief, and I often have trouble with site lag during peak times even with fairly high-end servers. I am wondering if Mosso could work for me, or if it is just too good to be true.
I searched the forums, but most of what I found was from 2006 when it first came out, wondering if I can get any info from people who have had experience or heard things more recently. I am really interested about loading times. I read from a lot of people that they experienced slow load times for pages (again these posts were at least a year old), but I viewed some sites hosted on Mosso and they seemed quite fast to me.
Just signed up for a new FutureHosting VPS last night. Their pricing is very good, and the reviews on WHT are generally kind to them.
I paid for the $44.99 account with a 30-domain Plesk licence. They have a 35% discount on at the moment that made the package very tempting. It guarantees 768MB RAM, 30GB disk space, etc.
Account activated several hours ago.
It took me 1-2 minutes to establish an SSH connection and log in. Trying to log in to the Plesk control panel for the first time has taken 6-7 minutes, and it's still loading as I write this.
Note all I'm doing is running top and trying to access Plesk.
This doesn't look too impressive to say the least.
I know this happens from time to time with most VPS solutions. I've probably been unlucky that I've hit a spike in server load at the precise moment I tried to use the account for the first time.
But anyone have any experiences with FutureHosting that suggest this could be an ongoing problem?
Looks to me like they've stopped renewing domains. Everything for me is intact, but shortly after the renewal date the domain name stopped working, and the email addresses that come with the service don't work either. I had 5 different support email addresses for them, and none of them work; ditto for any support phone numbers. Too bad, as I'd had 2 years with no problems.
I've contested the last payment made through PayPal, though I don't expect that will do much good. I don't really care, except that it's a nuisance and costs some $ to move on. I also have a personal web site with them and that's working fine, though it will probably die when the year is up. I'm looking through the other services to find a safe, inexpensive one.
I have been searching and searching for a solution. We are currently using one single vps to host some of our clients. We are finding more and more that we need to have some redundancy.
I have looked at using DNS failover with rsync/MySQL replication etc. across two servers, but I just don't like the idea.
I have also looked at hosts like imountain etc. that use H-Sphere. I don't like this setup because services are split onto single machines: for example, mail is handled by one single server, so if that server is down, mail is down.
What I am looking for:
I am trying to stay within a budget of $150/month or less.
I would like to get one of the two options here:
Option 1: two VPSes or dedis that technically act as one (a true cluster), with the OS and control panel on top of that, and done. This solution doesn't cover whole-datacenter outages or network issues, though.
Option 2: two geographically separated VPSes or dedis that are somehow either load balanced or set up for failover.
Ultimately our goal is to have high uptime, but we dont really have much server load.
Basically failover is OK, as loads are always low anyway, but if we are paying big $$ it would be nice to have it load balanced.
Please let me know if my expectations are way too high or my price is way too low. I need to find a solution here somehow, and if I can't find anything I will most likely just go with DNS failover.
I am in a somewhat complicated situation... I wanted to order a custom server with hardware 3Ware RAID controller but after over a month of waiting I was told the HW RAID controller, as well as any other 3Ware controller they tried, does not work with the motherboard used in the server from Fujitsu-Siemens and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up a software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I prefer 8 HDDs (or actually 9) over 6 but I am not sure if their server chassis can hold that many HDDs, I am awaiting answer from them. They don't have any other drives beside the 250GB ones so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which I believe has to be on RAID-1 or no RAID, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be migrated to RAID-10.
How do I do that? The big problem I see is that LILO and GRUB can't boot from a software RAID-5/10 array, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
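For what it's worth, the usual layout is exactly that: a small RAID-1 /boot mirrored across all member disks (GRUB can read one half of a RAID-1 mirror as if it were a plain ext3 partition), with everything else in RAID-10. A hedged sketch, assuming eight disks sda through sdh with partition 1 reserved for /boot and partition 2 for the big array, plus sdi as the spare (all device names are assumptions):

```shell
# /boot: RAID-1 across all members, so any surviving disk can boot the box
mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1
# Everything else: RAID-10 with one hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 \
      --spare-devices=1 /dev/sd[a-h]2 /dev/sdi2
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
# Install GRUB into every disk's MBR so the BIOS can boot from any member
for d in /dev/sd[a-h]; do grub-install "$d"; done
```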
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
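On the swap question, there are two common approaches: put swap on its own small RAID-1 (so a dying disk can't crash processes that are swapped out), or skip md and give the kernel one swap partition per disk with equal priority, which makes it stripe between them itself. A hedged fstab sketch (device names are assumptions):

```
# Option A: swap on a RAID-1 md device; survives a disk failure
/dev/md2    none    swap    sw          0 0

# Option B: one swap partition per disk with equal priority; the kernel
# stripes across them, but a failed disk can kill swapped-out processes
/dev/sda3   none    swap    sw,pri=1    0 0
/dev/sdb3   none    swap    sw,pri=1    0 0
```

A swap file on the RAID-10 array also works; since it goes through the same array as Option A's mirror, the performance difference is usually small.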
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively if the support is in the kernel, without having to create RAID-0 on top of RAID-1 pairs.
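On growing: as far as I know, the native md RAID-10 personality cannot currently be reshaped with mdadm --grow the way RAID-5 can. One hedged alternative is RAID-1 pairs with LVM on top; the volume group can then be extended by adding another mirrored pair (the md3, vg0 and data names below are made-up placeholders):

```shell
# Create a new RAID-1 pair from two added disks
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdi2 /dev/sdj2
# Fold it into the existing volume group and grow the logical volume
pvcreate /dev/md3
vgextend vg0 /dev/md3
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data
```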
I've never used Windows hosting before, but I have just taken over hosting a site for a new client and the website is written in ASP. One of my developers has installed the website and back-end on a Webfusion shared Windows hosting account. The site runs okay, but there is an admin section that isn't working at the moment, as the script uses something called 'aspcompat'.
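For reference, 'aspcompat' most likely refers to the ASP.NET page directive that runs a page on an STA thread so classic ASP-era COM components still work; the host has to permit it. A minimal sketch of the directive at the top of the failing page:

```
<%@ Page AspCompat="true" Language="VB" %>
```

If the shared host disables AspCompat, the page will throw an error even though the rest of the site runs fine, which would match the symptoms described.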
Are there any hosts on the net with reasonably priced true multi site hosting*?
I'd host something like 5-6 small WordPress and blog sites, with no special diskspace or bandwidth requirements.
I know Site5 offers something like this, but I'd like something with over 80% uptime
(* By this I mean each site should have its own cpanel & the sites should be completely separate, due to the fact that I'm lazy & don't want to set up domain pointers or whatever they are called.)
I am trying to get my own set of IPs from ARIN, and I need to qualify under the multihoming policy. My understanding of this setup is that I have 512 or more IPs in use and multiple routes in from the internet from at least 2 providers. Right now where I am coloing, I just have servers and no routers (only the ones provided by the colocation facility). In order to set up true multihoming for my org, will I need to have my own routers/switches and such to accomplish this?
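Broadly, yes: multihoming means your own ASN and your own router (a hardware router, or a Linux/BSD box running something like Quagga) speaking BGP to both upstreams and announcing your prefix out of each. A hedged bgpd.conf sketch; the ASN, prefix and neighbor addresses are documentation-range placeholders, not real values:

```
router bgp 64496
 network 203.0.113.0/24
 neighbor 192.0.2.1 remote-as 64500
 neighbor 198.51.100.1 remote-as 64501
! Each neighbor is one upstream; if either link dies, the other
! session keeps your prefix reachable.
```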
Recently I saw an advertisement about Surf Speedy Servers [url] and I am interested in one of their VPS servers.
[url]
However when I searched for reviews in WebHostingTalk and other websites, I got only REALLY bad and REALLY good reviews regarding Surf Speedy Servers (mostly at webhostingjury.com).
Can you tell me if you had any good or bad experience with Surf Speedy, and what are their problems?
How often do RAID arrays break? Is RAID worth having if a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted to my system, and in the event of a drive failure just pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am deciding between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
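Worth noting with only 4 bays: if the RAID-5 option keeps a hot spare, the usable capacity comes out the same as RAID-10, so the capacity argument disappears. A quick sanity check (the 500GB drive size is an assumption for illustration):

```shell
# Usable capacity from 4 drive bays, with D GB per drive
D=500
# 3-disk RAID5 plus 1 hot spare: n-1 of the 3 active disks hold data
raid5_spare=$(( (3 - 1) * D ))
# 4-disk RAID10: half the disks hold data, half are mirrors
raid10=$(( (4 / 2) * D ))
echo "RAID5+hotspare: ${raid5_spare} GB usable"
echo "RAID10:         ${raid10} GB usable"
```

Only a 4-disk RAID-5 with no spare would beat RAID-10 on capacity (3 disks' worth), at the cost of running unprotected rebuild windows and slower random writes.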
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for the redundancy). I'm using CentOS.
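It can be done on a live system with the degraded-array trick, though you really want KVM/console access in case the reboot goes wrong. A hedged sketch, assuming the OS lives on /dev/sda1 and the empty second disk is /dev/sdb (device names and the single-partition layout are assumptions):

```shell
# 1. Copy sda's partition table to sdb, then set the type to fd
#    (Linux raid autodetect) on sdb's partitions
sfdisk -d /dev/sda | sfdisk /dev/sdb
# 2. Create the mirror with the in-use disk marked "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# 3. Filesystem on the degraded array, then copy the running system over
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
cp -ax / /mnt
# 4. Point fstab and the bootloader at /dev/md0, install GRUB on both
#    disks, and reboot onto the array
# 5. Once booted from md0, pull the original disk into the mirror
mdadm --add /dev/md0 /dev/sda1
```

Step 4 is the part that bites people remotely: if GRUB or fstab still point at the old disk, the box comes back up off the unmirrored drive.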
Just wondering if there is some 'plugin' for MRTG which will e-mail or send an SMS if certain conditions are true (e.g. bandwidth exceeded on some server/device)?
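Not a plugin as such, but MRTG has built-in threshold directives that run an external program when a target crosses a limit; that program can then call sendmail or an email-to-SMS gateway. A hedged mrtg.cfg fragment (the router1 target name and script paths are made up):

```
ThreshDir: /var/mrtg/thresh
# Fire when inbound traffic on target "router1" exceeds 1,000,000 bytes/s
# (about 8 Mbit/s); units match whatever the target itself measures
ThreshMaxI[router1]: 1000000
ThreshProgI[router1]: /usr/local/bin/bw-alert.sh
# Optionally run a second script when the value drops back under the limit
ThreshProgOKI[router1]: /usr/local/bin/bw-ok.sh
```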
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump up the RAM to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but The Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that possible?). Better ideas in general on the server?