I'm putting together a high-availability cluster for a PHP + MySQL based app to run on a LAN. We're going to use Linux, and cost is a major concern. The app itself doesn't need many resources, as it will only be accessed by 2-3 people at a time, so I'm using the following:
2 identical PCs with:
3 GHz Pentium 4 CPU
1GB RAM
2x 160GB SATA II HDDs set up as RAID 1
10/100 Mbps LAN NICs, on a 100 Mbps 8-port switch
Up to now I have been running MySQL-Max 5.0.15 as a MySQL master-master replication setup, which works fine, but getting it going involved a lot of manual work and downloading the right binaries. Furthermore, I used Linux Heartbeat to do automatic switchover between the two servers and rsync to sync the application files between them. This had been working fine until one of the servers' HDDs failed recently, corrupting both HDDs.
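Roughly, the master-master part came down to a few lines in /etc/my.cnf on each box plus a CHANGE MASTER statement. This is only a sketch of the idea; the IPs, replication user and binlog coordinates below are placeholders:

# on server A (server B mirrors this with server-id = 2 and auto_increment_offset = 2)
cat >> /etc/my.cnf <<'EOF'
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1
EOF

# then each server is pointed at the other and the slave thread started:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='192.168.0.2',
  MASTER_USER='repl', MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98; START SLAVE;"

# a cron entry kept the PHP files in sync (every 5 minutes):
# */5 * * * * rsync -a --delete /var/www/ otherserver:/var/www/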
So, I need a better way of doing this, and want to meet the following requirements:
1. If one HDD fails on either server, the server still needs to be able to run without a problem.
2. Replacing an HDD and rebuilding the RAID array should be easy to manage, preferably over the net.
3. Setting up the cluster should be easy to manage, both for the MySQL DB server and for the files that need to be synced between the two machines.
4. Re-installing a server should be easy to do as well.
For No. 1 I have been thinking of setting up RAID 5 with 4x HDDs - how reliable / safe / redundant is this?
For No. 4 I have been thinking of using something like SystemImager to back up the server once it is set up, but can it recreate the RAID array upon restoration? The MySQL DB & PHP files are being backed up to a removable HDD on a daily basis.
The client is 700 km away, so we can't just drop in to fix things as often as we'd like. Thus redundancy is of the utmost importance. Currently I'm running SuSE 9.3, simply because it's easy enough to tell the client over the phone how to do things with YaST. SuSE 10.1 will be used for the new setup, but I could also use Fedora Core 5, and I have also been thinking of using SME Server 7.0.
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for redundancy). I'm using CentOS.
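From what I've read, the usual trick is to build a degraded mirror on a second, empty disk, copy the running system over, and only then pull the original disk into the array. A rough sketch, assuming /dev/sda is the live disk and /dev/sdb is a blank disk of the same size:

# copy sda's partition table to sdb, then create a RAID-1 with sdb only ("missing" = absent member)
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/                     # copy the live filesystem across
# point /mnt/etc/fstab and the GRUB config at /dev/md0, rebuild the initrd, reboot into the array
mdadm --add /dev/md0 /dev/sda1         # finally add the original disk so the mirror resyncs

The fstab/GRUB/initrd step is the part that bricks remote boxes, so I'd only attempt it with KVM-over-IP or a rescue mode available.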
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump the RAM up to 2 GB for, I think, $12 more, which I'm considering and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks, but The Planet wants $40/month for a RAID controller to do it. I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas in general on the server?
And on cluster02 and cluster03, I did something like this:
- srv01 (synchronize changes)
- cluster01 (synchronize changes)
- cluster03 (standalone)
and:
- srv01 (synchronize changes)
- cluster01 (synchronize changes)
- cluster02 (standalone)
1. So, is that correct?
2. It seems that when I click on 'synchronize all DNS records on all the servers', only cluster01 and srv01 get all the DNS records. Is that normal?
Also, if I add a domain on cluster02 for example, where do I need to add the DNS records? In cluster02's named.conf, or on cluster01/srv01?
I have started to move my websites over to my UK colocation, but I would like a little guidance on what the best solution would be.
I would like to build a server cluster that will handle all of my sites and also provide redundancy, so if one server goes down the other web/SQL/whatever server is used instead.
I have websites ranging from large forums to streaming & download websites.
Should I go for a setup like, for example:
X Web Servers + X Database Servers Connected to X Storage (see link below)
Connecting the web servers in something like a round-robin config, or using a load balancer / other
OR should I set up multiple: Web Server + Database Server + Media Server
OR another config?
Below are my current setups
Current UK Setup (Colocation)
Web Servers:
- Quad Core, 8GB RAM, 250GB HDD, RAID 1
- Quad Core, 8GB RAM, 250GB HDD, RAID 1 (just ordered)
SQL Server: Quad Core, 4GB RAM, 250GB HDD, RAID 10
Storage Server: HP StorageWorks NAS 1200s, 1TB (just ordered) Link: [url]
Current US Setup (Dedicated Servers)
Web Server: Quad Core x2, 8GB RAM, 3TB HDD
SQL Server: Quad Core x2, 8GB RAM, 1TB HDD
Media Servers:
- Quad Core x2, 8GB RAM, 3TB HDD
- Dual Core, 4GB RAM, 3TB HDD
I am going to use H-Sphere. Preferably I want to use Dell 1955 blade servers with Exchange and SharePoint, and because of the blades' low HD capacity I would like to add an HD array, which could be NAS or SAN.
I have 4 dedicated servers at 1and1 and I want to create a server cluster for Windows Media Services to stabilize the server load. I tried to add the second server's details in the first server's Windows Media Services, and it always gives an 'access denied' message.
How can I give permissions to create a server cluster for Windows Media Services? I didn't install any additional firewalls on the servers, and I am using them only for Windows Media Services, so IIS is disabled. My configuration and server details are below...
Windows 2003 Standard Edition 64-bit, Athlon 64 X2 3800+ (2 x 2.0GHz), 1 GB RAM
I'm working on a huge project that will take two months or so to release. The thing is, as I expect, this site is going to grow massively. My question is: how can I handle a lot of traffic and give a lot of space to my users? With load balancing, right? Or a server cluster? How does this work? Where do I get it? What are the prices like?
All info on it is appreciated. I want to start with one from day one, so I can handle the growth of the site once it happens.
I have a small DNS cluster with 4 servers. The problem is that when I want to update a DNS record, one of them doesn't sync; I have to try 6 or 8 times to get that server to sync with all the others. And I'm concerned, because the one that has trouble syncing is my secondary DNS server.
My current servers are part of this mess with Alphared: [url]
What I'm looking for:
Static content main server: average CPU, average RAM, 15-20GB of data tops, but could use a fast drive; needs about 6TB/month of higher-quality bandwidth.
Two-machine cluster for the forums:
The only thing on this will be vBulletin forums. The current database is about 6GB (~7 million posts), averaging about 800 members active per 15 minutes. This isn't for a business, so it all comes out of my pocket; however, after the $#@! with Alphared I do recognize the importance of a good host and I am willing to put money toward that as needed. Still, my goal is in the $600-$800/month range for everything. Is that price range doable? If not, what is a reasonable price for what I'm asking? And can anyone recommend reliable hosts (especially one that can correctly set up the cluster for the forums)?
Hi there, I have a few questions about the best-suited DNS setup for our company.
We have three servers located in Montreal. Two are running web services; one of them is only for DNS (a home server).
The two main servers have WHM/cPanel. The other runs cPanel DNS only.
The main servers have 7 IPs each; the DNS server has only 1.
Let's say we use the domain xxx.com. Right now we have ns1/ns2/ns3/ns4 pointing to server1 and ns5/ns6/ns7 pointing to server2.
We have no nameserver pointing to our DNS-only server for now. I'd like to avoid running the DNS service on all of them; maybe having two slaves and one master would be fine. Question: what would be the best-suited DNS setup with my current config for the best response time and fast replication?
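What I had in mind for the two-slaves-one-master idea is roughly the following in named.conf, with the DNS-only box as master and the two cPanel servers as slaves. This is just a sketch; the IPs are made up, and cPanel normally manages its own zone files, so it would have to fit around that:

# on the DNS-only box (master)
cat >> /etc/named.conf <<'EOF'
zone "xxx.com" {
    type master;
    file "/var/named/xxx.com.db";
    also-notify    { 192.0.2.11; 192.0.2.12; };   # the two cPanel servers
    allow-transfer { 192.0.2.11; 192.0.2.12; };
};
EOF

# on each cPanel server (slave)
cat >> /etc/named.conf <<'EOF'
zone "xxx.com" {
    type slave;
    file "/var/named/slaves/xxx.com.db";
    masters { 192.0.2.10; };                      # the DNS-only box
};
EOF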
Currently, I run cPanel servers for my clients and also host my company website on them. However, I now wish to place my company website on a separate system running ISP Manager, and in order to connect the servers to my DNS I presume that I would have to use DNS clustering. Here is my question: how would I create a DNS cluster between cPanel and ISP Manager? Does anyone know how?
I work in a DC and am looking for a better way to deal with clients who have multiple servers hosted with us.
Here's the scenario. A client buys a server or two at the beginning, with a HW firewall, or the servers are clustered together and require their own switch. Down the road, they like what we do and want to buy more servers.
However, we've been selling other servers and the only way we would be able to accommodate them would be to run a cross connect to another cab with their new server in it. Hopefully you can see where I am going with this.
This can keep happening multiple times and with multiple clients, and eventually you end up with a spider web of cable everywhere.
My thought for doing it right, although it's more work, would be to schedule downtime with a client and migrate all the HW to a new cabinet where they can grow.
We've also been tossing around ideas like getting projected growth figures from clients and setting aside space for them, or having dedicated cluster cabinets.
We have a quad core with 8GB RAM dedicated to one website, but still the load goes through the roof and crashes the server. The site has one vB forum (with minimal hacks) and a custom CMS for the front pages which uses a minimum of queries.
I'm not an expert, but it would seem that MySQL is the one that crashes. When the site crashes and someone browses it, you can see the "could not connect to MySQL through socket" error message. At that point the load just skyrockets (it's been up to 400-500), and then the server crashes and has to be rebooted.
* Is our only option to make some sort of cluster?
* Could upgrading to e.g. Apache 2.x or installing an opcode cacher like XCache help?
* Is it possible to run the databases from RAM to save I/O, so it only writes to the HDD on update/insert/delete etc.? (rough sketch of that idea below)
I have root access, so if you need me to run more commands to look up statistics, just let me know.
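On the RAM idea, from what I've read you can get most of the way there just by making MySQL's buffers big enough that reads rarely touch the disk. A rough sketch for my.cnf, assuming the vBulletin tables are MyISAM (the numbers are guesses for an 8GB box; for InnoDB you'd raise innodb_buffer_pool_size instead):

cat >> /etc/my.cnf <<'EOF'
[mysqld]
key_buffer_size     = 2G      # MyISAM index cache
query_cache_size    = 64M     # cache repeated SELECTs
query_cache_type    = 1
tmp_table_size      = 128M    # keep temporary tables in memory
max_heap_table_size = 128M
EOF
/etc/init.d/mysql restart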
I've added a new Apache/MySQL node to our PPA cluster, and we'd like to be able to host a single subscription on that node for the Apache webhosting and MySQL databases for that subscription, while allowing that subscription to utilize the rest of the shared resources in the cluster, such as mail, DNS, etc. I'm looking for the best way to go about doing that.
I guess one simple way would be to set the new server to 'ready to provide' and the rest to not ready, add the "Linux shared hosting" subscription to the customer account, then switch the new server to not ready and the rest back to ready. That works fine, I suppose, in a small cluster, but there has to be a better way. Is there a way to craft a service template that restricts subscribers to a particular node for Apache + MySQL and leaves them on the rest of the "regular" nodes in the cluster for other services?
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux (CentOS 5.1), so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast and reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does this achieve? Surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough, with some drive information held in its memory, if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the HW RAID controller, as well as every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I would prefer 8 HDDs (or actually 9) over 6, but I am not sure whether their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any drives other than the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which has to be on RAID 1 or no RAID, I believe, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me, but they will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that then needs to be converted to RAID 10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from software RAID 5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url]), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID 1 partition spread across all of the drives in the otherwise RAID 10 array?
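From the articles I've read, the plan would roughly look like this once each disk carries a small first partition and a large second one (device names are assumed, the hot spare is left out, and this would be run from a rescue system or after moving the live install off the first disk with the usual degraded-mirror trick):

# /boot as RAID 1 across all the small partitions, with old-style metadata so GRUB
# legacy sees each member as a plain ext3 filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=8 --metadata=0.90 /dev/sd[a-h]1
# everything else as RAID 10 across the large partitions
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1
# install GRUB into the MBR of every disk so the box can still boot if the first one dies
for d in a b c d e f g h; do grub-install /dev/sd$d; done

With the 9th drive, adding --spare-devices=1 and one more partition to each mdadm line would give the hot spare.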
What about swap? Should I create a 4-8GB RAID 1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID 10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID 10 array a bad idea, performance-wise?
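The mirrored-swap variant would just be another small array, e.g. (assuming a spare third partition on two of the disks; which partitions to use is up for grabs):

mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2
echo '/dev/md2  swap  swap  defaults  0 0' >> /etc/fstab

The point of putting swap on RAID 1 rather than on raw partitions is that the box keeps running if a disk dies while something is swapped out.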
Is it possible to grow a RAID 10 array in a way similar to growing a RAID 5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation doesn't actually even mention RAID 10, even though, from what I know, it does support it natively (without having to create RAID 0 on top of RAID 1 pairs) if the support is in the kernel.
How often do RAID arrays break? Is it worth having RAID in case a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted in my system, and in the event of a system failure just pop in a new hard drive, reload the OS, and then reload all my backups.