Do You Get Hardware RAID When Ordering A Dedicated Server
Apr 23, 2009
When you order a dedicated server, do you opt in for the hardware RAID? Why or why not?
When ordering a dedicated server package, what else is there to expect as far as running it or pointing your domain name to the server's IP? What else is needed to, let's say, get supercoolryan.com up and running?
Would the hosting company provide this setup for me or do I have to do it myself?
I might order Windows Server 2003 and install Apache.
I need to colocate a server, and then order two E1 links to it (to be more precise, it's one link and one backup). Both the colocation DC and the links' destination will be in the NY metro area. The links will be delivered via MPLS.
Please, could you help me to clarify the following points:
1) How much, very roughly, should an MPLS E1(+backup) cost in NY metro?
2) I understand that there are usually several quality-of-service classes in MPLS (e.g., realtime, guaranteed, best effort, etc). Do I understand it right that I can have, say, 20% realtime and 80% best effort, and then about 400 kbps (20% of an E1's 2.048 Mbps) would be reserved for the lowest-latency traffic on the E1 line?
3) What kind of quality guarantees (e.g., max/avg latency) are usually given for T1/E1 MPLS lines?
4) How expensive is "realtime" quality class compared to "guaranteed"?
And last, but not least:
5) How is this E1-via-MPLS link delivered to my server? (I think this is what's called the "demarcation point" or "demarc".) Does it terminate somewhere in the DC building? And how does it go from there? Do I get a copper Ethernet plug at the end?
6) Do I need to pay an enormous cross-connect fee to the DC (in addition to the E1 fees to the line provider), or are these things normally much cheaper than intra-DC cross-connects?
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6; would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Because surely if the power dies the hard drives and motherboard can't run off this little battery, or does it just help the controller stay alive long enough, with some hard drive information in its memory, if the power goes out during a rebuild?
So I've just got a server with 2x SATA RAID 1 (OS, cPanel and everything on here) and 4x SCSI RAID 10 (clean).
Which one do you guys think will give the best performance:
1. Move mysql only to 4xSCSI raid 10
2. Move mysql and home folder to 4xSCSI raid 10
Quote:
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
[url]
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
Is it possible to turn a non-RAIDed setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but doing it more for the redundancy). I'm using CentOS.
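For what it's worth, the usual approach people describe for this is the degraded-mirror trick. A minimal sketch, assuming the OS lives on /dev/sda, the new empty disk is /dev/sdb, and a tested backup comes first (device names and filesystem are illustrative):
# clone the partition table to the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# create a degraded RAID 1 using only the new disk; "missing"
# reserves the slot for the current OS disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# put a filesystem on it and copy the running system across
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt
rsync -aHx / /mnt/
# point /mnt/etc/fstab and the GRUB config at /dev/md0 (and install
# GRUB on the new disk), reboot into the degraded array, then absorb
# the original disk into the mirror:
mdadm --add /dev/md0 /dev/sda1
It can be done remotely over SSH, but a mistake in the fstab/GRUB step leaves the box unbootable, so it's much safer with KVM-over-IP or a rescue console available.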
Note before reading: apologies everyone, but I cannot link to URLs or use BB code, as I had a post count of less than 5 when I posted this review.
Hello everyone,
A while back, I posted on here to see if anyone could offer a good deal for an i7 920, or knew of a place that could. After finding no success due to the i7 being too new, I chose Kimsufi, a section of OVH.
Following this you will find my lengthy review of OVH/Kimsufi. I hope members or guests of this forum find it useful in their search for the best provider, or worst provider, out there.
Specifications from kimsufi.co.uk (a section of ovh.co.uk):
- Intel Core i7 920 x2 (Quad Core) 2.66GHz+
- 6GB DDR3 RAM
- 250GB HDD SATA2
- 1GBPS Network Connection
- Unmetered Bandwidth
- FTP Backup Space
After ordering on the 24th of December, knowing they only had 72h servers in stock, I couldn't wait to use the new i7 server! So, on Monday the 29th I queried when the server would be ready, hoping it would be before the new year, as Monday, Tuesday and Wednesday are all business days, adding up to the 72-hour setup time.
Later that day, I received the following response via email:
"I can see that the order is being processed, however with the kimsufi's orders can take up to 72 hours to be delivered (not including the bank holidays). If you can, please be patient and it should be with you before the New Year bank holiday."
So basically they were saying it should be up by Wednesday. I thought: great! However, it would seem this was not true after all. I rang up on Wednesday, by phone this time, the first call at lunchtime asking about my order; the guy told me he would chase it up and let me know via email. Three hours later, nothing. So I rang back and got the same guy I spoke to earlier; he said he had no response to the enquiries he was making and would get onto the matter now. About 20 minutes later he rang me back to explain that the order appeared to have been stuck in processing and would not get processed until Monday of next week at the earliest. A moment later, possibly after I made clear I wasn't too pleased, he told me to hold the line, then told me Friday of this week would be the earliest. After phoning back later and choosing option 2 rather than option 1, I spoke to someone called Sophie. We had quite a lengthy conversation, and in the end it resulted in the situation being due to be resolved on Friday, when she would contact me regarding it.
On Friday morning I received a phone call from Sophie stating that the distribution I had chosen during the ordering process (VMware Server) was encountering problems and would be delayed until Monday, so I chose to have CentOS 5.0 64-bit without VMware Server instead. She told me this would be done within an hour and I would receive login details, and in the meantime she would keep an eye on the progress every so often. Almost 3 hours went by and nothing happened. I then emailed customerservices to try and get in contact with her via email, but after two emails within a span of roughly two hours, I had no response. I then got a colleague to phone them, as I had given up with this fight for a server order; after talking to Sophie, she told them she would ring my mobile phone shortly to let me know what had happened. Another hour went by with no phone call or email from OVH or Sophie herself. Over the course of that afternoon I had several colleagues hassle OVH regarding my order, to no success.
As of now, it would seem my server hostname pings and responds to SSH, where it did not a while ago (as if it hadn't been set up yet), but unfortunately some wise person hasn't bothered to send out the login information to me, or to change the pending state on my OVH account to active. After phoning the French phone number, which is available until 11pm, they told me they could not resolve this issue and therefore I must contact their commercial support number at 9am tomorrow morning. I have high doubts that they will get this resolved tomorrow after all the running around I have had to do and have been given by OVH support.
To say the least, I am not amused by this disgusting order process. I have never encountered anything so shoddily put together in my entire life. The worst part of it is, by Wednesday 23:58 they will have technically exceeded the 72h setup time the site stated when I ordered on the 24th, as 72h is 3 business days (Monday, Tuesday, Wednesday), and no compensation has been offered other than a simple apology and a 'happy new year'. I'll give them a happy new year for sure...
Due to this experience, I would NOT recommend that anyone looking for dedicated servers order from OVH/Kimsufi, unless you are willing to put up with a possibly bad ordering process as I did, or worse, having to chase up support several times on the matter.
Maybe commercial support will fix the problems tomorrow, but this is my review of OVH/Kimsufi as of the date I posted it.
Support requests should take 30 minutes or less on average, and at no time longer than 1 hour.
RAID 1, SATA, size does not matter.
Preferably a Core2Duo.
What really are the chances of a drive failure in any given year?
I worked in corporate IT departments for 15 years and had RAID on everything even though I rarely saw a drive failure. Out of hundreds of drives one might fail in any given year.
It does look like some folks here have experienced drive failures on dedicated boxes though, so my dilemma is this: if both cost the same, am I better off having a box with no RAID at a good host like The Planet, or a box WITH RAID at one of the value hosts?
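For a rough feel of the numbers, purely as a back-of-the-envelope sketch: published studies (e.g. Google's 2007 disk paper) put annualized failure rates somewhere around 2-8% depending on drive age. At a 3% AFR, one drive survives the year with probability 0.97, so a 4-drive box sees at least one failure with probability 1 - 0.97^4, roughly 11% per year; across hundreds of drives, a few failures a year become the expected case rather than the exception.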
I've been talking to the Planet about trading in my four and a half year old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4 GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1 gig of RAM, 2x250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 ips for $162.
Not too bad. I could bump up the RAM to 2 GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks. But the Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by the Planet, unless I can figure out how to do it after installation (is that possible?). Any better ideas on the server in general?
I could try the software RAID 5 on Hetzner's EQ9 server.
Does anyone here have experience with how fast hardware RAID 5 is compared to software RAID 5?
The i7-975 should have enough power to compute the redundancy on the fly, so there should be minimal impact on performance. But I have no idea.
I want to run the server under Ubuntu 8.04 LTS x64.
With virtualisation like VMware running on it, the I/O load could get really high.
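Concrete numbers depend heavily on the workload, so it's worth benchmarking once the box is up. A minimal sketch, with an illustrative mount point:
# rough sequential write throughput on the array, bypassing the page cache
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048 oflag=direct
# at boot the kernel benchmarks its XOR/parity routines and logs the winner
dmesg | grep -i xor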
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3ware RAID controller, but after over a month of waiting I was told that the HW RAID controller, as well as any other 3ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server, and that they simply got a reply from FS that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different web host or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but there are some questions still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4Ghz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSe, DirectAdmin
* I'd prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many HDDs; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition, which has to be on RAID 1 or no RAID I believe, plus one drive as a hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD, so that I'll have a functional system that needs to be upgraded to RAID 10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID 5/10, so I will have to mount the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID 5/10 with mdadm (e.g. [url] ), but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID 1 partition mirrored across all of the drives in the otherwise RAID 10 array?
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID 1 swap partition across the disks, or swap to a file on the main RAID 10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID 10 array a bad idea, performance-wise?
Is it possible to grow a RAID 10 array in a way similar to growing a RAID 5 array with mdadm (using two extra drives instead of one, of course)? mdadm's documentation doesn't actually even mention RAID 10, despite supporting it natively (without having to create RAID 0 on top of RAID 1 pairs) if the support is in the kernel, from what I know.
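For reference, a minimal sketch of how the creation step could look for the layout described above, assuming all nine disks carry a small first partition for /boot and a large second partition (device names, sizes and the metadata choice are illustrative):
# /boot as RAID 1 mirrored across all nine disks, with old 0.90 metadata
# (superblock at the end) so the bootloader can read a member like a
# plain partition
mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=9 /dev/sd[a-i]1
# everything else as RAID 10 over eight disks plus one hot spare
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 /dev/sd[a-i]2
# record the arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm.conf
Swap can then simply live on the RAID 10 array (or its own small RAID 1); putting swap on plain disks means a dead disk can take down processes that have pages swapped out to it.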
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may just be a better option to have a backup drive mounted to my system, and in the event of a system failure just pop in a new hard drive, reload the OS, and then reload all my backups.
I have a new server and it is rather slow during RAID 1 recovery after the system was installed:
CPU: Intel Core2Duo E5200 Dual Core, 2.5Ghz, 2MB Cache, 800Mhz FSB
Memory: 4GB DDR RAM
Hard Disk 1: 500GB SATA-2 16MB Cache
Hard Disk 2: 500GB SATA-2 16MB Cache
root@server [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
256896 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
md2 : active raid1 sdb4[2] sda4[0]
480608448 blocks [2/1] [U_]
[=======>.............] recovery = 36.7% (176477376/480608448) finish=1437.6min speed=3445K/sec
The sync speed is just 3.4MB/second and the total recovery will take more than 40 hours.
Also, the server load is very high (nobody uses it):
root@server [~]# top
top - 07:00:14 up 16:55, 1 user, load average: 1.88, 1.41, 1.34
Tasks: 120 total, 1 running, 119 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4148632k total, 747768k used, 3400864k free, 17508k buffers
Swap: 5421928k total, 0k used, 5421928k free, 569252k cached
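One common culprit here is the md resync throttle rather than the hardware. A minimal sketch of checking and raising it (the 50000 figure is illustrative, in KB/s per device):
# current floor and ceiling for resync speed
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# raise the floor so recovery isn't throttled to a crawl
echo 50000 > /proc/sys/dev/raid/speed_limit_min
# watch it take effect
cat /proc/mdstat
The load average itself is often just the resync kernel thread sitting in uninterruptible I/O wait, not real CPU usage, which matches the mostly idle CPU figures above.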
I want to take some data from a RAID disk (taken from a RAID-1 system). I've put it into a new system already, but this system doesn't have any RAID.
When viewing "fdisk -l", it says /dev/sdb doesn't contain a valid partition table. Is there any way I can mount it now? I am on a CentOS 4 box.
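A minimal sketch of what usually works here, assuming the md superblock on the member is intact (device and partition names are illustrative):
# check for an md superblock on the whole disk and on any partition
mdadm --examine /dev/sdb
mdadm --examine /dev/sdb1
# assemble a degraded RAID 1 from the lone member and mount it
mdadm --assemble /dev/md0 /dev/sdb1 --run
mount /dev/md0 /mnt
# with old 0.90 metadata (superblock at the end of the device), a
# direct read-only mount of the member often works as well
mount -o ro /dev/sdb1 /mnt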
My server's drives are configured with RAID-1.
How can I check whether my server is configured with 3ware (hardware) or software RAID?
Also, please advise how I can monitor the RAID configuration to see whether the RAID is working fine or not.
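A minimal sketch of how one could check, assuming a Linux box (tw_cli is 3ware's own CLI and only exists if their tools are installed):
# software RAID arrays show up here; with hardware RAID this is empty
cat /proc/mdstat
# a 3ware controller will appear on the PCI bus
lspci | grep -i 3ware
# software RAID health and ongoing monitoring
mdadm --detail /dev/md0
mdadm --monitor --scan --mail=root --daemonise
# 3ware hardware RAID unit status via the vendor CLI
tw_cli /c0 show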
Just curious what your thoughts are on performance:
2 SCSI Drives 10k w/RAID 1
or
4 SATA 10k w/RAID 10
Prices are not too different with 4 drives just being a tad more.
I'm wondering how well software RAID can perform and how it compares to hardware RAID. How does software RAID actually work, and is it worth it?
How should I go about setting up software RAID if I were going to? Or would you recommend just using hardware RAID instead?
Which do you guys recommend of the following?
4x 73GB 15,000rpm SAS drives in a RAID 10
or
4x 73GB 15,000rpm SAS drives in a RAID 5 w/ online backup
I was building a 1U server a month or so ago to colocate. After screwing up the purchase of the RAID card (got a PCI 64 or something), it's been sitting next to me not doing much. I need to go ahead and find a good RAID card and get this colocated.
I have literally spent hours on Newegg, Amazon and eBay and found nothing that really jumps out at me. When people were helping with the build before, Adaptec was recommended, but that card is now deactivated on Newegg, so I am looking again.
I need to put together a high-availability cluster for a PHP + MySQL based app to run on a LAN. We're going to use Linux, and cost is a major concern. The app itself doesn't use or need too many resources, as it will only be accessed by 2 or 3 people at a time, so I'm using the following:
2 identical PC's with:
3Ghz PIV CPU
1GB RAM
2x SATAII 160GB HDD space setup as RAID 1
10/100 Mbps LAN NICs, on a 100Mbps 8-port switch
Up to now I have been running MySQL-Max 5.0.15 as a MySQL master-master replication setup, which works fine, but the setup involved a lot of manual work and downloading the right binaries.
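For comparison, the core of a master-master setup is only a few my.cnf lines plus a CHANGE MASTER TO statement on each box. A minimal sketch for one of the two servers (IDs and offsets are illustrative; the second box mirrors them):
# /etc/my.cnf on box A (box B uses server-id = 2, offset = 2)
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1
The staggered auto-increment settings keep the two masters from handing out colliding primary keys.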
Furthermore, I used Linux Heartbeat to do auto switchover between the two servers and rsync to sync the application files between the two. This was working fine until one of the servers' HDDs failed recently, corrupting both HDDs.
So, I need a better way of doing this, and want to meet the following requirements:
1. If 1 HDD fails on either server, the server still needs to be able to run without a problem.
2. Replacing a HDD and rebuilding the RAID array should be easy to manage, preferably over the net.
3. Setting up a cluster should be easy to manage, both for the MySQL DB server and for the files that need to be synced between the two machines.
4. Re-installing the server should be easy to do as well.
For No. 1 I have been thinking of setting up RAID 5 with 4x HDDs; how reliable / safe / redundant is this?
For No. 4 I have been thinking of using something like sysimager to back up the server once it's set up, but can it recreate the RAID array upon restoration? The MySQL DB and PHP files are being backed up to a removable HDD on a daily basis.
The client is 700km away, so we can't just drop in to fix things as often as we'd like; thus redundancy is of utter importance. Currently I'm running SUSE 9.3, simply due to the fact that it's easy enough to tell the client over the phone how to do things with YaST. SUSE 10.1 will be used for the new setup, but I could also use Fedora Core 5, and have also been thinking of using SME Server 7.0.
I'm about to purchase a new Xeon dedicated server, but I'm unsure whether to opt for a RAID 1 configuration on the two 320GB SATA drives. Can this decrease performance in any way, or only increase it? I run an extremely resource-hungry site utilising audio/video encoding and decoding, running vBulletin, so anything that might negatively affect performance could have a big impact.
Secondly, I'm unsure whether to opt for CentOS 5 over 4.5. I'll be using cPanel, with which I've heard there have been problems on the latest release of CentOS.
And finally, is it worth upgrading to Apache 2 and MySQL 5 (which I know are installed by default on CentOS 5)? The reason I ask is that I've heard of server load problems after upgrading to these latest versions on high-traffic sites.
I'm building some Xeon Nehalem servers for shared hosting with cPanel. The servers will be:
Dell PowerEdge R410
Xeon Nehalem E5502
12GB DDR3 RAM
3ware raid controller
But for shared hosting, is it worth having RAID-10, or would RAID-1 be enough?
We have some Xeon E3xxx servers with RAID-1 hosting more than 1000 accounts, and we haven't had any I/O or load problems so far.
I have a dedicated server with an "Intel(R) 82801ER SATA RAID Controller", and I cannot find information on this RAID. The 80 GB hard disk is about 4 years old; if one hard disk fails, I wonder if I can swap in a new one of bigger capacity and whether it will auto-rebuild?
I haven't yet broken into the realms of dedicateds, although I have a decent VPS and am anticipating the need to get a dedi in the future.
Hence I'm briefly wondering why exactly RAID (insert some random number?) is recommended. I know it has something to do with surviving hard drive failure, although would an efficient backup system be a decent alternative with regard to cost?
I'm running a busy web hosting server with about 300 domains and no RAID mirroring, and I just installed a new HD in it.
Is it possible to make a RAID 1 via software?
I've been using it a bit at home, and though I've always bashed software RAID, I've got to say it's quite impressive and very manageable. I could see it becoming super easy to deploy cheap RAID without paying extra for a RAID controller, with the right custom software.
I have several servers that have a HighPoint RocketRAID 1520 SATA RAID card. I have recently discovered that this card requires drivers in the OS to actually take advantage of the RAID functionality.
Well, it seems the drivers were never installed. So I essentially do not currently have a RAID setup, even though I have a pair of HDs in each of the servers affected. Now, I do have a Windows server that automatically loaded the driver, but the Linux boxes do not seem to have it. The instructions for this card state that the driver is installed at the time of OS installation. I bought these servers when I acquired a host a couple of years ago, and obviously they weren't set up properly.
So here's my question: I am thinking of trying to install this RAID driver on the affected servers now. They have been running for a couple of years this way and I don't want to screw something up. Is this something I can do at this point without going back and reinstalling the OS, etc.? Can the RAID driver be installed after the fact like I'm thinking of doing?
Should I proceed with trying to get this RocketRAID card to work, or would I be better off buying hardware RAID cards that are configured outside of the OS? And if I should go with hardware-level RAID, what card do you all recommend? I'm running RHEL 3 on these servers and they are P4s with 1GB of RAM.
I have a Windows 2000 server running RAID 1 software RAID. Recently, one hard disk in the mirror crashed, and I replaced it, trying to rebuild the mirror. The problem is the existing hard disk has a few bad blocks; even after chkdsk it still failed to rebuild the software RAID, with the error message attributing it to bad blocks on the existing hard disk.