Doubts About RAID 1 And cPanel
Apr 11, 2007
I've ordered a RAID 1 server with 2x 500GB SATA drives and a 3ware RAID card. I have cPanel installed.
This is the first time I'm using a RAID config, so I have some doubts:
1. How can I check if RAID is running on the server? How can I be sure the RAID array is correctly configured?
2. I'm only seeing one HDD; is this correct?
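On the second question: with hardware RAID 1 it is normal to see a single disk, since the card presents the mirrored pair to the OS as one logical drive. One way to check the array status is the 3ware CLI, if it is installed (a sketch only; it assumes the card is controller 0):

root@server [~]# tw_cli info c0
# a healthy mirror shows the unit as RAID-1 with Status OK;
# REBUILDING or DEGRADED means a drive is resyncing or has failed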
View 4 Replies
Oct 30, 2008
I am going to implement my first SSL certificate, so I have many doubts about it. I am on a shared host. I created the signing request and got my SSL certificate verified and issued. However, in my WHM there is no option to install it. I asked my host and they said they would install it for us, and asked me for the cert and key...
That's OK, but they said I need a dedicated IP assigned to the domain for which I need the SSL certificate.
So, is it not possible to install SSL on a shared IP, bound to the domain I want?
Please give your input; I will also be happy if anyone wants to share common misconceptions about SSL certificates that a noob can have (based on your personal experience).
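One misconception worth checking before handing anything over: the cert and key must actually be a matching pair. A quick sanity check with standard openssl commands (the file names here are made up):

openssl x509 -noout -modulus -in mydomain.crt | openssl md5
openssl rsa -noout -modulus -in mydomain.key | openssl md5
# the two hashes must be identical, or the pair will not install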
View 9 Replies
View Related
Mar 7, 2007
Quote:
Today we are going to conduct a detailed study of RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new detailed article.
[url]
View 0 Replies
View Related
Feb 17, 2007
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. The most recent project is sure to spark interest, exploring how and under what circumstances hard drives work - or fail.
There is a rule of thumb for replacing hard drives, which taught customers to move data from one drive to another at least every five years. But the mechanical nature of hard drives makes these mass storage devices prone to error, and some drives may fail long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity being the most prominent.
A Google study presented at the Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive, and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, in large numbers, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said that it is collecting "vital information" about all of its systems every few minutes and stores the data for further analysis. For example, this information includes environmental factors (such as temperature), activity levels and SMART parameters (Self-Monitoring, Analysis and Reporting Technology) that are commonly considered to be good indicators of the health of disk drives.
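For reference, the SMART counters the study leans on can be read on any ordinary Linux box with smartmontools (a sketch; /dev/sda is just an example device):

smartctl -A /dev/sda
# attribute 5 (Reallocated_Sector_Ct) and 197 (Current_Pending_Sector)
# are among the counters the study ties to elevated failure rates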
In general, Google's hard drive population saw a failure rate that was increasing with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in year 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have an effect on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. Significant scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without scan errors. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, a number that results from the remapping of faulty sectors to new physical sectors, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of the drives survive past eight months after the first reallocation.
Google discovered similar effects on hard drives in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any one of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation count, offline reallocation and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. The researchers also pointed to a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows increased probabilities of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict failures of hard drives: "Powerful predictive models need to make use of signals beyond those provided by SMART."
View 6 Replies
View Related
Mar 24, 2008
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast or reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or one day decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? Also, it seems to have a battery module available for it; what does this achieve? Surely if the power dies the hard drives and motherboard can't run off this little battery - or does it just keep the controller alive long enough, with some hard drive information in its memory, if the power goes out during a rebuild?
View 14 Replies
View Related
Sep 17, 2009
I could try the software RAID 5 on Hetzner's EQ9 server.
Does anyone here have experience with how fast hardware RAID 5 is compared to software RAID 5?
The i7-975 should have enough power to compute the redundancy on the fly, so there should be minimal impact on performance. But I have no idea.
I want to run the server under Ubuntu 8.04 LTS x64.
With virtualization like VMware on it, the I/O load could get really high.
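One crude way to compare would be a sequential write test on each setup (a sketch; the file location, block size and count are arbitrary):

dd if=/dev/zero of=/tmp/testfile bs=1M count=4096 oflag=direct
# oflag=direct bypasses the page cache so the array itself is measured;
# on software RAID 5, watch the parity CPU cost in top while it runs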
View 14 Replies
View Related
Jan 14, 2008
So I've just got a server with 2x SATA RAID 1 (OS, cPanel and everything on here) and 4x SCSI RAID 10 (clean).
Which setup do you guys think will give the best performance:
1. Move MySQL only to the 4x SCSI RAID 10
2. Move MySQL and the home folder to the 4x SCSI RAID 10
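Either way, the MySQL move itself is straightforward. A rough sketch, assuming the RAID 10 array is mounted at /scsi (the mount point and paths here are hypothetical):

/etc/init.d/mysql stop
rsync -a /var/lib/mysql/ /scsi/mysql/
# point datadir=/scsi/mysql in /etc/my.cnf, then:
/etc/init.d/mysql start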
View 0 Replies
View Related
Jul 8, 2007
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3ware RAID controller, but after over a month of waiting I was told that the controller - as well as any other 3ware controller they tried - does not work with the Fujitsu-Siemens motherboard used in the server, and that FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately... scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google but there are some questions unanswered still. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I'd prefer 8 HDDs (or actually 9) over 6, but I am not sure if their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be set up as a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
What about swap? Should I create a 4-8GB (I plan to upgrade the server RAM to 4GB in the near future) RAID-1 swap partition on each of the disks, or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea, performance-wise?
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm man page barely mentions RAID-10, despite mdadm supporting it natively - without having to create RAID-0 on top of RAID-1 pairs - if the support is in the kernel, from what I know.
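For the /boot question, a RAID-1 spread over a small first partition on every drive is the usual trick, since GRUB can read a RAID-1 member as a plain filesystem. A minimal sketch with mdadm, assuming 8 drives sda-sdh, each partitioned with a small first partition and a large second one (the device names are illustrative):

mdadm --create /dev/md0 --level=1 --raid-devices=8 /dev/sd[a-h]1   # /boot
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2  # everything else
# install GRUB into the MBR of each drive so the box can boot
# no matter which disk the BIOS picks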
View 8 Replies
View Related
Feb 25, 2009
How often do RAID arrays break? Is it worth having RAID in case a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted to my system, and in the event of a drive failure just pop in a new hard drive, reload the OS, and then reload all my backups.
View 14 Replies
View Related
May 20, 2009
I have a new server and it is rather slow during RAID 1 recovery after the system was installed.
CPU: Intel Core2Duo E5200 Dual Core, 2.5Ghz, 2MB Cache, 800Mhz FSB
Memory: 4GB DDR RAM
Hard Disk 1: 500GB SATA-2 16MB Cache
Hard Disk 2: 500GB SATA-2 16MB Cache
root@server [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
256896 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
2096384 blocks [2/2] [UU]
md2 : active raid1 sdb4[2] sda4[0]
480608448 blocks [2/1] [U_]
[=======>.............] recovery = 36.7% (176477376/480608448) finish=1437.6min speed=3445K/sec
The sync speed is just 3.4MB/second, and the total time needed is more than 40 hours.
Also, the server load is very high (nobody is using it):
root@server [~]# top
top - 07:00:14 up 16:55, 1 user, load average: 1.88, 1.41, 1.34
Tasks: 120 total, 1 running, 119 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 4148632k total, 747768k used, 3400864k free, 17508k buffers
Swap: 5421928k total, 0k used, 5421928k free, 569252k cached
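The near-idle CPU alongside a load average of ~1.9 suggests the load is just the md resync threads being counted, not real contention. The crawl itself is usually the kernel's resync throttle, whose floor defaults to a very low value; raising it can be tried like this (50000 KB/s is just an example figure):

root@server [~]# cat /proc/sys/dev/raid/speed_limit_min
1000
root@server [~]# echo 50000 > /proc/sys/dev/raid/speed_limit_min
# watch /proc/mdstat afterwards - the recovery speed should climb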
View 8 Replies
View Related
Oct 22, 2009
I am in the process of restructuring the infrastructure on our servers. I am thinking of using either RAID 5 (with 1 hot spare) or RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you guys go for on a shared hosting server?
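Worth running the numbers first: with, say, 4x 1TB drives, RAID 10 yields 2TB usable, and RAID 5 over three drives plus a hot spare also yields 2TB; only RAID 5 over all four drives with no spare pulls ahead, at 3TB. So with the hot spare included, the capacity advantage of RAID 5 disappears.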
View 14 Replies
View Related
Dec 23, 2008
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mainly for the redundancy). I'm using CentOS.
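It can be done remotely, but it is nerve-wracking on the OS drive. The usual approach (a sketch only, assuming the live disk is /dev/sda and a blank second disk /dev/sdb has been added) is to build a degraded mirror on the new disk, copy everything over, boot from it, then absorb the original:

sfdisk -d /dev/sda | sfdisk /dev/sdb    # clone the partition table
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# copy the filesystem onto md0, point fstab and GRUB at it, reboot,
# then pull the old partition into the mirror:
mdadm --add /dev/md0 /dev/sda1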
View 12 Replies
View Related
May 22, 2008
I want to take some data from a RAID disk (taken from a RAID-1 system). I've already put it into a new system, but this system doesn't have any RAID.
When viewing "fdisk -l", it says /dev/sdb doesn't contain a valid partition table. Is there any way I can mount it now? I am on a CentOS 4 box.
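A sketch of what might work, assuming the md superblock on the disk is intact (adjust device names to taste):

mdadm --examine /dev/sdb1                  # is there a RAID superblock?
mdadm --assemble --run /dev/md0 /dev/sdb1  # assemble it degraded
mount /dev/md0 /mnt

If the array was built on the whole disk rather than a partition, try /dev/sdb in place of /dev/sdb1 - that would also explain why fdisk sees no partition table.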
View 2 Replies
View Related
Jul 13, 2009
I need to set up a server for cPanel.
Basically I need RAID 1 (2 disks) + 1 backup hard disk for cPanel.
View 4 Replies
View Related
Mar 24, 2009
My server's drives are configured with RAID-1.
How can I check whether my server is configured with a 3ware hardware RAID or software RAID?
Also, please advise how I can monitor the RAID configuration to confirm that the array is working fine.
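A rough way to tell the two apart from the shell (a sketch; tw_cli is 3ware's own CLI and may need installing first):

lspci | grep -i raid       # a 3ware card shows up here if present
cat /proc/mdstat           # software (md) arrays are listed here, if any
mdadm --detail /dev/md0    # health of a software array
tw_cli info c0             # unit status on a 3ware controller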
View 8 Replies
View Related
Jul 11, 2008
I've been talking to The Planet about trading in my four-and-a-half-year-old "SuperCeleron" (from the old ServerMatrix days) Celeron 2.4GHz system for something new. As part of their current promotions, I've configured a system that looks decent:
Xeon 3040, 1GB of RAM, 2x 250GB hard disks, RHEL 5, cPanel+Fantastico, and 10 IPs for $162.
Not too bad. I could bump up the RAM to 2GB for, I think, $12 more, which I'm thinking about and wouldn't mind some thoughts on. But the thing that has me really confused is RAID. I like the idea of doing a RAID 1 setup with those two hard disks. But The Planet wants $40/month for a RAID controller to do it, and I really don't want to go over $200 a month!
Any thoughts on alternative redundancy strategies that might avoid that cost? Software RAID does not seem to be offered by The Planet, unless I can figure out how to do it after installation (is that possible?). Better ideas in general for the server?
View 14 Replies
View Related
May 23, 2007
Just curious what your thoughts are on performance:
2 SCSI Drives 10k w/RAID 1
or
4 SATA 10k w/RAID 10
Prices are not too different with 4 drives just being a tad more.
View 5 Replies
View Related
Jun 5, 2007
I'm wondering how well software RAID can perform and how it compares to hardware RAID. How does software RAID actually work, and is it worth it?
How should I go about setting up software RAID if I were to do it? Or would you recommend just using hardware RAID instead?
View 2 Replies
View Related
Dec 10, 2007
Which do you guys recommend of the following?
4x 73GB 15,000rpm SAS drives in a RAID 10
or
4x 73GB 15,000rpm SAS drives in a RAID 5 w/ online backup
View 9 Replies
View Related
Nov 3, 2009
Are there any significant differences between 4x 15K SAS HDs in RAID 10 versus 8x 7.2K SATA II HDs in RAID 10? I have the same question for 2x 15K SAS HDs in RAID 1 versus 4x 7.2K SATA II HDs in RAID 10.
View 13 Replies
View Related
Apr 19, 2009
I'm currently using 4x 15K SAS in RAID 10 for the MySQL server behind a pretty busy forum; it has no I/O problems.
Now I'm going to migrate to a new server that I'm building soon. I have a choice of:
2 x Intel X25-E SSD RAID 1
or
4 x 15K Fujitsu SAS RAID 10
I will be using an Adaptec 2405 RAID card.
The OS will be installed on a separate hard drive.
If I go with the SAS setup, it will be about $200 cheaper.
Which one do you think is better for MySQL performance?
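If both configurations can be tried before going live, random I/O could be compared directly with sysbench's fileio test (a sketch; the file size and run time are arbitrary, and the old 0.4-style option syntax is assumed):

sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrw --max-time=300 --max-requests=0 run
sysbench --test=fileio --file-total-size=8G cleanup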
View 14 Replies
View Related
May 10, 2009
I have room for 4 more hard drives on my home server. My original goal was to go RAID 10, but I've been thinking: RAID 5 can support 4 drives and give more capacity. Which one would have better performance as software (md) RAID? I'm thinking RAID 10 might actually perform worse as software RAID than as hardware RAID, compared to RAID 5. Would RAID 5 with 4 drives be better in my case?
View 10 Replies
View Related
Mar 16, 2008
We are looking to build our first server, and collocate it. It will be a higher investment than just renting the server, but will be worth it in the long term, and we have already decided we are going to support the hosting business for a minimum of 3 years - so we might as well invest in a server from the outset to benefit from lower data center charges and higher redundancy and performance.
We are currently looking at Supermicro for servers, as they offer 1U barebones systems with dual hot-swappable PSUs and up to 4 hot-swappable drives. This would be ideal for redundancy, and also for taking advantage of the speed and redundancy that a RAID 10 array would give you. These two factors combined are very appealing, as they would reduce the possibilities of downtime and data loss. Obviously we will be backing up daily, but it's good for peace of mind to know that you could potentially blow a PSU and 2 hard drives, and your server would still be up long enough for a data center technician to replace the parts.
Now then, my business partner and I are currently deciding what the best all round hard drive configuration would be. He has decided that we should opt for SAS instead of SATA to have lower latency seek times, which would give us better performance. I agree, though this does increase costs considerably.
He is then arguing that we use RAID 5 on cost grounds. He says we should only use 3 of the slots to begin with, save money on one drive by not having a spare, and hope we don't have a drive failure - which, by sod's law, will happen. I'm not happy with us cutting corners to save money, because if we gamble and lose, that's a hell of a mess we'll have ourselves in, and it will cost us far more in time, reputation and data center charges to get ourselves out of it.
I say we might as well go for RAID 10 for the extra performance and redundancy; you can potentially lose 2 drives so long as they aren't from the same mirrored pair. With RAID 5 you can only lose one drive, it takes longer to rebuild onto a spare, and during the rebuild the performance takes a hit. Also, RAID 10 is much faster than RAID 5, at only the expense of the cost of one more drive.
Now the question we should be asking is... would a SATA2 RAID 10 array provide better performance than a SAS RAID 5 array?
So I think the choice we have to make is either go for RAID 5 and run with a hot spare, and stock a cold spare, or go with RAID 10 and stock 2 cold spares.
We are considering going with Seagate drives because they are high performance and have 5-year warranties. I have had to RMA two Western Digital drives in the past 12 months, a Raptor and a MyBook, and both deaths involved data loss.
The server is going to be a linux web, email, dns and mysql box. It will likely feature a single dual/quad core processor, and 4-8GB of unbuffered ddr2 ram.
View 7 Replies
View Related
Aug 23, 2007
I'm trying to build a physical RAID 0/5 box that can plug into any computer over SCSI.
What components do you recommend (case, CPU, motherboard, SATA...)?
This is my first time building a RAID box, so I don't really need expensive components.
View 8 Replies
View Related
Oct 22, 2007
A question though on RAID choices... I'm considering getting 3x 250GB SATA drives. Would it be better to make two of them a RAID-1 mirrored pair for my OS and home directories, and use the 3rd drive separately for backups, swap, and perhaps some logs... OR should I put all three drives into a RAID-5 set and treat it as a single logical drive?
My math says usable space would actually be identical, with 465GB usable in either setup. RAID-1 would be faster for I/O with no parity overhead... but the third drive would not be redundant. On the other hand, RAID-5 would be fully redundant but have parity overhead for writes.
I think I just sold myself on RAID-5, didn't I.
View 5 Replies
View Related
Apr 27, 2008
I would like to hear which configuration you think will be better for a hosting server.
I already have a RAID controller in the server.
I am more concerned with security.
View 12 Replies
View Related
Jan 5, 2009
I am trying to determine if I really got RAID 5 on a server from ServerLoft.
But I am not really sure how; fdisk only shows 1 HDD.
This is a screenshot from the iRMC, which hopefully helps. What do you guys think?
View 3 Replies
View Related
Apr 19, 2009
The answer to my previous question seems to be HARDWARE RAID, because of the server's ability to keep functioning during a rebuild.
The hardware is 4 X 1TB Western Digital RE3 Drives
However which configuration would you suggest, RAID 5 or RAID 10?
View 14 Replies
View Related
Sep 11, 2008
Is it good enough to run VPSes (Virtuozzo) on RAID 1 or 5? I know almost everyone uses RAID 10 for this purpose.
View 14 Replies
View Related
Dec 22, 2008
I want to buy a dedicated server from Limestone for VPS reselling.
Which must I choose?
With RAID or not?
RAID 1 is $45/month.
What is the difference between RAID 1 and no RAID?
View 12 Replies
View Related
Jul 24, 2008
If you've got 4 drives, let's say 4x 250GB,
which RAID option would you go for? RAID 5 or RAID 10?
Respectively, what is the usable space on both?
I heard that RAID 5 only needs 3 drives, so we might keep the 4th one as a backup - but are there any disadvantages to RAID 5 compared to RAID 10?
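For the space question: with 4x 250GB, RAID 5 across all four gives (4-1) x 250 = 750GB usable; RAID 10 gives 2 x 250 = 500GB; RAID 5 across three drives with the fourth kept aside also gives 500GB. The usual trade-off is that RAID 5 writes are slower (parity calculation) and its rebuild after a failure is longer and harder on the remaining drives than re-mirroring is under RAID 10.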
View 4 Replies
View Related