I would really like to know which hard drive brand you have had the best success with in terms of server hard drive reliability. Is it Seagate or Western Digital, or one of the other brands? Please vote. This poll is specifically about SATA2 hard drive experiences in servers; please do not factor in SCSI hard drives.
I am currently using a 320GB SATA hard drive as my primary drive, with no second drive, and the server runs a pure download site. The top command shows 4.5% wa. Is that a bit high? Could I add a second hard disk and move some data there to reduce the wa, or is there some other way to reduce it? The overall load on the server is fine.
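For what it's worth, one common way to see whether that %wa actually corresponds to a saturated disk is the sysstat tools; a minimal sketch (intervals are just examples):

```shell
# From the sysstat package: extended per-device stats, refreshed every
# 5 seconds. High %util on the drive confirms a disk bottleneck.
iostat -x 5

# vmstat's "wa" column shows the same iowait figure that top reports,
# alongside blocks read/written (bi/bo) per interval.
vmstat 5
```

If %util stays near 100% while the CPU is mostly idle, moving part of the data to a second spindle is a reasonable fix.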
[cPanel smartcheck] Possible Hard Drive Failure Soon
ATA Error Count: 1512 (device log contains only the most recent five errors)
Error 1512 occurred at disk power-on lifetime: 11736 hours (489 days + 0 hours)
Error 1511 occurred at disk power-on lifetime: 11736 hours (489 days + 0 hours)
Error 1510 occurred at disk power-on lifetime: 11736 hours (489 days + 0 hours)
Error 1509 occurred at disk power-on lifetime: 11736 hours (489 days + 0 hours)
Error 1508 occurred at disk power-on lifetime: 11736 hours (489 days + 0 hours)
----END /dev/sda--
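That report comes from the drive's SMART log. A sketch of pulling the same data yourself with smartmontools, assuming the device is /dev/sda as in the report:

```shell
# Full SMART report, including the ATA error log shown above.
smartctl -a /dev/sda

# Kick off a long offline self-test; check the outcome later with
# "smartctl -l selftest /dev/sda".
smartctl -t long /dev/sda
```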
What would you advise me to do? Ask the DC to replace the hard drive, or wait until it actually fails?
I have a terminal server with a bunch of applications on it, among which is a database-driven app. There have been complaints that access to the DB is sluggish. Right now the server is on just one 7.2K drive.
I am guessing it's a hard drive bottleneck because memory and CPU usage seem okay. I have a few ideas; please tell me which of these you would recommend.
1. Upgrade the entire server to a RAID 10 system.
2. Upgrade the entire server to a 5-disk RAID 5 system.
3. Create two RAID 1 setups: one for the OS and regular apps, the second to host the DB.
4. Create a RAID 1 setup for the OS and regular apps, and a RAID 5 setup for the DB.
Ideally I would like to see improved read/write speeds on both regular files/apps and on the database. The RAID 10 system was what I was leaning towards at first because the striping increases throughput, but then I realized I may see better performance by keeping the regular files and the DB on independent setups, so that OS and file reads/writes won't affect the DB reads/writes.
I have never had a hard drive fail on me, and I don't think the I/O on my servers would ever warrant it, but I'm looking to ask people who have had a hard drive die on them the following:
1. Were you cautious before your hard drive failure, or did you lose data? If you did lose data, did it make you buck up your ideas (i.e. do you now have a solid backup plan, or do you still play maverick)?
2. If you have had a hard drive failure, do you think it has made you overly cautious?
3. Has a hard drive failure ever swayed you towards the more expensive Raptor drives? If so, why?
I just purchased a 500GB ATA hard drive (I meant to get SATA but must have misordered; I don't think it makes that much difference) to replace an 80GB SATA hard drive in a Windows Server 2003 server. There are two partitions on the 80GB drive, and rather than add the new HD as a third partition, I would like to clone the data from the 80GB drive to the 500GB drive, then increase the partition sizes and remove the 80GB drive completely. Is there a specific method or software I can use to accomplish this?
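One generic approach, if no dedicated cloning software is at hand, is a raw copy from a live Linux environment with both drives attached; the device names here are assumptions that must be verified first (getting them backwards destroys the source disk):

```shell
# ASSUMPTION: /dev/sdb is the old 80GB disk, /dev/sdc the new 500GB one.
# Both must be unmounted. conv=noerror,sync keeps going past read errors.
dd if=/dev/sdb of=/dev/sdc bs=1M conv=noerror,sync
```

After booting from the new drive, the partitions can be grown into the extra space, e.g. with diskpart on Windows Server 2003 or a partitioning tool from the live environment.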
I am going to mount my new hard drive at /home/user/public_html, but I have a question before I do so. If I upload stuff to /home/user/public_html, will it go to the new hard drive that I just mounted there, or will it still go on the old hard drive?
The reason why I want to mount it there is because my other hard drive ran out of space, and I would like to continue this user account's space without having to move this user.
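Anything written under a mount point lands on the mounted filesystem, but mounting also hides whatever was already in that directory on the old drive, so the existing files need to be copied across first. A sketch, assuming the new drive's partition is /dev/sdb1 and it is formatted ext3:

```shell
# Copy the current contents onto the new drive via a temporary mount,
# preserving ownership and permissions (-a).
mkdir -p /mnt/newdrive
mount /dev/sdb1 /mnt/newdrive
cp -a /home/user/public_html/. /mnt/newdrive/
umount /mnt/newdrive

# Now mount it in its final place; new uploads go to the new drive.
mount /dev/sdb1 /home/user/public_html

# Make it permanent across reboots:
echo '/dev/sdb1 /home/user/public_html ext3 defaults 0 2' >> /etc/fstab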
From your experiences, which manufacturer has been the most reliable for you? What kind of life expectancy can I expect from a new hard drive in a server environment? Also, what kind of quality/life can I expect from refurbished drives?
Thanks in advance.
By the way, I'm specifically looking at 10,000RPM SCSI drives. Both U-160 and U-320.
I'm currently hosted on a VPS server with Galaxy Solutions, but just last night I was informed of a hard drive failure.
This morning they said the DC couldn't recover the data, and now they are trying to recover the data themselves. Right now I'm assuming that if the DC can't recover it, then there isn't much of a chance of it being recovered at all.
I've put in countless hours and dedicated so much of my time to my sites, something which certainly cannot be compensated for. I would like to exhaust any possible way to recover the data.
Can anyone recommend what I should do in a situation like this? Would it be advisable to consult a data recovery specialist? It would be great if you could also recommend one.
The cost doesn't matter. I'm extremely frustrated, annoyed, and confused because of all this. Just like that, all my work is gone.
I did have backups, but they were on the same server.
I'm curious what others are using in their servers when it comes to preferred hard drive manufacturer.
We used to use only Seagate, but in some recent servers we switched to Western Digital Caviar Black drives (1TB) and experienced one failure after 30 days. Not that this makes us think these are bad drives, as we have had failures with Seagate as well, but I would really like to know if there is any difference in reliability between manufacturers.
I have my backup disks here because my server got hacked and we didn't like how LiquidWeb handled things, so we asked them to ship us the disks. They ran PhotoRec and recovered lots of .gz files from them, all of the accounts I would say. But about 50% of the .tar.gz files came out corrupt, and unfortunately that includes all the big accounts. So far I haven't seen any corrupted files among the MySQL data, and I think MySQL is the most important thing for all clients.
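A quick way to sort the good archives from the bad ones is gzip's built-in integrity test, which checks the compressed stream without extracting anything; the directory name here is a placeholder:

```shell
# ASSUMPTION: the recovered archives live in ./recovered
dir=./recovered
for f in "$dir"/*.tar.gz; do
    gzip -t "$f" 2>/dev/null || echo "corrupt: $f"
done
```

Archives that pass `gzip -t` may still have a truncated tar inside, so `tar -tzf` on the survivors is a worthwhile second pass.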
This week I reached the pinnacle of the all-time greatest screw-ups I've made with a web server in five years. During a routine upgrade, my server crashed and basically burned, to the point that the technicians at Burst/Nocster felt it would be in my best interest to clear the server out and do a fresh restore.
Fortunately, I had the majority of the files and designs I've done in a safe backup. Unfortunately, the MySQL database I had was not so fresh and recent. Therein lies my big problem, on an issue I really have not seen much information about.
We all know of the mysqldump command that can be used to back up databases and save a .sql file on the server. It's quick, easy, and relatively painless. The problem I have run into is
This would be the command I'd use for my normal mysql dump. However, all of my files and past server information have been installed on a slave hard drive temporarily until I can gather everything I need. Therefore, the command above won't work, because it looks for a mysql database and user that do not exist on the new server. I currently have the slave hard drive mounted at
/mnt/olddrive/
So for example, to get to the website that would have that particular database
/mnt/olddrive/home/nqforum/
So my question for those who know anything about slave hard drives and MySQL: how can I get a current backup of this database saved to a location? Once it is saved as a .sql file somewhere, I can simply run a MySQL restore command over SSH to bring it back.
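One approach is to start a throwaway MySQL instance pointed at the old data directory on the slave drive and dump from that. A hedged sketch; the data directory path assumes a typical Linux layout under the mount point, and the database name is a placeholder:

```shell
# ASSUMPTION: the old MySQL data lives at /mnt/olddrive/var/lib/mysql
# and the database name is "nqforum_db" (placeholder).
# Start a second mysqld on a spare port/socket, skipping the old grant
# tables so the missing user doesn't matter:
mysqld_safe --datadir=/mnt/olddrive/var/lib/mysql \
            --socket=/tmp/oldmysql.sock --port=3307 \
            --skip-grant-tables &

# Once it is up, dump through that socket:
mysqldump --socket=/tmp/oldmysql.sock nqforum_db > /root/nqforum_db.sql
```

For pure MyISAM tables, simply copying the database's directory out of the old datadir into the live one can also work, but the dump-and-restore route is safer across versions.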
I am trying out all sorts of new drives and RAID card adapters, because I am finding the 3ware cards I have been faithfully using really suck lately.
We are testing them against Areca and Adaptec (although I really hate Areca from a prior bad experience)...
We are using bonnie right now... of course we plan to play around with Xen, a couple of domains, prime95, some dd runs, and then run bonnie again... kind of a stress test.
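For reference, a typical invocation of bonnie++ (the maintained successor to bonnie) for this kind of array benchmark; mount point, size, and user are assumptions:

```shell
# ASSUMPTION: the array under test is mounted at /mnt/test.
# -s: test file size; use at least 2x RAM so the page cache
#     doesn't flatter the numbers.
# -u: unprivileged user to run as when invoked from root.
bonnie++ -d /mnt/test -s 16g -u nobody
```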
The problem is that I have /dev/sda5 mounted at /var. I want cPanel to use this HDD as the main storage for the /home directory. How do I make cPanel use this HDD for storage?
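cPanel stores accounts under /home, so the generic fix is to put the partition there rather than at /var. A rough sketch, under the big assumptions that /dev/sda5 can be freed from /var (its contents moved back to the root filesystem first) and that this is done from single-user mode:

```shell
# ASSUMPTION: /dev/sda5 has already been emptied of /var's data and
# unmounted. Copy the existing /home onto it, then mount it as /home.
mkdir -p /mnt/tmpmount
mount /dev/sda5 /mnt/tmpmount
cp -a /home/. /mnt/tmpmount/
umount /mnt/tmpmount
mount /dev/sda5 /home

# Finally, change the /dev/sda5 line in /etc/fstab from /var to /home.
```

Once the partition is mounted at /home, cPanel needs no special configuration; it simply sees the larger filesystem.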
Is it possible for me to access my server's hard drive the same way I would any other drive on my computer? Kind of like with SSH, but in a file browser instead of a command line.
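On a Linux desktop, one tool that does exactly this over SSH is sshfs (FUSE); the hostname below is a placeholder:

```shell
# Mount the server's filesystem at ~/server; it then behaves like a
# local directory in any file manager.
mkdir -p ~/server
sshfs root@server.example.com:/ ~/server
ls ~/server/home

# Unmount when done.
fusermount -u ~/server
```

On Windows, SFTP clients such as WinSCP give a similar two-pane file view over the same SSH connection.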
It looks like Dell mainly sells SAS now, rather than SCSI. For a high-load server, SCSI and SAS will do better than SATA, but SAS prices are a lot higher. I want to ask: do you use SAS HDDs to run your hosting servers, and is SAS worth using now?
I am currently in the process of upgrading my web/mysql server due to heavy loads and io waits and have some questions. I am trying to be cost efficient but at the same time do not want to purchase something that will be either inadequate or difficult to upgrade in the future. I hope you can provide me with some guidance.
This server is a CentOS Linux box, running both Apache and MySQL. The current usage on the box is:
Mysql Stats:
50 MySQL queries per second, with a read-to-write ratio of 2:1. Reads are about 65 MB per hour and writes are around 32 MB per hour.
Apache stats:
35 requests per sec
The two issues that I am unsure of are:
- Whether I should go with RAID-1 or RAID-5
- Whether I should use SATA Raptor drives or SAS drives
In either configuration I will use a dedicated Raid controller. If I went with SATA, it would be a 3ware 9650SE-4LPML card. If I went with SAS, I was looking at the Adaptec 3405 controller.
Originally, I was going to use 3 x 74GB Seagate Cheetah 15K.4 SAS drives in a RAID-5 config. After more reading, I learned that RAID-5 has a high write overhead. Though read is definitely more important based on my stats, I don't want to lose performance on my writes either. With this in mind, I looked into doing RAID-1 instead.
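That RAID-5 write overhead can be put into rough numbers: each small random write becomes four I/Os (read data, read parity, write data, write parity), while RAID-1 only duplicates the write to both mirrors. The per-drive IOPS figure below is an assumed ballpark for a 15K drive, not a measurement:

```shell
# Back-of-envelope small-write IOPS comparison.
DISK_IOPS=175   # rough assumed figure for one 15K drive
echo "RAID-1 (2 disks) write IOPS: $(( DISK_IOPS * 2 / 2 ))"
echo "RAID-5 (3 disks) write IOPS: $(( DISK_IOPS * 3 / 4 ))"
```

By this crude arithmetic a 2-disk RAID-1 actually out-writes a 3-disk RAID-5, which is why write-sensitive database setups tend to favor mirrors.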
I came up with these choices:
- Raid-1 - 2 x Seagate ST373455SS Seagate Cheetah 15K.5 SAS. HDs & controller costs are $940.
- Raid-1 - 2 x WD Raptor 74GB 10K SATA 150. HDs & controller costs are $652.
- Raid-5 - 3 x Seagate Cheetah 15K.4 ST336754SS 36.7GB. HDs & controller costs are $869.
- Raid-5 - 3 x WD Raptor 36GB 10K SATA 150. HDs & controller costs are $631.
As you can see we are not looking at huge differences in price, so I would be up for any of these options if I could just determine which would give me the best performance. I also know that I should have a 4th hotspare drive, but will buy that later down the road to ease cash flow in the beginning. If I went the SATA route, I would buy the 4th immediately.
From what I can tell, both configs provide the same redundancy, but are there any major performance considerations I should take into account? From what I have read, SCSI/SAS can enable database applications to perform better due to lots of small, random reads and writes?