I am getting a new server with two 73GB hard drives and need to know the following:
1. I need to put /home on one 73GB drive and the other partitions, like /boot, /tmp, /usr and /var, on the other drive.
Where should I put /home: on the primary or the secondary drive? Is there any effect on speed? (See the layout sketch below.)
2. I am used to servers with one drive. Is there any difference when it comes to security applications such as APF, BFD, mod_security and other applications' settings?
3. In general, should I take the same actions when handling a server with one drive as with a server with two drives?
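A minimal sketch of one common two-drive split, assuming the drives show up as /dev/sda and /dev/sdb (the device names, filesystem and mount options are assumptions, not a recommendation from this thread):
# /dev/sda (first drive): /boot, /, /usr, /var, /tmp and swap
# /dev/sdb (second drive): a single partition dedicated to /home
# Example /etc/fstab line for the /home drive:
/dev/sdb1    /home    ext3    defaults,noatime    0 2
With a layout like this, home-directory I/O runs on a different spindle than system and log I/O.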
I'm building a couple of VPS host servers for a client.
Each server has to host 20 VPSes, and each will have 4 cores and 32GB of RAM. So CPU and RAM should be just fine; my question now is hard drives. The company owns the machines, but not the drives yet.
I searched a lot on your forums but found nothing relating to VPS. I'm basically a DBA IRL, so I have experience with hard drives when it comes to databases, but it's completely different for VPS.
According to my boss, each VPS will run a LAMP stack (having a separate DB cluster is out of the question for some reason).
First, RAID 1 is indeed a must. There is room for 2x 3.5" drives. I might be able to change the backplane for 4x 2.5", but I'm not sure...
I've come to several options:
2x SATA 7.2k => about $140
2x SATA 10k (VelociRaptor) => about $500
2x SAS 10k with PCIe controller => about $850
2x SAS 15k with PCIe controller => about $1000
They need at least 300GB storage.
But my problem is that the servers do not have SAS onboard, so I would need a controller, and in my case the cheapest solution is best.
But I'm not sure that SATA 7.2k will handle the load of 20 full VPSes.
Is it worth going with SAS anyway, or should SATA be just fine? And with SATA, is it better to use plain old 7.2k drives or 10k drives?
That's a lot of text for a simple question: what is best for VPS hosting, SATA 7.2k, SATA 10k or SAS 10k?
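One way to sanity-check whether a drive pair can keep up with 20 VPSes is to measure random I/O directly before committing to hardware. A hedged sketch with fio (the test file path, size and queue depth are assumptions; run it against a scratch filesystem, not production data):
fio --name=vpstest --filename=/tmp/fio.test --size=4G \
    --rw=randrw --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting
Comparing the reported random-read/write IOPS of a 7.2k SATA drive against a 10k or 15k drive gives a rough idea of how much headroom each option leaves.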
Do the old RLX Blade servers use 'mini' hard drives? I can't find an answer anywhere. I seem to recall that they use smaller 2.5" drives. Is this the case?
And if so, do they make "good" drives worthy of being in a server at that size, or are they essentially just laptop drives?
I am in a bit of trouble. I have five 750GB HDDs that I need backed up to another five (or so) 750GB HDDs so I can save the data stored on them. They are in a Linux box with an LVM setup; I also have a hardware RAID card in it, but I am not using any RAID level on them. After finding out what I could do with it, I decided to go to Windows 2003 on the server and set up RAID 5/6 on it.
It seems that I will have to give up all my data and have everything wiped off the hard drives. This is very sad for me, but I still have a chance to save the data. So I am thinking of copying it to another set of hard drives and then re-adding it once the new system is in place.
I was looking at this [url]
But that's clearly too expensive, as I just need to back up five hard drives (750GB each) and only need to do it one time. Does anyone have suggestions on how I should go about doing it? It doesn't have to be right away, but it's good to know my options.
Is there any place that does this kind of thing, where they let you rent a machine for a couple of hours for a fee so you can back up your data? The server is colocated and the hardware is mine, so I have every right to take it out and back it up with no problem from the datacenter.
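For a one-time copy like this, a second set of drives in the same box plus rsync is often enough. A minimal sketch, assuming the LVM volume is /dev/mapper/vg0-data and one of the new 750GB drives appears as /dev/sdf (all device names and mount points are assumptions):
mount /dev/mapper/vg0-data /mnt/source
mkfs.ext3 /dev/sdf1                          # prepare one of the new drives
mount /dev/sdf1 /mnt/backup
rsync -aH --numeric-ids --progress /mnt/source/ /mnt/backup/
rsync -aHn --itemize-changes /mnt/source/ /mnt/backup/   # dry-run pass to verify nothing was missed before wiping
Repeat per source volume and destination drive; only wipe the originals once the verification pass comes back clean.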
/dev/md0: ext3 mounted as / for all of the software RAID partitions.
I was led to believe this would provide redundancy as long as only one drive is removed from the array. But when I unplug any one of the hard drives, I get input/output errors, and when I try to reboot I get kernel sync errors.
What exactly am I doing wrong in trying to create redundancy? I know that sda contains the /boot partition, so the system wouldn't boot without that, but even if I only unplug sdb, sdc, or sdd it still can't sync.
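It may help to confirm what the array actually is and whether each disk can boot on its own: a RAID 0 or linear /dev/md0 has no redundancy at all, and even with RAID 1/5 the bootloader must be present on every member. A hedged sketch, assuming /boot is mirrored as the first partition on each disk (device names are assumptions):
cat /proc/mdstat                 # shows the RAID level and which members are active
mdadm --detail /dev/md0          # confirms level, state, and failed/spare counts
# If /boot is mirrored across the disks, install GRUB on the other members too (GRUB legacy syntax):
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
If /boot exists only on sda and is not part of a mirror, the system cannot boot without sda regardless of how the data array is configured.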
I want to try something different with our method of replacing or upgrading hard drives.
I want to be able to do most of it via our KVM/IP instead of babysitting the server(s) for so long in the DC.
My thought is: how can I add the new hard drive in the DC and move the data over via the KVM/IP? Can this be done with just a raw drive added (no new setup) using dd or even rsync, or is it better to set up a new installation of CentOS on the new drive and use rsync to move the data over? And then how do I get the proper drive to boot until I go back into the DC to remove the bad or old drive? I'd be interested in how some of you folks are doing this, as far as what's easiest and can be done over the KVM/IP once the new drive is connected.
Or, on systems that have two drives with cPanel/WHM, how can we temporarily, on an emergency basis, utilize the backup drive for a new setup, copy the data over from the failing drive, and then just swap in a replacement as the backup drive the next time we go into the DC? We have an external USB CD drive in place to allow remote installs... just curious whether anyone does something like this or has ideas on how we could make it work.
We use cloning software now, but we can end up babysitting a clone in the DC for a long time that way.
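One hedged way to do the copy entirely over KVM/IP once the new drive is cabled in, assuming the new disk appears as /dev/sdb and the running system lives on /dev/sda (device names, filesystem and paths are assumptions):
fdisk /dev/sdb                                   # partition the new drive as desired
mkfs.ext3 /dev/sdb1
mount /dev/sdb1 /mnt/new
rsync -avxH --numeric-ids / /mnt/new/            # -x stays on the root filesystem; repeat for /boot, /home, etc.
grub-install --root-directory=/mnt/new /dev/sdb  # make the new drive bootable (GRUB legacy)
# Finally edit /mnt/new/etc/fstab and /mnt/new/boot/grub/grub.conf so they reference the new root device or its label.
After that, the only hands-on work left in the DC is swapping the cables or pulling the old drive.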
Mountain View (CA) - As a company with one of the world's largest IT infrastructures, Google has an opportunity to do more than just search the Internet. From time to time, the company publishes the results of internal research. Its most recent project is sure to spark interest, as it explores how and under what circumstances hard drives work - or fail.
There is a rule of thumb for replacing hard drives, which has taught customers to move data from one drive to another at least every five years. But the mechanical nature of hard drives in particular makes these mass storage devices prone to error, and some drives may fail long before that five-year mark is reached. Traditionally, extreme environmental conditions are cited as the main reasons for hard drive failure, with extreme temperatures and excessive activity being the most prominent.
A Google study presented at the currently running Conference on File and Storage Technologies questions these traditional failure explanations and concludes that there are many more factors impacting the life expectancy of a hard drive and that failure predictions are much more complex than previously thought. What makes this study interesting is the fact that Google's server infrastructure is estimated to exceed 450,000 fairly mainstream systems that, to a large extent, use consumer-grade drives with capacities ranging from 80 to 400 GB. According to the company, the project covered "more than 100,000" drives that were put into production in or after 2001. The drives ran at platter rotation speeds of 5400 and 7200 rpm and came from "many of the largest disk drive manufacturers and from at least nine different models."
Google said that it collects "vital information" about all of its systems every few minutes and stores the data for further analysis. This information includes, for example, environmental factors (such as temperature), activity levels and SMART (Self-Monitoring, Analysis and Reporting Technology) parameters that are commonly considered good indicators of the health of disk drives.
In general, Google's hard drive population saw a failure rate that increased with the age of the drive. Within the group of hard drives up to one year old, 1.7% of the devices had to be replaced due to failure. The rate jumps to 8% in year 2 and 8.6% in year 3. The failure rate levels out thereafter, but Google believes that the reliability of drives older than 4 years is influenced more by "the particular models in that vintage than by disk drive aging effects."
Breaking out different levels of utilization, the Google study shows an interesting result. Only drives with an age of six months or younger show a decidedly higher probability of failure when put into a high activity environment. Once the drive survives its first months, the probability of failure due to high usage decreases in year 1, 2, 3 and 4 - and increases significantly in year 5. Google's temperature research found an equally surprising result: "Failures do not increase when the average temperature increases. In fact, there is a clear trend showing that lower temperatures are associated with higher failure rates. Only at very high temperatures is there a slight reversal of this trend," the authors of the study found.
In contrast, the company discovered that certain SMART parameters apparently do have a bearing on drive failures. For example, drives typically scan the disk surface in the background and report errors as they discover them. Significant scan errors can hint at surface defects, and Google reports that fewer than 2% of its drives show scan errors. However, drives with scan errors turned out to be ten times more likely to fail than drives without them. About 70% of Google's drives with scan errors survived the first eight months after the first scan error was reported.
Similarly, reallocation counts, a number that results from the remapping of faulty sectors to new physical sectors, can have a dramatic impact on a hard drive's life: Google said that drives with one or more reallocations fail more often than those with none. The observed impact on the average failure rate came in at a factor of 3-6, while about 85% of the drives survived past eight months after the first reallocation.
Google discovered similar effects in other SMART categories, but the bottom line revealed that 56% of all failed drives had no count in any of these categories - which means that more than half of all failed drives were put out of operation by factors other than scan errors, reallocation counts, offline reallocations and probational counts.
In the end, Google's research does not solve the problem of predicting when hard drives are likely to fail. However, it shows that temperature and high usage alone are not responsible for failures by default. The researchers also pointed to a trend they call the "infant mortality phase" - a time frame early in a hard drive's life that shows an increased probability of failure under certain circumstances. The report lacks a clear-cut conclusion, but the authors indicate that there is no promising approach at this time that can predict hard drive failures: "Powerful predictive models need to make use of signals beyond those provided by SMART."
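For anyone who wants to watch the same SMART counters the study discusses on their own hardware, smartmontools exposes them; a minimal sketch (the device name is an assumption):
smartctl -A /dev/sda | egrep -i 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|Seek_Error_Rate'
A non-zero and growing reallocated or pending sector count is the kind of signal the study associates with a markedly higher failure probability.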
I just had an additional 500GB hard drive added and mounted it at /home2.
There are files in /home1 (the original HD) that will need to be constantly moved over to /home2 via FTP.
But I keep getting this error:
550 Rename/move failure: Invalid cross-device link
Does anyone have any ideas? I tried changing permissions but had no luck, and I also tried mounting the second hard drive inside a directory in /home1. It still gives the error.
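The error comes from the FTP "move" being a rename, and rename(2) cannot cross filesystems; /home1 and /home2 stay separate devices no matter where /home2 is mounted. The move has to become a copy plus delete. A hedged shell equivalent run on the server itself (paths are assumptions):
mv /home1/user/file /home2/user/                             # mv falls back to copy-and-delete across devices
rsync -a --remove-source-files /home1/user/outbox/ /home2/user/inbox/   # alternative for recurring transfers
In the FTP workflow itself, that means downloading and re-uploading (or having a server-side script do the copy) rather than issuing a rename across the two mounts.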
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives on a new level. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
Does anyone have experience with SSD drives in a server environment? I've now seen a few offers with SSDs (Intel) and am wondering whether the speed difference is noticeable.
Are they worth it? From what I have been reading, they are superior in reliability but have issues with limited write cycles.
I'm currently running Dell 1750s, 1850s, and 1950s in a colo facility. I am not happy with the power consumption of the 1850s and 1950s. My 1950s have a single quad-core 5310, 2GB of memory, dual 15k 73GB drives and dual power supplies, and they run at about 1.9 amps with spikes up to 2.4 amps. My applications are disk-bound and the servers typically run at a load of 0.1 to 0.2.
I'm looking for alternatives to the 1950 that use significantly less power. I need at least 2 Hot Plug SAS drives and would like to have it in 1U. I run 2GB of memory. Dual power supplies would be nice, but are not absolutely necessary. I'd rather not go with a non-hot plug solution, but may have to consider it. I will probably buy 10-15 servers soon and would like them to be identical. I'd prefer buying a name-brand.
I have a couple of Dell 1950s, and in one of them I have two Seagate 15K.5s that I purchased through Dell. I also have a spare, also from Dell, sitting in my rack in case one goes bad.
I am going to repurpose one of my other 1950s and was going to get two more 15K.5s for it, but I wasn't planning on getting them through Dell (rip-off?). This way, I could still keep the same spare drive around in case a drive went bad in that system as well.
When I was talking to my Dell rep recently while purchasing another system, their hardware tech said you can't use non-Dell drives with Dell drives in the same RAID array because of the different firmware between them.
Does anyone know if that is true? Does anyone have experience using drives from Dell in conjunction with the same model drives from a third-party retailer?
A week ago I decided to rent another dedicated box, install CentOS 5 64-bit and use LiteSpeed as the web server. What seemed trivial at the start became a nightmare later.
I was unable to compile PHP 5 with both the --with-litespeed and --with-curl directives. If I removed one of them it was fine, but together they didn't work.
I tried searching forums but nothing helped, so I decided to go back to CentOS 5 32-bit and try there. It compiled OK. So I'm in trouble now. I wanted a server with a 64-bit OS + LiteSpeed because of its speed, low resource consumption and good DoS protection. I asked several questions about the advantages of 64-bit over 32-bit, and the most important one was how many issues I could expect on a 64-bit OS (library availability). Almost every reply said that going 64-bit is problem-free. My experience says it's not as easy as I expected.
I haven't found any good DoS protection for Apache so far; mod_evasive doesn't work as I expected. What do you recommend? Should I stay with CentOS 64-bit + Apache with everything installed via yum (which should work fine), or go with CentOS 5 32-bit + LiteSpeed? The LiteSpeed I'm talking about is the Enterprise edition.
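If the 64-bit failure was the common one of configure not finding the 64-bit curl libraries, a hedged build sketch worth retrying on CentOS 5 x86_64 (the prefix and paths are assumptions; --with-libdir=lib64 is usually the part that matters on 64-bit CentOS):
yum install curl-devel
./configure --prefix=/usr/local/php \
    --with-libdir=lib64 \
    --with-curl=/usr \
    --with-litespeed
make && make install
If configure still fails, the exact error near the end of config.log normally names the library it could not locate.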
It looks like Dell mainly sells SAS rather than SCSI now. For a high-load server, SCSI and SAS will do better than SATA, but SAS is a lot more expensive. I want to ask: do you use SAS HDDs to run your hosting servers, and is it worth using SAS now?
What is the best way to set up my server hard drives for shared hosting? I have yet to purchase the hard drives. I was thinking of using two 250GB HDs or maybe a single 500GB HD. What has your experience been with this in the past? Should I aim to use SATA or IDE? My thought is that it's better to have two drives for redundancy and performance.
My server hard disk has crashed badly. The server's rescue function is of no help, so I've tried using some recovery software to get my data back.
I've tried Easy Recovery Professional. It sorts all the files by file type into different folders. I found a folder named .DB, and there are also some .ado and .ldb folders. I guess one of them is my database. The problem now is that I don't know how to read the files.
Do you have any idea how to read them? I've tried many recovery tools, e.g. DiskInternals Linux Recovery and Disk Doctor Linux Recovery.
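Before opening the recovered folders in a database tool, it may be worth letting file(1) guess what the blobs really are, since recovery suites often invent extensions. A minimal sketch (the recovery path is an assumption):
file /recovered/.DB/* | less
A MySQL database from a Linux server, for instance, would typically surface as .frm/.MYD/.MYI table files or InnoDB data files rather than .DB or .ldb.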
Not sure if I labeled that correctly, but I am looking to set up a multi-server environment where I offer a cloud SSD hosting plan and a SATA hosting plan. The current setup has SSD hosting, but I'd like to add another IP address, as well as its hard drive, to host other websites on that specific server, which is SATA-based.
For example, I add a domain to my Plesk 12 admin account and choose the added IP address (the SATA-based one), so that it points to that server to access the files for that specific website.
At the moment, CloudFlare handles all of my DNS settings, but I am totally lost on how this needs to be set up and whether I am required to purchase another Plesk license. I am trying to avoid purchasing another Plesk license and having to set up a whole new Plesk installation just to do this. This is a VPS, by the way, not a dedicated server...
I have a new CentOS 7 server with Plesk 12; CentOS 7 uses the XFS filesystem by default.
I tried to migrate sites from another Plesk server, but the Plesk agent says: "hard disk quota is not supported due to configuration of server file system" (on my CentOS 7 box).
I added "usrquota,grpquota" to the mount options and ran mount -o remount /, but when I try quotacheck -fmv / I get this:
[root@ns ~]$ quotacheck -fmv /
quotacheck: Skipping /dev/mapper/centos-root [/]
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
But the quotaon command works:
[root@ns ~]$ quotaon /
quotaon: Enforcing group quota already on /dev/mapper/centos-root
quotaon: Enforcing user quota already on /dev/mapper/centos-root
The question is why Plesk does not recognize quotas as enabled on CentOS 7.
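For what it's worth, XFS does not use quotacheck and cannot have quotas switched on by a remount; for the root filesystem the quota flags have to be passed as kernel boot options. A hedged sketch for a stock CentOS 7 layout (adjust the grub paths if yours differ):
# add rootflags=uquota,gquota to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# after the reboot, check with the XFS tools rather than quotacheck:
xfs_quota -x -c 'state' /
xfs_quota -x -c 'report -h' /
Once xfs_quota reports user and group accounting as ON for /, Plesk has a real chance of seeing quotas as supported.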