I have a couple of Dell 1950s, and in one of them I have 2x Seagate 15K.5s that I purchased through Dell. I also have a spare sitting in my rack in case one goes bad, also from Dell.
I'm going to be repurposing one of my other 1950s and was going to get two more 15K.5s for it, but I wasn't planning on getting them through Dell (rip-off?). That way I could still keep the same spare drive around in case a drive went bad in that system as well.
When I was talking to my Dell rep recently while purchasing another system, their hardware tech said you can't mix non-Dell drives with Dell drives in the same RAID array because of the different firmware between them.
Does anyone know if that's true? Does anyone have experience using drives from Dell alongside the same model drives from a third-party retailer?
We have one box at hivelocity.net that has been down so many times this month that we were forced to remove our links to SiteUptime, where we were once so proud of a 99.7% uptime over 3 years at ThePlanet.
syslog shows that just before crashing, these entries were made:
kernel: kernel BUG at mm/rmap.c:479
kernel: invalid operand: 0000 [#1]
dmesg also shows this:
...
Brought up 2 CPUs
zapping low mappings.
checking if image is initramfs... it is
Freeing initrd memory: 482k freed
NET: Registered protocol family 16
PCI: PCI BIOS revision 2.10 entry at 0xf9f20, last bus=1
PCI: Using configuration type 1
mtrr: v2.0 (20020519)
mtrr: your CPUs had inconsistent fixed MTRR settings
mtrr: probably your BIOS does not setup all CPUs.
mtrr: corrected configuration.
...
I've googled these messages and they point to RAM problems.
hivelocity.net claims to have run diagnostics on the box and that no problems were reported.
They said this is the result of a system configuration problem on our end.
Last year I ordered a new server with CentOS 4.3, which came with kernel 2.6.9-34.0.2ELsmp installed. It ran fine and I haven't updated any packages since then.
Today I started having a problem where both mysqld and kswapd0 use very high amounts of CPU, spiking up to 100%, and my memory usage is at 99% all the time. The problem seems exactly the same as the one mentioned in this thread.
In that thread the exact same kernel is said to be insecure and to cause this problem. I also came across a CentOS bug report describing the same thing: high CPU and memory usage, with mysqld and kswapd0 consuming all resources.
In the linked thread the person solved the problem by upgrading to kernel 2.6.9-42 using RPMs, but others recommended a newer kernel or a custom-compiled kernel for CentOS.
Apparently when they ran yum, it said 34.0.2 was the latest kernel.
What should I do to upgrade the kernel, which version should I upgrade to, and where do I get it from? I won't be able to compile a custom kernel, and I've only installed basic RPM packages before.
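For what it's worth, you can sanity-check whether a running kernel predates the one the linked thread reports as fixed before touching yum at all. A minimal sketch (the two version strings are taken from the posts above; `sort -V` is GNU coreutils version sort, and on the real box you'd substitute `$(uname -r)` for the hard-coded value):

```shell
# Compare the running kernel release against the first version reported
# as fixed (2.6.9-42, per the linked thread). Hard-coded for illustration;
# replace "current" with $(uname -r) on the actual server.
current="2.6.9-34.0.2"
fixed="2.6.9-42"
newest=$(printf '%s\n%s\n' "$current" "$fixed" | sort -V | tail -n1)
if [ "$newest" = "$fixed" ] && [ "$current" != "$fixed" ]; then
    echo "running kernel is older than $fixed: upgrade (e.g. yum update kernel-smp)"
fi
```

If yum's mirrors really are stale, the usual fallback on CentOS 4 is to fetch the errata kernel RPM from a current mirror and install it with `rpm -ivh` (install, not upgrade, so the old kernel stays bootable as a fallback).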
I know there are a lot of experienced hardware guys on here, so I wanted some input on 1.5TB drives. Are they reliable enough to be used in non-mission critical storage servers? 99% of what we do is OEM (Dell) equipment, so I don't test raw hardware much these days.
I've read a lot of negative things about Seagate lately. Can anyone chime in with specific models they've had positive or negative experiences with, from any vendor? Reading some reviews of the WD 1.5TB Caviar Black drives, there seem to be some weird issues with them going into a recovery cycle.
I was just wondering what the real-life experience or difference is in using the Western Digital Caviar series (Green, Blue, or Black) in a DC environment vs. the RE series, which is supposed to be for enterprise use.
On the WDC website the Caviar series is targeted at desktop use, not servers. But a lot of servers and providers use them. If you have servers you're supposed to use the RE series. I exclude Raptors, as I only want to compare medium-performance disks here.
I'm building a couple of VPS host servers for a client.
Each server has to host 20 VPSes, and each server will have 4 cores and 32GB of RAM. So CPU and RAM should be just fine; my question now is hard drives. The company owns the machines, but not the drives yet.
I searched a lot on your forums but found nothing relating to VPS hosting. I'm basically a DBA IRL, so I have experience with hard drives when it comes to databases, but it's completely different for VPSes.
According to my boss, each VPS will run a LAMP stack (a separate DB cluster is out of the question for some reason).
First, RAID 1 is indeed a must. There is room for 2x 3.5" drives. I might be able to change the backplane to 4x 2.5", but I'm not sure...
I've come up with several options:
- 2x SATA 7.2k => about $140
- 2x SATA 10k (VelociRaptor) => about $500
- 2x SAS 10k with PCIe controller => about $850
- 2x SAS 15k with PCIe controller => about $1000
They need at least 300GB storage.
But my problem is that the servers do not have onboard SAS, so I need a controller, and in my case the cheapest solution is best.
But I'm not sure that SATA 7.2k will handle the load of 20 full VPSes.
Is it worth going with SAS anyway, or should SATA be just fine? And with SATA, is it better to use plain old 7.2k drives or 10k ones?
That's a lot of text for not much. In short: which is best for a VPS host, SATA 7.2k, SATA 10k, or SAS 10k?
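One back-of-the-envelope way to frame the choice is random IOPS per VPS on the mirror. The per-drive figures below are common rules of thumb, not measurements (roughly 80 random IOPS for a 7.2k SATA drive, 130 for a 10k drive, 180 for 15k SAS), and the 70/30 read/write mix is an assumption; with RAID 1, reads can be served by either disk while writes hit both:

```shell
# Rough IOPS budget per VPS on a 2-drive RAID 1 mirror.
# Per-drive IOPS numbers are rules of thumb (assumptions, not benchmarks).
# RAID 1: read IOPS ~ 2x one drive, write IOPS ~ 1x one drive.
iops_per_vps() {
    per_drive="$1"   # rule-of-thumb random IOPS for one drive
    n_vps="$2"       # number of VPSes sharing the mirror
    # assume 70% reads / 30% writes; awk for the floating-point math
    awk -v d="$per_drive" -v n="$n_vps" \
        'BEGIN { printf "%.1f\n", (2*d*0.7 + d*0.3) / n }'
}

iops_per_vps 80 20    # SATA 7.2k
iops_per_vps 130 20   # SATA 10k
iops_per_vps 180 20   # SAS 15k
```

Whether ~7 IOPS per VPS (the 7.2k case) is enough depends entirely on how database-heavy the LAMP workloads are, which is why the seek-heavy 10k/15k drives are usually recommended for consolidation.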
Does anybody know the best place to get a really cheap server with at least a 250GB drive? I'm assuming most providers offer HDDs of that size on relatively cheap systems now if we're just looking at SATA.
The machine doesn't need to be anything special, I don't need a ton of bandwidth.
Basically this will be an extra backup machine to pull backups from servers instead of my usual "pushing" of backup data.
So to clarify, I'm looking for a simple machine pretty much anywhere with some drive space! A VPS just won't cut it because the drive space they provide is too expensive (yes, I understand they have nice drive setups though).
RAID etc. is not needed; I'm not running anything mission-critical, but I would like to have more locations in place to hold backups for me. WHT worries me a lot.
I can't find offshore providers with 10Krpm+ HDs, and they have to have good support. Also, I need at least 2TB of transfer over 100Mbit.
The reason I need it to be offshore is that my client wants to run a subtitles site, and I'm not exactly sure whether that's legal in America or the UK. The Netherlands or Germany is preferred. I looked at SwiftNOC, but I'm not sure if they have 10Krpm hard drives.
I am thinking of purchasing Samsung Spinpoint F1 drives, either the 750GB or the 1000GB model. The purpose would be to put them in a large RAID array (e.g. 14 drives in RAID 10/RAID 50). The price and performance look good. However, I have read many mixed reviews of the drives. Does anybody have any experience with them? Again, this will not be used in a desktop environment, but a server environment. The OS would be Win2K3 or CentOS.
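For comparison, the usable-capacity math for those two layouts works out quite differently. A small sketch, assuming 14 identical 1000GB drives and RAID 50 built as two 7-drive RAID 5 spans (the span count is a choice, not a given):

```shell
# Usable capacity for a 14-drive array, in GB.
# RAID 10 mirrors pairs: half the raw capacity is usable.
# RAID 50 stripes RAID 5 spans: one drive of parity is lost per span.
raid10_usable() {  # args: n_drives drive_gb
    echo $(( $1 / 2 * $2 ))
}
raid50_usable() {  # args: n_drives drive_gb n_spans
    echo $(( ($1 - $3) * $2 ))
}

raid10_usable 14 1000     # 7000 GB usable
raid50_usable 14 1000 2   # 12000 GB usable
```

So RAID 50 nearly doubles the usable space here, at the cost of much slower rebuilds and worse random-write behavior, which matters for a server workload.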
I am in a little bit of trouble. I have a couple (5) of 750GB HDDs that I need backed up to another couple (5?) of 750GB HDDs so I can save the data stored on them. They are in a Linux box with an LVM setup; I also have a RAID card in it, but I'm not using any RAID level on the drives. After finding out what I could do with it, I decided to move the server to Windows 2003 and set up RAID 5/6 on it.
It seems that I will have to give up all my data and have everything wiped from the hard drives. This is very sad for me, but I still have a chance to save the data. So I am thinking of copying it to another bunch of hard drives and then re-adding it once the new system is in place.
I was looking at this [url]
But that's clearly too expensive, as I just need to back up 5 hard drives (750GB each), and only one time. Does anyone have any suggestions on how I should go about doing it? It doesn't have to be right away, but it's good to know my options.
Is there any place that does this kind of thing, where they let you rent a machine for a couple of hours for a fee so you can back up your data? The server is colocated and the hardware is mine, so I have every right to take it out and back it up with no problem from the datacenter.
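Since the box is already Linux with LVM, one cheap way to do the one-time copy is a drive-to-drive tar pipe once each backup disk is attached and mounted. A sketch (the mount points in the example call are placeholders, not your actual paths; ideally mount the source read-only first):

```shell
# Copy the full contents of one mounted volume to another, preserving
# permissions and timestamps (-p). Repeat once per source/backup pair.
copy_volume() {
    # $1 = source mount point, $2 = destination mount point
    tar -C "$1" -cpf - . | tar -C "$2" -xpf -
}

# Example (hypothetical mount points):
# copy_volume /mnt/lvm-data1 /mnt/backup1
```

This avoids buying any appliance: five spare 750GB drives plus a USB/SATA dock or a free drive bay is all the hardware the job actually needs.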
I'm currently running Dell 1750s, 1850s, and 1950s in a colo facility. I am not happy with the 1850s' and 1950s' power consumption. My 1950s have a single quad-core 5310, 2GB of memory, dual 15k 73GB drives, and dual power supplies, and are running at about 1.9 amps with spikes up to 2.4 amps. My applications are disk-bound and the servers typically run at a load of 0.1 to 0.2.
I'm looking for alternatives to the 1950 that use significantly less power. I need at least 2 Hot Plug SAS drives and would like to have it in 1U. I run 2GB of memory. Dual power supplies would be nice, but are not absolutely necessary. I'd rather not go with a non-hot plug solution, but may have to consider it. I will probably buy 10-15 servers soon and would like them to be identical. I'd prefer buying a name-brand.