I have a couple of Dell 1950s, and in one of them I have 2x Seagate 15K.5s that I purchased through Dell. I also have a spare sitting in my rack (also from Dell) in case one goes bad.
I was going to repurpose one of my other 1950s and get two more 15K.5s for it, but I wasn't planning on buying them through Dell (rip off?). That way I could still keep the same spare drive around in case a drive went bad in that system as well.
While I was talking to my Dell rep recently about purchasing another system, their hardware tech said you can't use non-Dell drives alongside Dell drives in the same RAID array because of the different firmware between them.
Does anyone know if this is true? Does anyone have experience using drives from Dell in conjunction with the same model drives from a third-party retailer?
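For what it's worth, you can compare the firmware revision on the Dell-supplied drives against a retail sample before trusting them in the same array. A rough sketch, assuming smartmontools is installed and the two drives show up as /dev/sda and /dev/sdb:
Code:
smartctl -i /dev/sda
smartctl -i /dev/sdb
The -i output includes the model and firmware/revision strings, so you can see at a glance whether the retail drive carries different firmware from the Dell-branded ones. Behind a PERC RAID controller you may need smartctl's -d megaraid,N option to reach the physical disks.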
I have two domains set up as virtual hosts on the same IP address.
I am getting a certificate error for the second domain when I try to check email (using MS Outlook). I can't permanently "accept" the certificate; it complains again and again. The certificate I created and self-signed is for imap.domain1.com, but the second mail server name is imap.domain2.com, so it complains.
How do I set separate email certificates for two domains? Is it possible at all?
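One workaround that doesn't depend on your IMAP server supporting SNI is to issue a single self-signed certificate whose subjectAltName covers both host names. A rough sketch, assuming OpenSSL 1.1.1 or newer and that the two names Outlook connects to are imap.domain1.com and imap.domain2.com:
Code:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/ssl/private/imap.key -out /etc/ssl/certs/imap.crt \
  -subj "/CN=imap.domain1.com" \
  -addext "subjectAltName=DNS:imap.domain1.com,DNS:imap.domain2.com"
Point whichever IMAP daemon you run (Dovecot, Courier, etc.) at that key/cert pair and both names will validate against the same certificate. If your server does support SNI, you can instead serve a separate certificate per host name.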
Depending on where you are on my site (documents pages, training, main root, etc.), you'll get a different background, footer, header and the like. Now I was thinking: is there a way to have different error pages for different parts of a site, depending on where you are? Right now it's an intranet site plus a modded Snitz forum. What is the code, where does it go, and in which Apache conf file(s) does it belong?
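Apache lets you scope ErrorDocument directives per directory, so each area of the site can have its own error pages. A minimal sketch, with placeholder paths you would swap for your own, placed in httpd.conf or an included vhost conf:
Code:
<Directory "/var/www/intranet/documents">
    ErrorDocument 404 /documents/errors/404.html
    ErrorDocument 500 /documents/errors/500.html
</Directory>

<Directory "/var/www/intranet/forum">
    ErrorDocument 404 /forum/errors/404.html
</Directory>
The same ErrorDocument lines can also live in a .htaccess file inside each directory, provided AllowOverride permits it, which saves touching the main conf for every tweak.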
I know there are a lot of experienced hardware guys on here, so I wanted some input on 1.5TB drives. Are they reliable enough to be used in non-mission critical storage servers? 99% of what we do is OEM (Dell) equipment, so I don't test raw hardware much these days.
I've read a lot of negative things about Seagate lately. Can anyone chime in with specific models they've had positive or negative experiences with, from any vendor? Reading some reviews of the WD 1.5TB Caviar Black drives, there seem to be some weird issues with them going into a recovery cycle.
I have 2 servers connected through a private network. I want to remotely mount a folder from one of the servers on the other, but when I do I get a permission error.
On the server:
1. iptables is stopped
2. exportfs shows the export fine
3. Entries in hosts.allow and /etc/hosts have been made
4. NFS is running fine
On the client:
1. Restarting NFS shows some errors
Quote:
service nfs restart
Shutting down NFS mountd: [FAILED]
Shutting down NFS daemon: [FAILED]
Shutting down NFS services: [FAILED]
Starting NFS services: [ OK ]
Starting NFS quotas: Cannot register service: RPC: Unable to receive; errno = Connection refused
rpc.rquotad: unable to register (RQUOTAPROG, RQUOTAVERS, udp). [FAILED]
2. The hosts.allow entry has been made
3. iptables is stopped
4. When I mount, I get this error:
Quote:
mount -v 10.252.5.34:/mtest mtest
mount: no type was given - I'll assume nfs because of the colon
mount: trying 10.252.5.34 prog 100003 vers 3 prot tcp port 2049
mount: trying 10.252.5.34 prog 100005 vers 3 prot udp port 901
mount: 10.252.5.34:/mtest failed, reason given by server: Permission denied
However, I read somewhere that NFS only needs to be running on the server, not necessarily on the client. But I still get this error.
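"Permission denied" from the server almost always means the export doesn't match the client's address. A minimal sketch of what the server side should look like, assuming the client sits on 10.252.5.0/24 and /mtest is the directory being exported (adjust to your network):
Code:
# /etc/exports on the server
/mtest  10.252.5.0/24(rw,sync,no_subtree_check)

# Re-read the exports table and verify what is advertised
exportfs -ra
showmount -e localhost
If showmount -e lists /mtest for the client's address, the mount should succeed. The rquotad/restart failures on the client are a separate issue; a plain NFS mount only needs portmap and the NFS client pieces on that side, not the full server.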
What is the best way to find out which filesystem and hard drive drivers you can remove from the kernel? Obviously I need ext2/ext3, but how do you find out which disk driver you actually need?
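One way to narrow it down is to check which filesystems are actually mounted and which driver the kernel has bound to your disk controller; a quick sketch using standard utilities:
Code:
# Filesystems currently in use -- anything not listed is a removal candidate
cat /proc/mounts

# Kernel modules currently loaded, including storage drivers
lsmod

# Which driver is bound to each storage controller
lspci -k | grep -A 3 -i 'sata\|ide\|raid\|scsi'
Whatever driver lspci -k reports as "in use" for your controller is the one to keep; most of the remaining IDE/SATA/SCSI drivers and unused filesystems can be dropped from the kernel config.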
I was just wondering what the real-life experience or difference is when using Western Digital Caviar series drives (Green, Blue or Black) in a DC environment vs. the RE series, which is supposed to be for enterprise use.
On the WDC website the Caviar series is targeted at desktops, not servers, yet a lot of servers and providers use them. If you have servers, you're supposed to use the RE series. I'm excluding Raptors, as I only want to compare medium-performance disks here.
I'm building a couple of VPS host servers for a client.
Each server has to host 20 VPSes, and each will have 4 cores and 32GB of RAM, so CPU and RAM should be just fine; my question now is hard drives. The company owns the machines, but not the drives yet.
I searched a lot on your forums but found nothing relating to VPS. I'm basically a DBA IRL, so I know hard drives when it comes to databases, but it's completely different for VPS.
According to my boss, each VPS will run a LAMP stack (a separate DB cluster is out of the question for some reason).
First, RAID 1 is a must. There is room for 2x 3.5" drives; I might be able to change the backplane for 4x 2.5", but I'm not sure...
I've come up with several options:
2x SATA 7.2k => about $140
2x SATA 10k (VelociRaptor) => about $500
2x SAS 10k with PCIe controller => about $850
2x SAS 15k with PCIe controller => about $1000
They need at least 300GB storage.
But my problem is that the servers do not have SAS onboard, so I'd need a controller, and in my case the cheapest workable solution is best.
I'm just not sure that SATA 7.2k will hold up under the load of 20 full VPSes.
Is it worth going with SAS anyway, or should SATA be just fine? And if SATA, is it better to use plain old 7.2k drives or 10k ones?
That's a lot of text for not much: What is best for VPS: SATA 7.2k, SATA 10k or SAS 10k?
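If you can get sample drives, it may be worth benchmarking random I/O (which is what 20 LAMP VPSes mostly generate) rather than guessing. A rough sketch using fio, assuming it's installed and /mnt/test sits on the drive under test:
Code:
fio --name=vps-mix --directory=/mnt/test --size=2G \
    --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --direct=1 \
    --numjobs=4 --iodepth=16 --runtime=60 --time_based --group_reporting
Compare the IOPS across the candidates: a 7.2k SATA disk typically delivers on the order of 75-100 random IOPS, 10k drives roughly 125-150, and 15k SAS around 175-200, which is the figure to weigh against 20 VPSes sharing a RAID 1 pair.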
I'm trying to mount a Western Digital external HD on a Fedora Core server. Does anyone know what I should put in the filesystem field in /etc/fstab? I tried tmpfs, but I couldn't see the files I had placed there when I moved the drive to a new server. I also tried vfat, but that wasn't compatible and couldn't read the ext3 filesystem. Does anyone know what else I could try?
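Rather than guessing the type, you can ask what is actually on the partition and put that in fstab. A quick sketch, assuming the external drive shows up as /dev/sdc1 (yours may differ; check dmesg after plugging it in):
Code:
# Identify the filesystem on the external drive's partition
blkid /dev/sdc1

# Example fstab line if blkid reports ext3 (adjust device and mount point)
/dev/sdc1   /mnt/external   ext3   defaults   0 0
Note that tmpfs is a RAM-backed filesystem, so anything written to a tmpfs mount was never stored on the external disk at all, which is why the files vanished when the drive moved.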
I tried to mount a second hard disk, but I got the error "mount: you must specify the filesystem type". Please help; see below.
Code:
[root@s ~]# fdisk -l

Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       38913   312464250   8e  Linux LVM

Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14       19457   156183930   8e  Linux LVM

[root@s ~]# mkdir /mnt/hd2
mkdir: cannot create directory `/mnt/hd2': File exists
[root@s ~]# mount /dev/sdb2 /mnt/hd2
mount: you must specify the filesystem type
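The partition type 8e means /dev/sdb2 is an LVM physical volume rather than a plain filesystem, so mount has nothing to mount directly. A sketch of how to get at the data; the volume group and logical volume names below are placeholders for whatever lvscan actually reports:
Code:
# Scan for LVM volumes on the newly attached disk
pvscan
vgscan
lvscan

# Activate the volume group found on /dev/sdb2, then mount its logical volume
vgchange -ay VolGroup01
mount /dev/VolGroup01/LogVol00 /mnt/hd2
One caveat: if the second disk came from another CentOS/RHEL install, its volume group may have the same name as the running system's (often VolGroup00), in which case you'll need vgrename to rename one of them before you can activate it.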
I currently have a VPS. I have installed cPanel/WHM + CSF Firewall.
Everything is fine and all the ports are closed except for the ones I need.
I currently have some issues I need to fix, but Google isn't helping.
Quote:
Check /tmp is mounted as a filesystem
WARNING: /tmp should be mounted as a separate filesystem with the noexec,nosuid options set
I tried googling this and there was a cPanel script but I do not have permission to run it. So does anyone mind explaining it to me one step at a time?
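The general idea, independent of the cPanel script, is to create a dedicated filesystem, mount it over /tmp with the restrictive options, and make it permanent in fstab. A hedged sketch assuming you have root and about 1GB to spare; /usr/tmpDSK is just a conventional file name, nothing magic:
Code:
# Create a 1GB file to back the /tmp filesystem and format it
dd if=/dev/zero of=/usr/tmpDSK bs=1M count=1024
mkfs.ext3 -F /usr/tmpDSK

# Mount it over /tmp with the hardened options
mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp
chmod 1777 /tmp

# Make it persistent across reboots
echo "/usr/tmpDSK  /tmp  ext3  loop,noexec,nosuid,rw  0 0" >> /etc/fstab
Anything already sitting in /tmp is hidden by the new mount, so it's safest to do this during a quiet period or copy the old contents across first, and some services (MySQL sockets, for example) may need a restart afterwards.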
Quote:
You should consider adding ini_set to the disable_functions in the PHP configuration as this setting allows PHP scripts to override global security and performance settings for PHP scripts. Adding ini_set can break PHP scripts and commenting out any use of ini_set in such scripts is advised
I have disabled this in php.ini, but I don't know why it still says I have to fix it.
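Two common reasons the warning persists are editing a php.ini that Apache doesn't actually load, or not restarting Apache afterwards. A sketch of the directive and the checks; the values shown are typical for a cPanel box, not guaranteed:
Code:
# In the php.ini that Apache loads
disable_functions = ini_set,exec,passthru,shell_exec,system

# Confirm which php.ini is being read and what the effective value is
php -i | grep "Loaded Configuration File"
php -i | grep disable_functions

# Restart Apache so mod_php picks up the change
service httpd restart
Bear in mind the CLI can read a different php.ini than mod_php, so a phpinfo() page served through Apache is the authoritative place to confirm the setting took effect.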
I have a few folders mounted over NFS, but sometimes they drop off the network when the NFS server gets rebooted or whatnot. How do I make sure a share is automatically remounted when the folder goes blank? Is there a way to do this?
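One common approach is to hand the mounts to autofs, which mounts shares on demand and quietly re-establishes them after the server comes back. A minimal sketch; the map name, mount point and export path are placeholders for your own:
Code:
# /etc/auto.master -- mount NFS shares on demand under /data
/data   /etc/auto.nfs   --timeout=60

# /etc/auto.nfs
projects  -rw,soft,intr  nfsserver:/export/projects

# Reload autofs after editing the maps
service autofs restart
With autofs the directory only exists while it's in use, so a rebooted NFS server simply means the next access re-triggers the mount instead of leaving you staring at a stale, empty folder.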
I want to use dd to fully duplicate a hard drive. However, the drive is 120GB and also has a folder on it that is a 2TB mount. If I use dd, will it also try to copy that 2TB onto my second HD?
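It won't: dd reads the raw block device, and a mounted folder (an NFS share, a second disk, whatever it is) lives on a different device, so it is never pulled in. A sketch, assuming the 120GB source is /dev/sda and the destination is /dev/sdb; double-check both with fdisk -l first, since swapping them destroys the source:
Code:
# Copy the 120GB disk block for block; the mount point inside it is just an
# empty directory on sda, and the 2TB filesystem behind it is untouched
dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync
To be extra safe, unmount the 2TB folder (or boot from a rescue CD) before cloning so nothing is writing to the source disk mid-copy.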
I'm running CentOS 4.4 and I just installed another HD. It seems like every time I reboot the server, my mount is gone. This is what I have in my fstab:
Do the old RLX Blade servers use 'mini' hard drives? I can't find an answer anywhere. I seem to recall that they use smaller 2.5" drives. Is this the case?
And, if so, do they make "good" drives in that size worthy of being in a server? Are they essentially just laptop drives?
My server has been reformatted and it has two drives. I have my backup on the second drive. What command do I use to list the drives, and how do I mount the second one?
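A sketch of the usual sequence; the device name is an example, as your backup disk may not be /dev/sdb1:
Code:
# List the disks and partitions the kernel can see
fdisk -l

# Create a mount point and mount the second drive's partition
mkdir -p /mnt/backup
mount /dev/sdb1 /mnt/backup
ls /mnt/backup
If mount complains about the filesystem type, run blkid /dev/sdb1 to see what's on the partition and pass it with -t.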
Does anyone have experience with SSDs in a server environment? I've now seen a few offers with SSDs (Intel) and am wondering if the speed difference is noticeable.
Are they worth it? From what I have been reading, they are superior in reliability but have issues with limited write cycles.