Has anyone here tried the Dell/EqualLogic PS5000 series iSCSI SANs? Any opinions on them? I'm looking to use two in a VMware ESX cluster.
Also, does anyone have any more specific pricing than the 'starting at $19,000' that's been bandied around the internet? I have an EqualLogic salesperson visiting my offices in a week but would ideally like to get some rough pricing before then (e.g. the price difference between SAS and SATA). Trying to get prices out of Dell hasn't been easy so far.
I am trying to learn more about storage solutions over Ethernet. There are iSCSI (Internet SCSI) and AoE (ATA over Ethernet). AoE is certainly the cheaper solution. Besides pricing, what are the advantages (and disadvantages) of each?
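For anyone who hasn't used them, this is roughly what attaching a volume looks like on a Linux client with each protocol (a minimal sketch assuming open-iscsi and aoetools; the IP and IQN below are just placeholders):

    # iSCSI: discover and log in to a target; it shows up as a new /dev/sd* device
    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node -T iqn.2009-01.com.example:storage.lun1 -p 192.168.0.10 --login

    # AoE: load the driver and discover shelves; devices show up under /dev/etherd/
    modprobe aoe
    aoe-discover

iSCSI runs over IP, so it routes across subnets and supports things like CHAP authentication; AoE is a bare Ethernet protocol, so it is simpler but confined to the local segment.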
We are setting up a couple of servers for a client's application, which will have 2 x 1U application servers running XenServer 5.5 and one NFS or iSCSI server (Dell 2950, RAID 10) running either plain Linux with NFS or Openfiler with iSCSI.
Would there be any benefit in running iSCSI vs. NFS, considering there is no iSCSI accelerator and this is just a regular server?
NFS is simple, but is its performance really that much different from iSCSI in this scenario?
I'm colo'ing, so this wouldn't be shared with anyone else.
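To make the comparison concrete, the two options would look roughly like this on the storage box (a sketch only; the path, network, and IQN are placeholders, and the iSCSI side assumes an IET-style config as used by Openfiler):

    # Option 1: plain Linux NFS, one line in /etc/exports
    /srv/vm  192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)

    # Option 2: iSCSI Enterprise Target, /etc/ietd.conf
    Target iqn.2009-09.local.storage:vmstore
        Lun 0 Path=/dev/vg0/vmstore,Type=blockio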
I'm looking at using iSCSI with Virtuozzo Containers, and while the setup for the hardware nodes seems fairly straightforward, can anyone comment on their own experiences using iSCSI with VZ?
Did you find your experience reliable? Slow? Any information will be useful.
Also, did you have a hard time setting things up? Or did VZ just manage everything as if it was still on a single server?
Thinking of putting together an iSCSI box with 14 SATA II 750s, a 3ware SATA controller (RAID 6), and an Intel quad-port gigabit card ganged together for 4Gb of transfer, tying it all together with Open-E iSCSI or the DSS module.
Anyone done something similar with good (or bad) results? Thinking of using this primarily for hosting web sites, as well as some storage for a mail server and some databases. The servers run RAID 1 and use the MS iSCSI initiator. I have a VLAN set up just for iSCSI traffic on my 48-port gigabit switch.
Are TOE cards better to have, or is the MS initiator good enough? I plan on using the second NIC on the servers solely for iSCSI traffic.
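For reference, the plan was to attach the target from each server with the command-line side of the MS initiator, roughly like this (commands quoted from memory, and the portal IP and IQN are made up):

    iscsicli QAddTargetPortal 10.0.10.5
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2009-01.com.open-e:dss.target0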
I set up an iSCSI target and two iSCSI initiators, but I am having some trouble sharing the storage.
From the first initiator I partitioned the drive, creating a 1TB partition, and mounted it without any issues; it showed up in df -h.
Then I went to mount the iSCSI target on the second initiator. It mounted fine, and the partition I made on the first initiator was recognized on this one; however, when I add files from either initiator, the changes aren't recognized on the other. Any ideas why this might be?
I put 1GB of files on the volume from one initiator, ran df -h on the other, and it still showed the same amount of free space.
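For reference, this is roughly what was done on each initiator (the IP, IQN, and ext3 filesystem are just stand-ins for my actual setup; the fdisk/mkfs steps were run only from the first box):

    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    iscsiadm -m node -T iqn.2009-06.com.example:store.disk1 -p 192.168.1.50 --login
    fdisk /dev/sdb                # create the 1TB partition (first initiator only)
    mkfs.ext3 /dev/sdb1           # (first initiator only)
    mount /dev/sdb1 /mnt/iscsi    # done on both initiators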
Wondering if anyone has any good recommendations for iSCSI products, either software or appliances, for the small-business market. We need good, low-cost SAN storage. I am looking at Nimbus and Open-E.
I'm thinking about using a centralized hosting solution in order to achieve better redundancy and performance while having more room to expand if necessary. To achieve this, I was thinking about implementing a storage server and using software to provide iSCSI target capabilities.
As the storage server, I was thinking about using an HP DL320s ( URL ) loaded with 12 147GB 15K RPM SAS HDs. I will run some tests to understand the real difference between RAID5 and RAID10 in terms of write speed. Also, I'm not sure if the controller provided with this server is good enough for reliable operation.
For switching, I will use an HP 2824 or 2848 Gigabit switch and use a port trunk to join both NIC controllers of the storage server.
As for the iSCSI target software, I still don't know which one to use. I think FalconStor would be a good bet; however, it seems to be a bit expensive. Any good alternatives?
This storage server would provide storage for about 10 "regular" hosting servers, which currently have dual 10K SATA drives (Raptors) in RAID1. I'm afraid the 2x Gigabit ports aren't enough, even considering that I will not have intensive sequential reads/writes, but rather random accesses.
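Regarding the port trunk mentioned above, the host side would be Linux NIC bonding, roughly like this (a sketch assuming a RHEL-style distro and 802.3ad/LACP on the HP switch; the address and interface names are examples):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.0.10.5
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (eth1 is the same apart from DEVICE)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes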
I'm currently connecting one of my servers to an iSCSI SAN but would like to hook up another server to that target as well. However, this doesn't work with the NTFS filesystem, and I couldn't really find any Windows solutions for it. Does anyone have experience with this?
I'm running a Dell PowerConnect 6224 with firmware 2.2.0.3 for a customer.
After upgrading to firmware 2.2.0.3 from version 2.0.0.12 and starting to use iSCSI with link aggregation groups, the switch began to reboot every 2-3 days. I have now disabled LAG, and the issue still happens.
Could it be a firmware problem? Honestly, with firmware 2.0.0.12 it was rock solid, but without any advanced usage such as VLANs, link aggregation, or IP routing...
I'm not sure if my customer would be willing to pay more and choose a more stable switch such as a Cisco Catalyst...
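For what it's worth, after each reboot I've just been grabbing the basics from the switch CLI before opening a case (commands quoted from memory, so treat them as approximate):

    show version
    show logging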