I'm currently connecting one of my servers to an iSCSI SAN but would like to hook a second server up to the same target as well. However, this doesn't work with the NTFS filesystem, since NTFS isn't cluster-aware, and I couldn't really find any Windows solutions for it. Does anyone have experience with this?
I am trying to learn more about storage solutions over Ethernet. There are iSCSI (Internet SCSI) and AoE (ATA over Ethernet). AoE is certainly the cheaper solution, but besides pricing, what are the advantages (and disadvantages) of each?
We are setting up a couple of servers for a client's application: 2 x 1U application servers running XenServer 5.5, and one NFS or iSCSI server (Dell 2950, RAID 10) running either plain Linux with NFS or Openfiler with iSCSI.
Would there be any benefit in running iSCSI vs. NFS, considering there is no iSCSI accelerator and this is just a regular server?
NFS is simple, but is its performance really that much different from iSCSI in this scenario?
I'm colo'ing, so this wouldn't be shared with anyone else.
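One crude way to settle it for this exact hardware is to benchmark both paths from one of the XenServer boxes before committing (a sketch; the mount points are examples, and oflag=direct bypasses the client page cache so the numbers reflect the storage path):

Code:
# Sequential 1GB write to an NFS mount and to a filesystem on the
# iSCSI LUN (run each a few times, and test reads as well):
dd if=/dev/zero of=/mnt/nfs-test/bench bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/iscsi-test/bench bs=1M count=1024 oflag=direct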
I'm looking at using iSCSI with Virtuozzo Containers, and while the setup for the hardware nodes seems fairly straightforward, can anyone comment on their own experiences using iSCSI with VZ?
Did you find it reliable? Slow? Any information will be useful.
Also, did you have a hard time setting things up, or did VZ just manage everything as if it were still on a single server?
Thinking of putting together an iSCSI box with 14 SATA II 750GB drives, a 3ware SATA controller (RAID 6), and an Intel quad-port gigabit card ganged together for 4Gb of transfer, tying it all together with Open-E iSCSI or the DSS module.
Anyone done something similar with good (or bad) results? Thinking of using this primarily for hosting web sites, plus some storage for a mail server and some databases. The servers run RAID 1 and use the MS iSCSI initiator. I have a VLAN set up just for iSCSI traffic on my 48-port gigabit switch.
Are TOE cards better to have, or is the MS initiator good enough? I plan on using the second NIC on the servers solely for iSCSI transfer.
I currently have a VPS. I have installed cPanel/WHM + CSF Firewall.
Everything is fine and all the ports are closed except for the ones I need.
I currently have some issues I need to fix, but Google isn't helping.
Quote:
Check /tmp is mounted as a filesystem
WARNING: /tmp should be mounted as a separate filesystem with the noexec,nosuid options set
I tried googling this and found there was a cPanel script, but I do not have permission to run it. So does anyone mind explaining it to me one step at a time?
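For reference, the cPanel script in question (/scripts/securetmp) can be approximated by hand. This is only a rough sketch, assuming root access, about 2GB free under /usr, and no existing separate /tmp partition:

Code:
# Create a loopback file, format it, and mount it over /tmp with
# noexec,nosuid (sketch; size and path are examples):
dd if=/dev/zero of=/usr/tmpDSK bs=1M count=2048
mkfs.ext3 -F /usr/tmpDSK
mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp
chmod 1777 /tmp
# Add a matching /etc/fstab line so it survives a reboot:
#   /usr/tmpDSK  /tmp  ext3  loop,noexec,nosuid,rw  0 0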
Quote:
You should consider adding ini_set to the disable_functions in the PHP configuration as this setting allows PHP scripts to override global security and performance settings for PHP scripts. Adding ini_set can break PHP scripts and commenting out any use of ini_set in such scripts is advised
I have disabled this in php.ini, but I do not know why it still says I have to fix it.
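In case it's a matter of editing the wrong php.ini or not restarting afterwards, here is a quick sanity check (a sketch; the restart command depends on the distro):

Code:
# Confirm which php.ini is actually loaded:
php -i | grep 'Configuration File'
# That file should contain a line like:
#   disable_functions = ini_set
# Then restart the web server so the change takes effect:
service httpd restart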
I have a server with PacificRack. When I visit my website it says "Read-only file system", and my SSH sessions die with read-only file system errors.
It happens about once a month, and the bad thing is that I'm asleep when it happens, so I don't know my server is in that state.
Even though support fixes the problem within 15 minutes of my ticket, the server is down for hours because I don't know it's down until then. I asked support why it happens and they never tell me; they just tell me to read the logs, but I'm not technical enough to understand Linux logs.
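For what it's worth, the kernel normally remounts a filesystem read-only after an I/O or journal error, and it logs why; a couple of commands will usually show the cause (a sketch; log paths vary by distro):

Code:
# Look for the remount and any disk errors in the kernel log:
dmesg | grep -iE 'read-only|i/o error|journal'
grep -i 'read-only' /var/log/messages
# Confirm the current state of the root mount:
grep ' / ' /proc/mounts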
Has anyone here tried the Dell/EqualLogic PS5000 series iSCSI SANs? Any opinions on them? I'm looking to use two in a VMware ESX cluster.
Also, does anyone have any more specific pricing than the 'starting at $19,000' that's been bandied around the internet? I have an EqualLogic salesperson visiting my offices in a week but would ideally like to get some rough pricing before then (e.g. the price difference between SAS and SATA). Trying to get prices out of Dell hasn't been easy so far.
I set up an iSCSI target and two iSCSI initiators, but I am having some trouble sharing the storage.
I partitioned the drive from the first initiator, creating a 1TB partition, mounted it without any issues, and it showed up in df -h.
Then I mounted the iSCSI target on the second initiator. It mounted fine, and the partition I made on the first initiator was recognized there, but when I add files on either one, the changes aren't visible on the other initiator. Any ideas why this might be?
I put 1GB of files on one initiator, ran df -h on the other, and it still showed the same amount of free space.
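If it helps with debugging: the data does reach the disk; each initiator just caches ext3 metadata independently, so the other node only sees changes after a fresh mount. A quick way to confirm (a sketch; the device and mount point are examples, and note that writing from both nodes to a non-cluster filesystem like ext3 is unsafe):

Code:
# On the second initiator, remount to re-read the on-disk metadata:
umount /mnt/iscsi
mount /dev/sdb1 /mnt/iscsi
df -h /mnt/iscsi   # the files written by the other node now appear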
Does anyone have any good recommendations for iSCSI products, either software or appliances, for the small-business market? We need good low-cost SAN storage. I am looking at Nimbus and Open-E.
We're considering deploying a large server that will have 8x 500GB drives in a RAID-10 config. I intend to use a 3ware 9650SE w/ BBU along with A/B power to each of the PSUs.
My question is: since this will result in a 2TB array/partition, what do you think the fsck time would be in the event of a crash (kernel panic, etc.; I expect a power outage will be very, very rare if it happens at all)? In my experience a RAID BBU drops it significantly, sometimes to the point that no manual fsck is required, but if a manual fsck is needed, shouldn't the BBU provide more consistent data (fewer errors) and therefore a much shorter fsck, maybe just a journal recovery?
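For concreteness, with ext3 the common post-crash case is just a journal replay at mount time; the expensive case is a full forced pass over the whole 2TB (a sketch; /dev/sda1 is an example device):

Code:
# Journal replay plus trivial fixes, the usual post-crash path:
fsck.ext3 -p /dev/sda1
# The slow case: a full forced check of the entire filesystem:
fsck.ext3 -f /dev/sda1
# tune2fs shows (and can adjust) when a full check is forced:
tune2fs -l /dev/sda1 | grep -iE 'mount count|check'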
I'm thinking about using a centralized storage solution, in order to achieve better redundancy and performance while having more room to expand if necessary. To achieve this, I was thinking about implementing a storage server and using software to provide iSCSI target capabilities.
As the storage server, I was thinking about using an HP DL320s ( URL ) loaded with 12 147GB 15K RPM SAS HDs. I will run some tests to understand the real difference between RAID5 and RAID10 in terms of write speed. Also, I'm not sure if the controller provided with this server is good enough for reliable operation.
For switching, I will use an HP 2824 or 2848 Gigabit switch, and use a port trunk to join both NIC ports of the storage server.
As for iSCSI target software, I still don't know which one to use. I think FalconStor would be a good bet, but it seems a bit expensive. Any good alternatives?
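One free alternative worth testing is the iSCSI Enterprise Target (IET) on plain Linux; the config for a single LUN is only a few lines (a sketch; the IQN and backing device are examples):

Code:
# /etc/ietd.conf
Target iqn.2008-01.com.example:storage.disk1
    Lun 0 Path=/dev/sdb,Type=fileio
    MaxConnections 1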
This storage server would be used to provide storage for about 10 "regular" hosting servers, which currently have dual 10K SATA drives (Raptors) in RAID1. I'm afraid the 2x Gigabit ports aren't enough, even considering that I will not have intensive sequential reads/writes, but rather random accesses.
Installing Default Quota Databases...
/aquota.user... /quota.user... /boot/aquota.user... /boot/quota.user... /backups/aquota.user... /backups/quota.user... Done
Quotas are now on
Updating Quota Files...
quotacheck: Can't find filesystem to check or filesystem not mounted with quota option.
quotacheck: Can't find filesystem to check or filesystem not mounted with quota option.
...Done

AND

root@srv01 [/www/logs]# /scripts/initquotas
Quotas are now on
Updating Quota Files...
quotacheck: Can't find filesystem to check or filesystem not mounted with quota option.
quotacheck: Can't find filesystem to check or filesystem not mounted with quota option.
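That quotacheck error usually means no filesystem is mounted with the usrquota option. A common fix is roughly this (a sketch, assuming the accounts live on /; the fstab line is an example):

Code:
# Add usrquota to the relevant /etc/fstab entry, e.g.:
#   /dev/sda1  /  ext3  defaults,usrquota  1 1
# then remount and let cPanel rebuild the quota files:
mount -o remount /
/scripts/fixquotas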
I am in the process of configuring my VolumeDrive dedicated server and would like some input on ideal settings for my filesystem.
I'd appreciate suggestions and explanations of what the options do.
Here is my system info:
Linux volumedrive.com 2.6.18-53.el5 #1 SMP Mon Nov 12 02:14:55 EST 2007 x86_64 x86_64 x86_64 GNU/Linux
CentOS release 5 (Final)
AMD Sempron64 3000+, 1GB RAM
HDD: Location: SCSI device B, Cylinders: 60801, Size: 465.76 GB, Model: ATA ST3500320AS
(Seagate Barracuda 7200.11 ST3500320AS 500GB 7200 RPM 32MB cache SATA 3.0Gb/s hard drive)
Parameters I can change:
EXT3 File System Configuration Options: block size, fragment size, bytes per inode, reserved blocks, journal file size
Edit IDE Parameters:
Transfer mode: Default mode / Disable IORDY / PIO mode 1,2,3,4 / Multiword DMA 0,1,2 / Ultra DMA 0,1,2
Using DMA: On/Off
Sector count: 256
Read-lookahead: On/Off
Write caching: On/Off
Interrupt unmask: On/Off
Keep settings over reset: On/Off
Keep features over reset: On/Off
Read only: On/Off
Reprogram best PIO: On/Off
Standby timeout: 0
32-bit I/O support: Disable / Enable / Enable with special sync sequence
Sector count for multiple sector I/O: Disable / 2 / 4 / 8 / 16 / 32
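If it helps frame suggestions: those panel options map roughly onto mke2fs and hdparm, and for a single 7200rpm SATA disk the defaults are usually sensible (a sketch; /dev/sda is an example device):

Code:
# ext3 with 4K blocks, one inode per 16KB, 5% reserved blocks:
mke2fs -j -b 4096 -i 16384 -m 5 /dev/sda1
# Enable DMA, read-lookahead, write caching, IRQ unmask, 32-bit I/O:
hdparm -d1 -A1 -W1 -u1 -c1 /dev/sda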
I had to do a file-system-level backup because of the size of my database.
I shut down the postmaster and tar'd the files.
I recently reloaded my OS and am now attempting to restore the backups. I first shut down the postmaster, restored the backups to the data folder, and restarted the postmaster.
I then su'd from root to the postgres user:
Code:
[postgres@austin1 pgsql]$ psql service_2_3
psql: FATAL: database "service_2_3" does not exist
DETAIL: The database subdirectory "base/16385" is missing.
The backups contained a database named service_2_3, but after restoring them it doesn't seem to be available.
My data directory is /var/pgsql/data.
I also ran:
Code:
[postgres@austin1 pgsql]$ ls -lahR /var/pgsql/ | grep 16385
but that yielded nothing.
Have I done something wrong while backing up? I don't think so, since I followed the Postgres manual, but the administrators have not been very thorough with how they've set the system up for me.
Please let me know how I can resolve this issue if you have the time.
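For reference, the file-system-level procedure from the Postgres manual is roughly this (a sketch; the tarball path is an example, and the archive must capture the entire data directory in one piece, including base/, global/ and pg_xlog/, while the server is stopped):

Code:
# Backup:
pg_ctl stop -D /var/pgsql/data
tar czf /backups/pgdata.tar.gz /var/pgsql/data
# Restore (onto the same Postgres major version):
pg_ctl stop -D /var/pgsql/data
tar xzf /backups/pgdata.tar.gz -C /
pg_ctl start -D /var/pgsql/data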
I'm running a Dell PowerConnect 6224 with firmware 2.2.0.3 for a customer.
After upgrading from firmware 2.0.0.12 to 2.2.0.3 and starting to use iSCSI with link aggregation groups, the switch began to reboot every 2-3 days. I have now disabled LAG, and the issue still happens.
Could it be a firmware problem? With firmware 2.0.0.12 it was solid as a rock, but that was without advanced usage such as VLANs, link aggregation, or IP routing...
I'm not sure if my customer would be willing to pay more for a more stable switch such as a Cisco Catalyst...
Installing package webmin-1.350-1 needs 1 inodes on the / filesystem
Hmm, this is the first time I've encountered this error when installing Webmin and I have no idea what it means. My other servers don't output this error.
What does it mean when it says it needs 1 inode on the / filesystem?
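For context, that message is RPM's disk-space check complaining about inodes rather than bytes: the / filesystem is effectively out of free inodes, so new files can't be created even if df -h shows free space. Inode usage is visible with df:

Code:
# IUse% at 100% means no new files can be created on that filesystem:
df -i /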
I have a WordPress-based website that is currently doing about 500,000 uniques a month and 5GB of traffic a day. It uses a WordPress plugin for PHP caching but it is still pretty damn heavy on the CPU. I am looking for a VPS, preferably in a clustered type of environment, so that I don't have to see them reboot the (single) physical server or go down for physical maintenance of any kind.
Also, this website looks like it will continue to grow pretty fast, so a provider that can handle this kind of growth is a must.
I've narrowed down my VPS search to one of these providers.
SolarVPS JupiterLX:
832MB total SLM RAM, 30GB storage, 600GB bandwidth, with cPanel, 100Mbit uplink, fully managed. Not sure about the datacentre; the site says Euroconnex. Around £45 per month.
or
Clustered.net:
512MB RAM, 512MB swap. The server has 2 x quad-core Intel Xeon "Clovertown" @ 2.33GHz (18.64GHz combined). Max of 15 servers per node. 25GB storage, 300GB bandwidth, with cPanel, 1Gbps uplink, 1-hour hardware replacement, fully managed. Looks like the datacentre is Redbus Interhouse in the UK. £55 per month.
Which is going to be the better-quality provider? Clustered offers a 100% uptime guarantee, and for every hour a server is down they refund you a day's hosting.
Clustered seems to offer tape backups; SolarVPS offers off-site hosting, which might come in handy as well.
I searched the forum for clustered.net but didn't find many reviews.
From my first impressions I think Clustered is my best bet; although I will get less in the way of storage and bandwidth, their website makes me think I'll get a better-quality service. Their website talks about how redundant everything is. Hopefully people can back up what I'm thinking.
Does anyone know of a good, reliable, and redundant method of organizing clustered storage? I know that IBM has GPFS; has anyone actually used it? Do they charge a crapload for it?
I switched my VPS to clustered.net about 6 months ago and I thought I'd share a few of my experiences.
Clustered.net has their servers in London (somewhere in Canary Wharf; I've forgotten the name of the datacentre). I signed up for their smallest VPS, with 512MB/25GB/450GB.
Connection speeds to the UK and Europe are great, and as far as I can see the US is no problem either. I'm in the UK and most of my traffic is from the UK, so I wanted a UK provider.
I've got a 20M cable line at home and the server is always able to give me the full 20 meg. I'd say perfect. I don't have any really heavy-traffic sites, though; I've got a few sites on there, but mostly wanted my own spam solution and a personal server for data transfer etc.
Not only is the speed pretty good, the server has been up and running pretty much all the time, with almost no downtime at all. Really solid performance and reliability. Once we had an outage of a few minutes and I got money off my next bill.
They offer cPanel, which I really wanted to have. We all know that cPanel offers great versatility, and ASSP as a spam solution is fantastic. cPanel costs a fiver more per month.
Their website is a bit wonky, which shouldn't put you off; the support is awesome! I've done stuff like locking myself out of the VPS (set the firewall a bit too tight) and I've always had a helpful reply to my query within 10 minutes. Really helpful and quick.
Their support is truly outstanding.
I've been with Interhost before and the difference is like night and day. At Interhost I got virtually no support at all; clustered.net is always helpful, always quick. Interhost was unstable with poor speeds; clustered.net is the opposite.
Their services are not exactly at the lower end of the cost scale, but if you're looking for a reliable, fast VPS in Europe with excellent service, I can only recommend them.
I am currently in the process of building an online shop. So far we have about 200,000 products in the database, and running a search takes up to 10 seconds to display the results, which is not good enough. The setup at the moment is a cPanel VPS with ZipServers, 512MB RAM.
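Before blaming the hardware it may be worth checking whether the search query can use an index at all; a leading-wildcard LIKE never can, while a full-text index often cuts a 10-second scan to milliseconds (a sketch; shopdb, products, and the column names are examples, and MySQL's full-text indexes require MyISAM tables on MySQL of this era):

Code:
# See whether the current query does a full table scan:
mysql shopdb -e "EXPLAIN SELECT * FROM products WHERE name LIKE '%widget%'"
# Add a full-text index and query through it instead:
mysql shopdb -e "ALTER TABLE products ADD FULLTEXT idx_ft (name, description)"
mysql shopdb -e "SELECT id, name FROM products WHERE MATCH(name, description) AGAINST ('widget')"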
Given this, I have started looking for new hosting. The site is built, but it cannot be launched until we have a good server, and we already have 150,000 AdWords adverts set up on pause ready to go, so the project needs to move fast.
Currently pondering between 1) a 1GB dedicated server with cPanel, or 2) Netfirms clustered hosting, which they claim is more powerful than dedicated.
At the moment I am tempted to go for the Netfirms hosting, as I have used them for other projects in the past and seem to get on OK with them. They claim that when I create a database it will be served by several SQL servers, which will make things much faster than the dedicated setup.
Does anyone have experience with clustered hosting, and is it better than dedicated?