It seems the more places we can put servers, the more places the boss wants them.
We're setting up an external network to test back into our network from geographically/carrier-diverse locations. We've got about 15 hosts up, but most are in the States; one is in London, one in Amsterdam, one in Frankfurt, and one in Hong Kong.
The current wish list of locations includes:
- S. Korea
- Australia (holy cow, bandwidth is expensive in Sydney! Is anyone charging less than $500 per Mbps?)
- Paris, France (we have one quote in, but it is pretty pricey)
I'm doing research and have submitted RFQs to companies in most of these locations, but I was hoping for personal recommendations of hosts you have used.
We have a few IIS servers that will act as front ends for our users to split off the load. Currently we have one IIS server with one SSL certificate. We are going to do round-robin DNS on the FQDN to span a few IIS servers, and I just want to make sure we won't run into any issues with the certificate. My plan was to generate the CSR and install the certificate on one server, then, once the trusted root is in place, export it and import it on all the other servers. Do you think this would be an issue at all? Is there a better way?
Additionally, the certificate is about to expire. I need a certificate with $1 million insurance; what do you think is the best deal (from a trusted source) going around? Should I get the 256-bit ones too, or have you seen any conflicts? Do they auto-negotiate down if the client can only support 128-bit?
I just set up a round-robin so that my website is always available even if one of the two servers goes down. It works like it should; however, I can't access the userdirs from server2. The userdirs are located on server1.
It's set up like this:
www1.domain.org is server1
www2.domain.org is server2
www.domain.org is the round-robin
www.domain.org has two A records, one pointing to each server's IP address.
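The layout above can be sketched as a BIND-style zone fragment (the IP addresses are placeholders):

```
www1    IN  A   192.0.2.10      ; server1
www2    IN  A   192.0.2.20      ; server2
www     IN  A   192.0.2.10      ; round-robin: two A records on one name
www     IN  A   192.0.2.20
```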
On server1 I have two userdirs which are accessible through www1.domain.org/~user. They should also be reachable on www2, because if they aren't, they won't be accessible half the time due to the round-robin.
I therefore added a .htaccess file in www2's document root with the following info:
So when I go to www2.domain.org/~user1/, I should automatically be transferred to server1... but all I get is a 404 error page. It works perfectly when accessed on www1. I don't see what I'm doing wrong. I thought the UserDir setting in Apache might be causing trouble, but that is turned off on server2, so that shouldn't be the problem. Does anyone here have any idea how to access the userdirs via server2?
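The actual .htaccess contents weren't included above, so as a hypothetical sketch, a mod_rewrite rule like this on server2 would bounce any /~user request back to server1 (this assumes mod_rewrite is loaded and AllowOverride permits FileInfo; in per-directory context the leading slash is stripped, hence the pattern):

```apache
# Hypothetical .htaccess in server2's document root
RewriteEngine On
# Redirect /~user/anything to the same path on www1
RewriteRule ^~(.*)$ http://www1.domain.org/~$1 [R=302,L]
```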
I'm looking to make a high-availability setup, and wondering how many of you have done so. We are looking to multi-home the site with a round-robin setup, using multiple VPSs/dedicated servers in geographically different locations.
Right now I'm still looking at a "stale" DNS setup, with no automatic management of servers that are down. Is there a service/software that already offers automatic zone changes, removing servers which are down and adding them back when they come back online?
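As a sketch of the health-check half of that (the zone-update half depends on your DNS server's API), here is a small Python poller that probes each server's port 80 and returns the A records that should stay in the zone. The hostnames and IPs are placeholders:

```python
import socket

def is_up(ip, port=80, timeout=3):
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_records(records, probe=is_up):
    """Filter (name, ip) A records down to the servers that answer.

    `probe` is injectable so the check can be swapped out or tested
    without real network traffic.
    """
    return [(name, ip) for name, ip in records if probe(ip)]
```

You would run this from cron every minute or so and push the result to your DNS server; keep the TTL on those records low (60s or so) or the failover will be invisible to cached resolvers.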
I am trying to do a failover solution with round-robin DNS. Our DNS is served by Windows and our web servers run Linux.
I know round-robin does not do failover by default; however, my understanding is that a script can be used to remove the failed server from DNS. Is anyone aware of something that will do this for Windows?
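I'm not aware of a stock tool, but Windows DNS can be scripted with dnscmd.exe. A minimal Python sketch that builds the add/remove commands for a zone (zone name and IPs are placeholders; on the DNS server you'd hand each list to subprocess.run):

```python
def dns_failover_cmds(zone, node, all_ips, healthy_ips):
    """Build dnscmd.exe command lines that make the zone match the healthy set.

    Removes A records for dead IPs and (re-)adds records for live ones.
    Run the results on the Windows DNS server, e.g. subprocess.run(cmd).
    """
    cmds = []
    for ip in all_ips:
        if ip in healthy_ips:
            cmds.append(["dnscmd", "/RecordAdd", zone, node, "A", ip])
        else:
            cmds.append(["dnscmd", "/RecordDelete", zone, node, "A", ip, "/f"])
    return cmds
```

Pair it with a health check like the one above (or a simple HTTP probe from the Linux side) and schedule it with Task Scheduler on the DNS box.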
Please give me the difference. With colo in a carrier hotel, we can choose our preferred network provider, but should we do that if we cannot have our own tech in the datacenter? What about the supporting services from the carrier hotel? This is just a general question, since I'm not asking about a specific facility.
And would the second be more expensive, assuming the same number of racks and the same amount of bandwidth? Who provides the IP addresses then?
I have already transferred a site from another server to my server.
This website uses PHP5, and they asked me to enable register_globals, but I don't know how to activate register_globals when PHP5 is set up as CGI. This value is not accepted when PHP5 runs as CGI: php_flag register_globals 1. I think the previous server used suPHP (I found some files like php.ini via FTP).
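When PHP runs as CGI/suPHP, php_flag directives in .htaccess are ignored; the usual route is a per-directory php.ini instead. A minimal sketch (the exact location PHP reads it from depends on the CGI/suPHP configuration, but the site's document root is the common case):

```ini
; php.ini in the site's document root, read by the PHP CGI binary
register_globals = On
```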
Looking for a quick, easy global load-balancing solution. This is actually for a temporary situation (we need to move to a new DC and want to make it as seamless as possible). A Linux solution is preferred if possible. What can we use to get this achieved?
How exactly does it work? Does it need a VPN between locations, or is the client redirected to a different IP somehow?
We would consider dedicated hardware solutions provided that we can get two units for under $2,000 total (eBay, I guess).
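On the "how does it work" question: no VPN is needed; GSLB is typically just DNS that answers with a different IP per client, based on the resolver's location and server health. A toy Python sketch of that selection step (the regions and addresses are made up for illustration):

```python
# Toy GSLB answer selection: hand back the IP of the nearest healthy
# datacenter. Regions and addresses below are illustrative only.
DATACENTERS = {
    "us": "192.0.2.10",
    "eu": "198.51.100.10",
}

def pick_ip(client_region, healthy, datacenters=DATACENTERS, default="us"):
    """Prefer the client's own region; fall back to any healthy site."""
    ip = datacenters.get(client_region, datacenters[default])
    if ip in healthy:
        return ip
    # Nearest site is down: fail over to any healthy datacenter.
    for other in datacenters.values():
        if other in healthy:
            return other
    return None  # nothing healthy; nothing sensible to answer
```

Real GSLB appliances and geo-aware DNS servers do essentially this at the authoritative nameserver, with low TTLs so failover takes effect quickly.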
We currently take transit from Level3 and Tiscali, in addition to peering at LINX in the UK. We're reaching capacity on our 100Mbps connection to Level3, which we take through a reseller. I plan to keep our Tiscali transit, as we receive great routes to Europe.
I have received quotes for increasing our Level3 connection to 1Gbps with a 100Mbps CDR, and also for switching to Global Crossing directly, which I think are fairly competitive at ~£12 per Mbps.
Does anyone have direct experience with either of these two providers in the UK, and can you recommend who has the best support/routes etc.? Additionally, I see a number of other UK providers are using Telia and NTT. Having had no experience with Telia or NTT, I am unsure if they are in the same league as Global Crossing and Level3. Also, are there any other Tier 1s we should be looking at?
Is anyone here running GFS? Responsibility for managing a small GFS cluster is about to fall into my lap, and the only documentation I can find is on Wikipedia, which is troubling. I've got the man pages, but I was hoping for more of a document outlining how it works.
Why would lock_dlm2 or gfs_scand take up close to 100% CPU with minimal traffic on the machine, for example? What do those do? How can I tune it to not do that?
I'm not so much looking for specific tuning answers here; I'm more curious about where I should be looking for documentation. I find it hard to believe that there is none.
I'm doing a bit of research into the market of Global Server Load Balancing and I'm wondering if anyone knows of any web hosting companies that offer this service. I'm looking for companies large and small that have this service.
I've recently upgraded from shared hosting to a VPS. I'm currently getting my new VPS set up before migrating my site over. On my shared server, both the global and local safe_mode directives were reported as off by phpinfo(). On my new server, the global is reported as off, but local is reported as on.
On my old server, the PHP was version 4.4.9 running as a CGI. On my new server, PHP 5.1.6 is running as an Apache 2.0 Handler.
I have already set safe_mode to off in my global php.ini file (hence why global is reported as off). However, I have no local php.ini files, .htaccess files, or PHP directive settings in place, so I cannot figure out why local is set to on!
I've tried editing httpd.conf to include "php_admin_flag safe_mode Off", though I'm not certain I put it in the right place. There is only one website on this server.
With the CGI PHP on my old server, I was able to create a local php.ini file to override global directives, but that seems to have no effect with the Apache handler on my new server.
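Under mod_php (the Apache handler), per-directory php.ini files are indeed ignored; a local value has to come from the Apache configuration itself. A sketch of where the php_admin_flag line would go, with placeholder vhost names (restart Apache after editing):

```apache
# httpd.conf -- inside the site's <VirtualHost> block (names are placeholders)
<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/html
    # mod_php reads this instead of any local php.ini
    php_admin_flag safe_mode Off
</VirtualHost>
```

If the line currently sits in the global server config but outside the vhost, the vhost's own defaults can still win, which would explain the local "on" value.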
Thought this might be of interest to folks on WHT. We put together a solution using Nginx (Engine-X) to do Global Server Load Balancing. This solution lets you do GSLB without having to fork over $26k per site to F5 or Foundry.
Thought it would be of interest to both end users and dedicated hosting providers who might want to turn it into a service (e.g. sell a dedicated host in Europe and the US as a group, with the solution pre-installed).
The entire project, including relevant configs, is available for download in the latest (issue 6) FREE issue of o3 magazine (o3magazine.com).