I was a web host a while ago, leasing dedicated servers, and eventually went to work for the datacenter where I had my colo. For a while now I've been working with a neat group of 5-6 other folks programming a new uptime monitor / geo-dispersed server load-testing system. We were looking for possible partners to keep hosting costs down during the alpha stage of the project, but while we were drawing up the papers we saw too much opportunity for a conflict of interest to arise and realized we couldn't realistically associate ourselves with any single company to that degree. So after a little work and fundraising, we're finally in a position to either lease some servers or colo.
Since I've been out of the loop for a while, I just want to know who the major/reliable players are when it comes to leasing or colo machines in multiple areas (ideally East, Middle, West, Canada, and Europe/Asia). We would prefer to be with one company for ease of billing and have our network of monitoring stations spread out geographically. But we don't want all of our eggs in one basket, so if a provider goes belly-up or decides to hike our rates 30-40% with little notice, we won't have too much to worry about.
We're watching what we spend during the alpha stage very closely, but I've been insisting we can strike the right balance between cost and reliability (connectivity).
I have a client who asked me to educate myself about web hosting and make a recommendation about where he should be. He currently has shared hosting at Network Solutions and finds the unexplained slowdowns and disk-corruption reports in his forum's DB unacceptable.
I'm glad I found this site: lots of good info, but nothing like throwing up some stats and seeing what people recommend. The client told me he wanted to move to a dedicated server, but I'm thinking a VPS might do the trick. Especially if upgraded with dedicated cores as well as RAM, such as WiredTree is offering.
Looking for a managed, Unix-based server that in a typical month serves: 100k unique visitors, 230k page views, 500 GB of downloads.
But it needs to be easily upgradeable to handle his expected traffic levels in the next year, on the order of (monthly): 250k unique visitors, 600k page views, 1.1 TB of throughput. As far as features:
*Currently they use about 15 GB of disk space. Some of that is inefficient disk management, but the bulk is them supporting previous software releases.
*needs to be fully managed
*US datacenter with all the features you guys would expect as far as backbone access, security, power backups, etc.
*Backups by the provider. Let's say 5 GB worth, since the old software versions don't really need to be backed up. (I'll recommend he keep his own backups as well.)
*Either Plesk or cPanel
*A 15-minute hardware SLA is what the client is asking for, but I'd like to present some comparisons to 1-hour-SLA companies to see how much he'd save.
And finally, I tried to search for the answer to this, but the keywords kept bringing up lots of hits without good info. The client sells software, so the bandwidth needed is pretty consistent until they release a new version. Then it skyrockets to the point that they may have 1,500 people trying to download a 50 MB file simultaneously. What is the right way to handle that: use a CDN, or negotiate with the hosting provider for burstable bandwidth as needed? As a side note, while looking at many offerings I was most surprised that bandwidth seems to be sold in large chunks, with overage costs hidden.
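To make the CDN-vs-burstable question concrete, some back-of-envelope math on that spike helps (the 1 Mbit/s per-client download rate below is an assumption for illustration, not a measurement):

```shell
# Rough capacity numbers for a release-day spike: 1500 simultaneous
# downloads of a 50 MB file. The per-client rate is an assumed figure.
concurrent=1500
file_mb=50
per_client_mbit=1

# Peak egress if everyone downloads at once, in Gbit/s
peak_gbit=$(awk -v n="$concurrent" -v r="$per_client_mbit" \
    'BEGIN { printf "%.1f", n * r / 1000 }')

# Total data moved per complete wave of downloads, in GB
wave_gb=$(awk -v n="$concurrent" -v f="$file_mb" \
    'BEGIN { printf "%.0f", n * f / 1000 }')

echo "peak ~${peak_gbit} Gbit/s, ~${wave_gb} GB per wave"
```

A burst of roughly 1.5 Gbit/s is far beyond a typical dedicated server's 100 Mbit port but routine for a CDN, which is why flash-crowd download patterns usually argue for a CDN over negotiated burst bandwidth.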
I use ZoneEdit to point my domain to the server, and a few times their servers haven't responded for several minutes, which made my site inaccessible. I was wondering if there is a better way of doing this? Please give me suggestions on how to set up proper DNS.
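One concrete check for DNS resilience is whether the zone's nameservers sit on more than one network; here is a sketch (the `distinct_nets` helper and the IPs are made up for illustration):

```shell
# Count distinct /24 networks among a list of nameserver IPv4 addresses.
# If the answer is 1, a single network outage can take the whole zone
# offline; a secondary DNS provider on a different network fixes that.
distinct_nets() {
    cut -d. -f1-3 | sort -u | wc -l | tr -d ' '
}

# Canned example with two hypothetical nameserver IPs:
printf '198.51.100.10\n203.0.113.20\n' | distinct_nets   # -> 2
```

Against a live zone you would feed it real data, e.g. `dig +short NS example.com | xargs -n1 dig +short | distinct_nets`.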
Also, please let me know if upgrading to the new server that I'll mention below will probably solve my problems. Whatever help you can provide would be greatly appreciated. Below are the details:
In the GMT evenings and nights my current server gets so loaded that every page load takes 10-30 seconds. Even pure HTML pages are slow to load. It seems that after a certain threshold it suddenly becomes that much slower; there's not much middle ground. I have high MaxClients and ServerLimit values now, and the error log no longer says they are exceeded, but that didn't help enough.
I have a high-traffic website running the latest version of Apache (2.2.x) with the prefork MPM, and Apache is optimized, with PHP 5.2.5 and APC 3.0.15.
I get 160,000-210,000 page loads per day and 32,000-45,000 visits per day.
Most of its pages are PHP but shouldn't be too CPU- or database-intensive. MySQL isn't used; I mostly use shared memory (PHP's shm functions) for data storage. Two semaphores are quite heavily used, but that can't explain how a few more users make the server serve pages so much slower.
Swap usage is practically 0, and CPU user % is like 1-2%, with CPU system % about the same even during peak times. However, the load average that "top" reports is 6-9.
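A load average of 6-9 with a near-idle CPU usually means processes are waiting rather than computing, either on disk I/O or on locks such as those semaphores. Processes in the 'D' (uninterruptible sleep) state count toward the load average without burning CPU; a small filter makes them easy to spot (the `in_dstate` helper is just an illustration):

```shell
# Print the names of processes in uninterruptible sleep ('D' state),
# given "state pid comm" lines such as ps produces. These processes
# inflate the load average while using no CPU.
in_dstate() {
    awk '$1 ~ /^D/ { print $3 }'
}

# Canned example; live usage would be:  ps -eo state,pid,comm | in_dstate
printf 'S 100 bash\nD 200 httpd\nR 300 top\n' | in_dstate   # -> httpd
```

It's also worth watching the 'b' and 'wa' columns of `vmstat 1` and checking `ipcs -s` for those two semaphores: prefork children serializing on a lock would produce exactly this sudden-cliff behaviour.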
My current server specs: 1 GB RAM, Pentium D 3 GHz, CentOS 5 32-bit, fully updated.
I load all pictures and even the stylesheet from a secondary server by using href="$secondaryserverIP..." in the html code, so the main server practically just serves the pages.
My new server will run Apache with the worker MPM and the latest versions of everything. Its specs: 2 GB of RAM, Intel dual-core Xeon 2.40 GHz, CentOS 5.1 32-bit, fully updated.
I have a sophisticated netstat-based DDoS script that is an improved version of DDoS Deflate, and while some of these slowdowns seem to have been caused by attacks it was then able to defend me from, most were not. I'm even protected from users who constantly hold 7+ connections to my site; if someone has a way-too-high number of connections, the script doesn't even check whether they hold it constantly and just bans that user outright. It probably bans a bunch of innocent proxy users too, but that's a small price to pay.
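For anyone who hasn't seen DDoS Deflate, the heart of such a script is just a per-IP connection count compared against a threshold; here is a minimal sketch (the function name, IPs, and limit are all invented for illustration):

```shell
# Count connections per remote IP and print any IP over the given limit.
# Live input would come from:  netstat -ntu | awk '{print $5}' | cut -d: -f1
over_limit() {
    sort | uniq -c | awk -v max="$1" '$1 > max { print $2 }'
}

# Canned example: 1.2.3.4 holds 3 connections, with a limit of 2
printf '1.2.3.4\n1.2.3.4\n5.6.7.8\n1.2.3.4\n' | over_limit 2   # -> 1.2.3.4
```

The usual next step is a firewall drop per flagged IP, which is exactly where the innocent-proxy problem comes from: many users can share one proxy address, and the counter can't tell them apart.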
Thought this might be of interest to folks on WHT. We put together a solution using Nginx (Engine-X) to do Global Server Load Balancing (GSLB). This solution lets you do GSLB without having to fork over $26k per site to F5 or Foundry.
Thought it would be of interest to both end users and dedicated hosting providers who might want to turn it into a service (e.g. sell a dedicated host in Europe and one in the US as a group, with the solution pre-installed).
The entire project, including relevant configs, is available for download in the latest (issue 6) FREE issue of o3 magazine (o3magazine.com).
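I haven't copied the article's configs here, but to give a flavour of the approach: one building block of nginx-based GSLB is an edge proxy in each region that serves locally and fails over to the other region. A minimal sketch (the addresses are placeholders, not taken from the article):

```nginx
# Hypothetical two-region failover: the local pool is preferred, the
# remote region's pool is used only when the local one is down.
upstream app {
    server 10.0.1.10:80;            # local pool (placeholder)
    server 192.0.2.10:80 backup;    # remote region (placeholder)
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
        proxy_next_upstream error timeout;
    }
}
```

A complete GSLB setup also needs a way to steer users to the nearest site in the first place (typically DNS-based); the fragment above only illustrates the failover half.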
What is the max number of hits a quad-core server with a RAID disk system can handle, running Linux with a separate MySQL server?
The host says there are no restrictions on the bandwidth, but it is strange that we always have a MAX of 300 users online (24/7/365). Now I wonder if that's just the way it is, or if some users might be denied access from time to time when they try to reach some of the websites hosted on the server.
Maybe you know of a monitoring service or something that can tell whether this is an issue.
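A plateau at exactly 300 users around the clock smells like a configured ceiling rather than bandwidth. Before paying for a monitoring service, it's worth grepping the server config; a sketch (the `limits` helper and the canned config lines are invented for illustration):

```shell
# Pull concurrency ceilings out of an Apache-style config. If MaxClients
# is about 300, the plateau is the server's config, not user demand.
limits() {
    awk '/^(MaxClients|ServerLimit)[ \t]/ { print $1 "=" $2 }'
}

# Canned example; live usage would be:  limits < /etc/httpd/conf/httpd.conf
printf 'MaxClients 300\nServerLimit 300\nKeepAlive On\n' | limits
```

MySQL's `max_connections` (default 100-151 depending on version) can impose a similar cap on the separate DB server, so it's worth checking both.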
We are setting up hundreds of new domain names every day. We are only hosting a simple blog; in fact, with our current setup we are on a HostGator reseller account and are using just one account, meaning one single cPanel account.
We have a script that runs automatically when people sign up for our service; it sets up an addon domain on that same single cPanel account with the same document root. Our modified WordPress blog simply looks at HTTP_HOST in the config file and opens a separate set of database tables for every new domain name.
The problem we are running into is not bandwidth usage or storage space, but simply the mass of addon domains. The cPanel adddomain.html script is being run so many times that it is overloading the web server.
So I have read about some other people here on WHT who are starting to use new server software that uses a lot fewer resources than WHM and cPanel. I am wondering which hosting companies can provide that sort of server.
Storage: a couple hundred GB. Bandwidth: thousands of GB. Server software to run PHP scripts and MySQL databases. The ability to create thousands of addon domains every day.
The past shared hosting deals we've been trying out suck. We are a community forum that just started, but we know we will grow a big database in the future. We want to prepare now and have no problems. It's a LOCAL radio-control (R/C) forum.
What VPS host would you guys recommend that handles forums well? So far, I've narrowed it down to WiredTree.com.
Since I have never worked on the server end of things, I have a quick question for all you web hosting gurus.
Is it possible to have PHP installed on ONE single server and still have the ability for the server to work with both MS Access AND MySQL at the same time?
I would think YES, but I am being told by our server branch at my current job that this is not the case. They claim there is no way for the server on one machine to be able to handle both types of databases. Are they right?
If they are wrong and it is possible for one server to run both types of databases, what steps would be necessary to set the server up to handle both? Do we need to tweak the php.ini file, or is there another method of allowing the server to work with both MySQL and MS Access?
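For what it's worth, PHP loads database support as independent extensions, so MySQL and ODBC (the usual route to an Access .mdb file) can coexist on one server. On a Windows build of PHP 5.x the php.ini entries look roughly like this (exact DLL names vary by PHP version, so treat these as examples rather than gospel):

```ini
; Both extensions can be enabled side by side; neither excludes the other.
extension=php_mysql.dll   ; MySQL functions (mysql_connect, mysql_query, ...)
extension=php_odbc.dll    ; ODBC functions (odbc_connect, ...), for MS Access
```

After that, an ODBC DSN pointing at the .mdb file is configured in Windows' ODBC Data Source Administrator, and a single script can hold a MySQL connection and an ODBC connection at the same time.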
Sorry if this question seems stupid or odd, as I said, I have minimal experience on the server end but I am confident that a server can handle both.
Here is what I saw when I installed kernel-2.6.20-1.2948.fc6.src.rpm:
rpm -ivh kernel-2.6.20-1.2948.fc6.src.rpm
warning: user brewbuilder does not exist - using root
warning: group brewbuilder does not exist - using root
warning: user brewbuilder does not exist - using root
1:kernel   ########################################### [100%]
warning: user brewbuilder does not exist - using root
warning: group brewbuilder does not exist - using root
Then when I ran: rpmbuild -bp --target=$(uname -m) /usr/src/redhat/SPECS/kernel-2.6.spec
I saw this error:
+ Arch=x86_64
+ make ARCH=x86_64 nonint_oldconfig
In file included from /usr/include/sys/socket.h:35,
                 from /usr/include/netinet/in.h:24,
                 from /usr/include/arpa/inet.h:23,
                 from scripts/basic/fixdep.c:117:
/usr/include/bits/socket.h:310:24: error: asm/socket.h: No such file or directory
make: *** [scripts/basic/fixdep] Error 1
make: *** [scripts_basic] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.93770 (%prep)
I need to have this installed to get an app installed, etc. Suggestions or ideas? Thanks.
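The `asm/socket.h: No such file or directory` error means the compile in %prep can't find the kernel's userspace headers under /usr/include/asm, which on Fedora come from the glibc/kernel header packages. A first check, as a sketch (`check_hdr` is just an illustration):

```shell
# Does the header the %prep build stage needs actually exist?
check_hdr() {
    [ -e "$1" ] && echo present || echo missing
}

check_hdr /usr/include/asm/socket.h
# If that reports "missing", reinstalling the header packages usually
# restores it, e.g.:  yum install glibc-headers kernel-headers glibc-devel
```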