I need some advice on my situation at my host, and possibly some frame of reference as to what can/should be expected from a VPS setup like mine and what I can expect it to manage.
I have a site that sees about 150k pageviews per day. On any given day there is a peak period of roughly 4 hours during which traffic reaches about 5 req/s.
I use a standard LAMP setup running mod_php in Apache, not FastCGI. I have a VPS on Virtuozzo's Power Panel with 1.5 GB RAM and an unknown amount of CPU. I haven't been able to ascertain that information, but probably could if I asked my host.
The problem is that during these hours it gets a bit slow from time to time. Running top sometimes shows a staggering number of waiting processes, i.e. the load is quite high (15-25).
So I'm now at a fork in the road. I could move to a different setup, say Nginx + PHP-FPM (FastCGI), and see if that makes a difference, though I'm not really an admin and would be somewhat lost there. Or I could start looking into my code to see if I can cache more or do smarter things.
However, before doing either, I'd like to ask this crowd whether you think I've hit the ceiling of what can be expected from a VPS of the size I just described. Is my situation quite normal, and is the real solution simply to upgrade my VPS?
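For what it's worth, before concluding the box is too small, I tried doing the mod_php memory arithmetic myself. A rough sketch; the ~40 MB per Apache child and 500 MB reserved for the OS and MySQL are assumptions, so check the real figure in top's RES column:

```python
def max_apache_children(total_ram_mb, reserved_mb, child_mb):
    """How many Apache+mod_php children fit in RAM before swapping starts."""
    return (total_ram_mb - reserved_mb) // child_mb

# 1.5 GB VPS, ~500 MB reserved for MySQL + OS, ~40 MB per mod_php child:
children = max_apache_children(1536, 500, 40)
print(children)  # 25 children fit

# At 5 req/s, those 25 children must each finish a request in under
# 25/5 = 5 seconds on average, or the pool is exhausted and requests queue.
```

If MaxClients is set far above that number, the load spikes could simply be the box swapping once traffic fills the pool.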
How do you go about discovering where your bottleneck is with an OpenVPN setup?
I've set up OpenVPN on my remote server and configured everything so that my desktop clients (Vista and Ubuntu) at home can connect and have all internet traffic directed successfully through the tunnel.
I'm using it to stream video that would normally be inaccessible outside of the UK (e.g. iPlayer) while I'm in Japan.
The problem is it's often very choppy and unplayable, though it's good when England is sleeping.
I'm new to servers (not to things like programming, though), so I don't know where the problem lies, how to find it out, or even where to start looking. For example, things going through my head:
Is it the limitations of the VPS? How do I find that out? My plan: vps1
Is it the location of the actual server in the UK? How do I know if there are any better options coming from Japan?
Is it my configuration? How do I pinpoint that?
Or is it working as well as it can? How do I know that for sure?
What would be your process of elimination? Quick checks that would tell you which direction to move forward in?
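If it helps, the first elimination step I could imagine is measuring the raw server-to-home throughput with the VPN down, then again through the tunnel, to see whether the tunnel is actually the thing costing bandwidth. A hedged sketch (the URL is a placeholder; it would point at a large file hosted on the VPS):

```python
import time
import urllib.request

def throughput_mbps(nbytes, seconds):
    """Convert a timed transfer into megabits per second."""
    return (nbytes * 8) / (seconds * 1_000_000)

def timed_fetch(url, nbytes=5_000_000):
    """Download up to nbytes from url and report the achieved rate."""
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        data = resp.read(nbytes)
    return throughput_mbps(len(data), time.time() - start)

# Run once with the VPN down and once with it up; if the raw rate from
# the server is already too low for video, the tunnel isn't the problem.
# (the URL below is a placeholder for a large file on your own VPS)
# print(timed_fetch("http://vps.example/100mb.bin"))
```

Streaming video needs a few Mbps sustained, so if the direct fetch can't manage that, the VPS or the Japan-UK path is the bottleneck, not OpenVPN.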
I am working on a busy and popular website which has a large amount of database activity - and requires hourly backups of all database data.
At the moment the site is hosted on two servers - one for the front end web server, one for the database.
Both servers are running a RAID HDD system which allows quick swaps of faulty HDDs without data loss. An hourly full backup of all database tables is running, and it kills the server each time it runs.
The ISP has suggested installing a third server to run as a slave to the existing DB server, and hence always hold a duplicate of the live database.
I have a feeling, however, that this is basically just RAID mirroring on a different machine: to guard against a dodgy SQL statement wiping out ALL copies of the live database, we'd STILL need hourly backups to run, and hence would still see the major speed drop each hour at backup time.
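One wrinkle in my own reasoning, though: a slave is not only redundancy, it is also a place to run the hourly dump, so the live master never feels the backup load at all. A minimal sketch of what I mean, assuming a mysqldump-based backup and with hypothetical host/user names:

```python
import subprocess

def build_dump_cmd(host, user):
    """Command line for dumping all databases from the *slave*, not the
    master. --single-transaction takes a consistent InnoDB snapshot
    without locking tables."""
    return ["mysqldump", "--host", host, "--user", user,
            "--single-transaction", "--all-databases"]

def dump_from_slave(out_path, host="db-slave", user="backup"):
    # host and user are placeholders for illustration
    with open(out_path, "w") as f:
        subprocess.run(build_dump_cmd(host, user), stdout=f, check=True)
```

The hourly dump still guards against the dodgy-SQL scenario, but the speed drop would land on the slave instead of the production pair.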
I am using DreamHost to host 3 of my web sites and 1 blog. DreamHost is great and offers a lot of space and bandwidth.
But I think they are overselling their space; sometimes it gets really slow, and most of my Asian readers say they need to refresh to load the page. (Overselling? OK, I don't really know for sure, but I am wondering if there's a way to check whether they are overselling or not.)
I am thinking about buying a VPS, even though I still have 5 months left with DreamHost.
I found 2 VPS companies that are highly recommended on this forum: JaguarPC and LiquidWeb.
There's already a post comparing both companies in terms of price and service. I'd say I will pick JaguarPC, because its basic plan is just 20 USD and they have a promotion now that makes it even cheaper, while the basic LiquidWeb VPS plan is 60 bucks.
I am wondering why JaguarPC is so cheap. Are they overselling? How can we check if they are overselling?
I found a few posts saying how good JaguarPC is and that they are not overselling, but those members just signed up this month and only have 1-3 posts, so I cannot really trust them.
Can someone share their experience with JaguarPC, and compare JaguarPC performance with LiquidWeb performance? Another question: if I switch from DreamHost to the JaguarPC basic VPS plan, will performance get better?
Last question: the VPS account allows 3 IPs. Does 3 IPs = 3 domains? If not, how many domains can I have?
As my clients' needs expand, they're asking for a chroot SSH/SFTP setup. I'm currently on a dedicated Linux setup but don't really have the time to set up a whole new box with full virtualization or to investigate a full chroot solution (baby on the way), and to be honest it would be less hassle to move to a new provider than to worry about downtime with the sites.
What I'm looking for:
- Linux hosting
- hosting for 30+ accounts, some with several domains
- at least 6 IP addresses for SSL certs
- each account in a full chroot environment (ssh/sftp/ftp) so they can't poke around in each other's files, or each account set up in a virtual machine (ie: OpenVZ)
- maildir
- SpamAssassin
- PHP 5, MySQL, Perl 5.8.8
- suexec Apache would be nice
I have learnt it is harder to set up than I initially expected (since I have just moved from a shared hosting service). I need some help setting up my DNS servers, as I am very confused. Here is most of the info I know:
1) I am running HyperVM
2) I've installed LXAdmin
3) I own the domain (purchased from xeodomains.com) runemart.com
4) My VPS hostname is: vps.runemart.com
5) I know my IP
6) My host has said:
'For VPS customers that have a HyperVM login, you can now host forward DNS on the DNS servers rdns1.vaserv.com (US) and rdns2.vaserv.com (UK)'
And I am unsure what this means/how to do it.
I am not sure if I need some more information to set up my DNS, however I am sure that I can get it if I do.
Now, my questions begin. Firstly, I need to point my domain - runemart.com - somewhere. I believe I need to set up my DNS via HyperVM or LXAdmin so that the nameservers are something like ns1.runemart.com and ns2.runemart.com. Is this correct? Am I able to set up my own actual domain name servers, or will my domain have to point at something like rdns2.vaserv.com?
If anyone can assist me with this I would be very grateful, as I am waiting to get my website running. This is all I will ask for now; I will take it one step at a time =).
We run a very busy web application written in .NET with a SQL Server 2005 backend. The server running SQL for this web app is slammed constantly: the CPU is redlined, and the disks are queuing up because they can't keep up with the demand. What I am wondering is: what do the big websites do to gain performance? What direction should we start moving in to get ahead of the curve? We are using an HP DL580 with 4 x quad-core Xeons and the fastest SAS drives we could get.
Does anyone have experience using LVM2? We'd rely on hardware RAID mirroring for the underlying physical redundancy, but we're very interested in LVM2's storage virtualization features.
If anyone can share their experiences with LVM2 with regard to performance, and possibly its use in a SAN environment, I'd appreciate it.
Let's say I've got a single website built in Drupal (using PHP and MySQL). It gets fewer than 1,000 visits per day and needs very little storage or bandwidth. The site is currently on a shared host and it runs okay, but often has very slow page loads due to sluggish MySQL calls and other traffic on the server. Sometimes the homepage loads in 2 s, but other times it takes 20-30 s depending on the time of day. The client is sick of this performance on such a low-traffic site and wants to improve the situation.
Question: Will a VPS really provide that much better performance than a shared host?
Remember I'm talking ONLY about page load time under minimal load. No need to take into account scaling or Digg/Slashdot effects.
I know dedicated is the best option, but it seems crazy for such a low-traffic site. A lot of the VPS offers are very attractive in theory (managed and affordable), but in practice I'm concerned that even a 512 MB VPS with 1 GB burst won't make much of a performance difference.
Mainly I don't want to go through the hassle and extra monthly expense of moving everything to a VPS for only a minimal gain.
We shifted one website based on Article Dashboard (an article directory script coded in Zend) from a shared hosting account with HostGator to a GoDaddy VPS ($35 per month).
This VPS is really slow compared to the HostGator account.
I'm planning on buying a NAS from my provider to use as a backend for my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved without interruption between different nodes.
The server I have looked at so far is the following:
The budget is pretty tight, so if it's possible to do this with SATA drives it would be great; otherwise it could be a possibility to go down in disk space and switch the SATA drives to SCSI/SAS drives.
We are getting into VPS hosting and wanted to get some opinions and feedback, as we're quite unsure what to expect in terms of performance and how many clients we can generally keep on a box.
For now we've bought three Dell R710s with dual Xeon L5520s, 72 GB RAM and 8 x 2.5" SAS drives.
We are thinking of a base offering of 512 MB of RAM and were hoping to get about 40-50 onto a server.
With 40 there should be -plenty- of free RAM and plenty of drive cache.
Then a next offering of 1 GB RAM, and the next one of 2 GB.
Even if we do the biggest 2 GB offering with 25 on a server, we should have free RAM to spare.
The software would be virtuozzo.
Any thoughts on this, am I expecting too much, or am I being fairly realistic?
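My own back-of-the-envelope numbers, purely as a sanity check (the 4 GB reserved for the host node itself is an assumption, and this ignores Virtuozzo's memory sharing and any overcommit):

```python
def containers_per_node(host_ram_gb, plan_mb, reserved_gb=4):
    """How many containers of a given plan fit in host RAM with no
    overcommit, after reserving some RAM for the host node itself."""
    return int((host_ram_gb - reserved_gb) * 1024 // plan_mb)

print(containers_per_node(72, 512))   # 136 of the 512 MB plan
print(containers_per_node(72, 2048))  # 34 of the 2 GB plan
```

So 40-50 of the base plan, or 25 of the 2 GB plan, would use well under half of the fully-committed RAM, which is why I'm hoping the numbers are realistic.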
I have been working with Xen over the last week or so and I can't figure out why performance degrades so much when booting into Xen. Certain things seem just as fast, but other things just don't seem normal.
I have tried this on two different quad-core systems, one new generation (York) with CentOS 5 and one old (Kent) with Debian Lenny, but neither seems to produce good speeds.
For example, when I use the default kernels I can usually get a score of about ~600 out of unixbench-wht, and things such as top and core system processes show up as 0% CPU when running top.
When I boot into the Xen kernel, however, whether from Dom0 or the guest OS, top uses about 3% CPU and unixbench-wht produces scores under 250.
I have set vcpus to 4 and have even tried vcpu-pin 0 0, 1 1, 2 2, 3 3, but nothing seems to change anything. The disk speeds seem about the same (hdparm). I'm assuming it is something with the CPU.
Let's assume that we (me and the people I'm working with) were to launch a really popular website. Then all of a sudden there is more demand for the website than the backend infrastructure can handle.
What do we do?
- 1,000 users (OK, so one powerful server should be enough).
- 2,000 users (let's set up an additional server to act as the HTTP server while the powerful server acts as the database only).
- 3,000 users (let's eliminate all the commercial Linux programs and install a fresh version of Linux on both boxes, compiling only the programs we need).
- 5,000 (let's set up another server that handles the sessions).
- 6,000 (let's set up a static-only server to deliver the non-dynamic content).
- 7,000 (let's do some caching... ugh, maybe it won't be enough).
Any greater and then what? We've run out of ideas on how to separate the code logic and how to optimize every byte of data on the website! What do we do? We can buy more servers, but how do we balance the load?
This is where I'm stuck. In the past I've separated the load in a modular sense (one server does this and one server does that), but eventually I'll come up against a wall.
How does clustering work? What I want to know is how the information, whether it be the server-side code or the static content, is shared across machines. Is it worth it anymore to learn these things, or is it better just to host with a scalable hosting solution like AWS?
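To make my question concrete, my current mental model of balancing is something like this sticky-hash sketch (the server names are hypothetical); is this roughly what real load balancers do under the hood?

```python
import hashlib

SERVERS = ["web1", "web2", "web3"]  # hypothetical backend pool

def pick_server(session_id, servers=SERVERS):
    """Sticky balancing: hash the session id so the same user always
    lands on the same backend, keeping their session data local."""
    h = int(hashlib.md5(session_id.encode()).hexdigest(), 16)
    return servers[h % len(servers)]
```

The part I don't get is what happens when a backend dies or when the code and static files on all three machines need to stay in sync.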
I have several VPS's that I run. Some run LAMP, others RoR, and my latest runs with Nginx + Cherrypy (python).
To be honest, I've never run any benchmarks to see how well the servers performed under stress. But I'd like to start.
Are there any good (free) programs out there that will stress test my web servers? I develop on Windows but deploy on Linux, so either platform is OK. I'm most interested in how many concurrent connections can be maintained.
I currently have a VPS in the UK that I host my clients' Joomla sites from; the specs of this VPS server are as below:
- 20 GB SA-SCSI disk space
- 350 GB bandwidth
- Full root access / Parallels/WHM/cPanel
- 2 dedicated IPs
- 384 MB SLM RAM
I am now running around 10 Joomla-based sites off this VPS, 5-6 of which are ecommerce sites. I am probably only using 10 GB of the overall disk space so far, but in terms of performance, should I continue to add clients to this server, or should I keep the more hungry sites here and move some of the less resource-intensive non-ecommerce sites to another VPS? Or would it be in my best interest to upgrade to a dedicated server where I will have all my own resources?
Would I be roughly right in assuming that an American customer accessing a UK server will see similar speeds to what I, as a UK customer, would get accessing the same site on a US server?
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card with an existing 6-disk RAID 5 array, and you then add, say, 18 more HDDs and make them another RAID 5 array? Does the performance of the first array stay the same or decrease?
The reason you would have different RAID arrays is that if you were to buy an 8U case, you would want it as an investment: it avoids buying smaller cases and spending money on a new motherboard/CPU/RAM for each system, since you can add hard drives whenever you can and RAID them.
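For reference, my understanding of the capacity side of the two-array plan (the 1 TB drive size below is assumed just for the arithmetic): each RAID 5 array gives up one disk's worth of space to parity, and the two arrays are independent volumes, so I'd expect the existing array to keep its performance unless the controller itself becomes the bottleneck.

```python
def raid5_usable_gb(disks, disk_gb):
    """RAID 5 spreads one disk's worth of parity across the array,
    so usable capacity is (n - 1) * disk size."""
    return (disks - 1) * disk_gb

print(raid5_usable_gb(6, 1000))   # 5000 GB from the existing 6-disk array
print(raid5_usable_gb(18, 1000))  # 17000 GB from the new 18-disk array
```

Whether the controller can actually feed 24 drives at full speed is the part I can't answer from arithmetic alone.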
I am currently working with an internet radio station; it is listed in iTunes and we are pushing about 90 Mbps from an Ecatel server during the day. We are expanding, looking to pick up more capacity, and considering geolocation for generating playlists so listeners would get the closest relay to them. Staminus has excellent pricing on unmetered connections, so we were looking into them as a US provider.
I have searched the forum and haven't found many reviews of their unmetered connections; most are about the DDoS protection. Does anyone have any recent experience with the unmetered connections they have been offering at great prices?
My server load is normally at most 1 and won't cross that, but for the last 2 days I have been getting 20 or more. The load spike lasts only 1 or 2 minutes, after which it settles back to normal, around 0.58 to 1. In top I can see lots of Apache processes when the load increases.
Does anyone have any experience running Juniper SSG-550 firewalls in a high-traffic hosting environment?
I run network operations for a hosting provider in Australia. We currently have two J4350s running as border routers, and we are looking at putting two Juniper SSG-550s behind the border routers to do stateful firewalling / NAT.
We'll be using active/active NSRP on the SSGs for load balancing and failover.
My concern is that these devices may not be able to handle our traffic load. They have a hard-set limit of 256,000 concurrent sessions, which may not be enough for us at peak times. Almost all of our traffic is HTTP though, so I would imagine sessions would time out quite quickly?
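My rough sizing attempt, using Little's law; both numbers below are assumptions about our peak traffic, not measurements, so I'd welcome corrections:

```python
def steady_state_sessions(new_per_sec, avg_lifetime_sec):
    """Little's law: concurrent sessions = arrival rate x average
    session lifetime (connection duration plus firewall timeout)."""
    return new_per_sec * avg_lifetime_sec

# e.g. 2,000 new HTTP connections/sec, each tracked for ~30 s
# (short transfer plus the firewall's TCP session timeout):
print(steady_state_sessions(2000, 30))  # 60000, well under the 256,000 cap
```

If those assumptions are in the right ballpark we'd have headroom, but a longer session timeout or a traffic spike changes the answer quickly.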
I've tried asking on the xen-users mailing list, but haven't received much response. So, i'm asking here.
I'm running Xen 3.1 with CentOS 5 64-bit on a Dell 2950 with 2 x 2.33 GHz quad-core CPUs. This should be, and is, a very powerful system. However, when running Xen the performance drop is huge. The strange thing is, on the mailing list others were reporting much lower levels of performance loss. (Just to be clear, I'm using the XenSource compiled kernel, etc.)
Without Xen running, my UnixBench results aren't too bad.
Code:
                                          BASELINE      RESULT    INDEX
Dhrystone 2 using register variables      376783.7  52116444.7   1383.2
Double-Precision Whetstone                    83.1      2612.0    314.3
Execl Throughput                             188.3     11429.1    607.0
File Copy 1024 bufsize 2000 maxblocks       2672.0    155443.0    581.7
File Copy 256 bufsize 500 maxblocks         1077.0     37493.0    348.1
File Read 4096 bufsize 8000 maxblocks      15382.0   1475439.0    959.2
Pipe-based Context Switching               15448.6    548465.7    355.0
Pipe Throughput                           111814.6   3313637.0    296.4
Process Creation                             569.3     34050.6    598.1
Shell Scripts (8 concurrent)                  44.8      3566.8    796.2
System Call Overhead                      114433.5   2756155.3    240.9
                                                     =========
FINAL SCORE                                                      510.9

However, once I boot into Xen, the Dom0 performance drops a lot.
Code:
                                          BASELINE      RESULT    INDEX
Dhrystone 2 using register variables      376783.7  50864253.7   1350.0
Double-Precision Whetstone                    83.1      2617.9    315.0
Execl Throughput                             188.3      2786.5    148.0
File Copy 1024 bufsize 2000 maxblocks       2672.0    159749.0    597.9
File Copy 256 bufsize 500 maxblocks         1077.0     44884.0    416.8
File Read 4096 bufsize 8000 maxblocks      15382.0   1191772.0    774.8
Pipe-based Context Switching               15448.6    306121.8    198.2
Pipe Throughput                           111814.6   1417645.2    126.8
Process Creation                             569.3      4699.2     82.5
Shell Scripts (8 concurrent)                  44.8       781.6    174.5
System Call Overhead                      114433.5   1021813.7     89.3
                                                     =========
FINAL SCORE                                                      261.6

Now, here is where it gets weird. The only running DomU, which is CentOS 5 paravirtualized, gets a higher score than Dom0.
I was a victim/winner of Slashdot yesterday. My site, www.electricalengineer.com, runs Joomla, hosted through Rackforce's dds-400l package. We thought we were under attack yesterday, but later found it to be the Slashdot effect. Anyhow, Google Analytics shows ~5,700 visitors. This doesn't seem like it would be enough to slow the server to a halt, but it did. Rackforce suggested that we upgrade to a more powerful package. I'm not sure, though, that the following spec should have slowed us down:
- Dual quad-core Xeon
- 1 GB DDR2 ECC 667 RAM
- 30 GB on SAS/SCSI
- 10 Mbps dedicated unmetered
We have a project in mind and we are planning on using a Cisco 7140 to push about 80 Mbps over Ethernet. Do you think the 7140 will be enough, or will it get maxed out? (The 7140 is supposed to be like the 7200VXR with an NPE-300.)
The routing would be through BGP with partial routes.
I am considering upgrading to MySQL 5.0; I am using 4.1 at present. Now I wonder if it will really improve performance... I really have some busy databases... I also wonder if 5.0 is fully backward compatible with 4.1.