I've tried asking on the xen-users mailing list, but haven't received much of a response, so I'm asking here.
I'm running Xen 3.1 with CentOS 5 64-bit on a Dell 2950 with 2 x 2.33GHz quad-core CPUs. This is, or at least should be, a very powerful system. However, when running Xen the performance drop is huge. The strange thing is, on the mailing list others were reporting much lower levels of performance loss. (Just to be clear, I'm using the XenSource-compiled kernel, etc.)
Without Xen running, my UnixBench results aren't too bad.
Code:
                                          INDEX VALUES
TEST                                       BASELINE      RESULT    INDEX

Dhrystone 2 using register variables       376783.7  52116444.7   1383.2
Double-Precision Whetstone                     83.1      2612.0    314.3
Execl Throughput                              188.3     11429.1    607.0
File Copy 1024 bufsize 2000 maxblocks        2672.0    155443.0    581.7
File Copy 256 bufsize 500 maxblocks          1077.0     37493.0    348.1
File Read 4096 bufsize 8000 maxblocks       15382.0   1475439.0    959.2
Pipe-based Context Switching                15448.6    548465.7    355.0
Pipe Throughput                            111814.6   3313637.0    296.4
Process Creation                              569.3     34050.6    598.1
Shell Scripts (8 concurrent)                   44.8      3566.8    796.2
System Call Overhead                       114433.5   2756155.3    240.9
                                                               =========
FINAL SCORE                                                        510.9
However, once I boot into Xen, the Dom0 performance drops a lot.
Code:
                                          INDEX VALUES
TEST                                       BASELINE      RESULT    INDEX

Dhrystone 2 using register variables       376783.7  50864253.7   1350.0
Double-Precision Whetstone                     83.1      2617.9    315.0
Execl Throughput                              188.3      2786.5    148.0
File Copy 1024 bufsize 2000 maxblocks        2672.0    159749.0    597.9
File Copy 256 bufsize 500 maxblocks          1077.0     44884.0    416.8
File Read 4096 bufsize 8000 maxblocks       15382.0   1191772.0    774.8
Pipe-based Context Switching                15448.6    306121.8    198.2
Pipe Throughput                            111814.6   1417645.2    126.8
Process Creation                              569.3      4699.2     82.5
Shell Scripts (8 concurrent)                   44.8       781.6    174.5
System Call Overhead                       114433.5   1021813.7     89.3
                                                               =========
FINAL SCORE                                                        261.6
Now, here is where it gets weird: the only running DomU, a paravirtualized CentOS 5 guest, gets a higher score than Dom0.
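One thing I still plan to check is the credit scheduler's weighting, since Dom0 competes with the DomU at the default weight. From what I understand, the commands on Xen 3.1 would be something like this (Domain-0 is the name as reported by xm list; 256 is the default weight):
Code:
# show vcpu-to-cpu assignments for all domains
xm vcpu-list
# show the credit scheduler weight/cap for Dom0
xm sched-credit -d Domain-0
# give Dom0 twice the default weight
xm sched-credit -d Domain-0 -w 512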
1) We have 3 web servers, each with IIS and ColdFusion. When updating the site, which setup is better:
a) upload the changed files to all 3 web servers, keeping them in sync
b) move the source files to our storage server, then change the site root on the web servers to point to a network share on the storage server
Main issue: Will the network latency of fetching the source files be a performance problem?
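For what it's worth, if we went with (a), the sync could be scripted so the cost is near zero. A rough sketch with robocopy (the paths and server names are made up, and on Server 2003 robocopy may need the Resource Kit):
Code:
REM mirror the master copy to each web server's site root
robocopy D:\master\site \\web1\siteroot /MIR /R:2 /W:5
robocopy D:\master\site \\web2\siteroot /MIR /R:2 /W:5
robocopy D:\master\site \\web3\siteroot /MIR /R:2 /W:5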
2) We have a storage server that will serve up some audio/video via http. Which setup is better:
a) expose it to the Internet and serve the files directly to the users via its own IIS
b) create a network share and let the 3 web servers serve the files
If you think long and hard about this issue you'll realize there are many pros and cons to each approach. I can't seem to make up my mind.
So, would the load times be noticeably longer if I ran load balancers, had my web servers NFS-mounted to file servers / a SAN, and connected over the network to database servers? It seems like a lot of network overhead to deal with.
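Before committing, I could at least measure it: mount the file server over NFS from one web node and time a read of a typical source tree. Something like this (hostnames and paths are made up):
Code:
mount -t nfs fileserver:/export/webroot /mnt/webroot
# time a cold read of the whole tree; piping to wc avoids
# GNU tar's /dev/null shortcut that skips reading file data
time tar cf - /mnt/webroot | wc -c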
I am using Dreamhost to host 3 of my web sites and 1 blog. Dreamhost is great and offers a lot of space and bandwidth.
But I think they are overselling their space; sometimes it gets really slow. (Overselling? OK, I don't really know, but sometimes it's really slow, and most of my Asian readers said they need to refresh to get the page to load. I am wondering if there's a way to check whether they are overselling or not.)
I am thinking about buying a VPS, even though I still have 5 months left with Dreamhost.
I found 2 VPS companies that are highly recommended on this forum: JaguarPC and LiquidWeb.
There's already a post comparing both companies in terms of price and service. I'd say I will pick JaguarPC, because its basic plan is just 20 USD, and they have a promotion now, so it's even cheaper. The basic LiquidWeb VPS plan is 60 bucks.
I am wondering why JaguarPC is so cheap. Are they overselling? How can we check if they are overselling?
I found a few posts saying how good JaguarPC is and that they are not overselling, but those members just signed up this month and only have 1-3 posts. I can't really trust those new members.
Can someone share their experience with JaguarPC and compare JaguarPC performance with LiquidWeb performance? Another question: if I switch from Dreamhost to the JaguarPC basic VPS plan, will performance get better?
Last question: the VPS account allows 3 IPs. Does 3 IPs = 3 domains? If not, how many domains can I have?
We run a very busy web application written in .NET. The backend is SQL Server 2005. The server running SQL for this web app is slammed constantly: the CPU is redlined, and the disks are queuing up because they can't keep up with demand. What I am wondering is: what do the big websites do to gain performance? What direction should we start moving in to get ahead of the curve? We are using an HP DL580 with 4 x quad-core Xeons and the fastest SAS drives we could get.
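One thing we can do before buying more hardware is find which statements are burning the CPU; SQL Server 2005's DMVs make that fairly straightforward. For example, this runs as-is against the plan cache:
Code:
-- top 10 cached statements by average CPU time (microseconds)
SELECT TOP 10
    qs.total_worker_time / qs.execution_count  AS avg_cpu,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_reads,
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY avg_cpu DESC;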
Does anyone have experience using LVM2? We'd rely on hardware RAID mirroring for the underlying physical redundancy, but we're very interested in LVM2's storage virtualization features.
If anyone can share their experiences with LVM2 with regard to performance, and possibly its use in a SAN environment, I'd appreciate it.
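For context, the kind of usage we have in mind is the standard carve-and-grow workflow on top of the hardware mirror (device names below are just examples):
Code:
pvcreate /dev/sdb               # put the RAID-mirrored disk under LVM
vgcreate vg0 /dev/sdb           # create a volume group on it
lvcreate -L 50G -n data vg0     # carve out a 50 GB logical volume
lvextend -L +10G /dev/vg0/data  # grow it later without repartitioning
lvcreate -s -L 5G -n data-snap /dev/vg0/data  # point-in-time snapshot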
Let's say I've got a single website built in Drupal (using PHP and MySQL). It gets fewer than 1,000 visits per day and needs very little storage or bandwidth. The site is currently on a shared host and it runs okay, but often has very slow page loads due to sluggish MySQL calls and other traffic on the server. Sometimes the homepage loads in 2 seconds, but other times it takes 20-30 seconds depending on the time of day. The client is sick of this performance on such a low-traffic site and wants to improve the situation.
Question: Will a VPS really provide that much better performance than a shared host?
Remember I'm talking ONLY about page load time under minimal load. No need to take into account scaling or Digg/Slashdot effects.
I know dedicated is the best option, but it seems crazy for such a low-traffic site. A lot of the VPS offers are very attractive in theory (managed and affordable), but in practice I'm concerned that even a 512MB VPS with 1GB burst won't make much of a performance difference.
Mainly I don't want to go to the hassle and extra monthly expense of moving everything to a VPS for only a minimal gain.
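Whatever I end up doing, I plan to measure before and after rather than guess. A curl loop from any remote shell should do it (the URL is a placeholder):
Code:
# total time to fetch the homepage, repeated to catch the variance
for i in 1 2 3 4 5; do
  curl -o /dev/null -s -w '%{time_total}s\n' http://www.example.com/
done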
We shifted one website based on Article Dashboard (an article directory script built on the Zend framework) from a shared hosting account with HostGator to a GoDaddy VPS ($35 per month).
This VPS is really slow compared to the HostGator account.
I'm planning on buying a NAS from my provider to use as a backend for my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved without interruption between different nodes.
The server I have looked at so far is the following:
The budget is pretty tight, so if it's possible to do this with SATA drives it would be great; otherwise it could be a possibility to go down in disk space and switch the SATA drives to SCSI/SAS drives.
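Assuming the NAS speaks NFS, I imagine the setup itself would be something like this (addresses and paths are made up):
Code:
# on the NAS, /etc/exports:
/srv/vps-images 10.0.0.0/24(rw,sync,no_root_squash)
# on each node:
mount -t nfs nas:/srv/vps-images /var/lib/vps-images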
We are getting into VPS hosting and wanted to get some opinions and feedback, as we're quite unsure what to expect in terms of performance and how many clients we can generally keep on a box.
For now we've bought 3 Dell R710s with dual Xeon L5520s, 72GB RAM and 8 x 2.5" SAS drives.
We are thinking of a base offering of 512 megabytes of RAM and were hoping to get about 40-50 onto a server.
With 40 there should be plenty of free RAM and plenty of drive cache (40 x 512 MB commits only ~20 GB of the 72 GB).
Then a next offering of 1 gig of RAM, and the next one of 2 gigs.
Even if we only sold the biggest 2-gig offering, with 25 on a server (~50 GB committed) we should still have free RAM to spare.
The software would be Virtuozzo.
Any thoughts on this, am I expecting too much, or am I being fairly realistic?
I have been working with Xen over the last week or so and I can't figure out why performance degrades so much when booting into Xen. Certain things seem just as fast, but other things just don't seem normal.
I have tried this on two different quad-core systems, one newer generation (York) with CentOS 5 and one older (Kent) with Debian Lenny, but neither seems to produce good speeds.
For example, when I use the default kernels I can usually get a score of about ~600 out of unixbench-wht, and processes such as top itself and the core system daemons show up as 0% CPU when running top.
When I boot into the Xen kernel, however, whether from Dom0 or the guest OS, top uses about 3% CPU and unixbench-wht produces scores under 250.
I have set vcpus to 4 and have even tried vcpu-pin 0 0, 1 1, 2 2, 3 3, but nothing seems to change anything. The disk speeds seem about the same (hdparm), so I'm assuming it is something with the CPU.
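Spelled out, the pinning I tried was along these lines (Domain-0 as reported by xm list):
Code:
# pin Dom0's four vcpus to physical cores 0-3
xm vcpu-pin Domain-0 0 0
xm vcpu-pin Domain-0 1 1
xm vcpu-pin Domain-0 2 2
xm vcpu-pin Domain-0 3 3
xm vcpu-list   # verify the CPU column shows the pinning took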
I need some advice on my situation with my host, and possibly some frame of reference as to what can/should be expected from a VPS setup like mine and what I can expect it to handle.
I have a site that sees traffic of about 150k pageviews per day. On any given day it peaks for roughly a 4-hour span, during which there may be about 5 req/s.
I use a standard LAMP setup running mod_php in Apache, not FastCGI. I have a VPS on Virtuozzo's Power Panel that has 1.5 GB RAM and, really, an unknown amount of CPU. I haven't been able to ascertain that information, but probably could if I asked my host.
The problem is that during these hours it gets a bit slow from time to time. Running top sometimes shows a staggering number of waiting processes, i.e. the load is quite high (15-25).
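I suppose one thing I can check from inside the container, since it's Virtuozzo, is the beancounters:
Code:
# nonzero values in the failcnt column mean the container
# has hit that resource's limit
cat /proc/user_beancounters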
So, I'm now really at a fork in the road. I could start looking into a different setup, say Nginx + PHP-FPM (FastCGI), and see if that makes a difference; I'm not really an admin, so I would be kind of lost there. I could also start looking into my code to see if I can cache more or do smarter stuff, etc.
However, before doing any of the above, I'd like to ask this crowd whether you think I've hit the ceiling of what can be expected from a VPS of the size I just described, that my situation is quite normal, and that the real solution is simply to upgrade my VPS. Is it?
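(If I do end up trying the Nginx + PHP-FPM route, my understanding is the minimal config is just a FastCGI pass-through, something like this, with the paths, server name and port as placeholders:)
Code:
server {
    listen 80;
    server_name www.example.com;
    root /var/www/html;

    location / {
        index index.php;
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;   # PHP-FPM listening here
    }
}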
Let's assume that we (me and the people I'm working with) were to launch a really popular website, and all of a sudden there is more demand for the website than the backend infrastructure can handle.
What do we do?
- 1,000 users (OK, so one powerful server should be enough).
- 2,000 users (let's set up an additional server to act as the HTTP server while the powerful server acts as the database only).
- 3,000 users (let's eliminate all the commercial Linux programs and install a fresh version of Linux on both boxes, compiling only the programs we need).
- 5,000 users (let's set up another server that handles the sessions).
- 6,000 users (let's set up a static-only server to deliver the non-dynamic content).
- 7,000 users (let's do some caching... ugh, maybe it won't be enough).
Any greater and then what? We've run out of ideas on how to separate the code logic and how to optimize every byte of data on the website! What do we do? We can buy more servers, but how do we balance the load?
This is where I'm stuck. In the past I've separated the load in a modular sense (one server does this, another does that), but eventually I'll come up against a wall.
How does clustering work? What I want to know is how the information, whether it be the server-side code or the static content, is shared across machines. Is it worth learning these things anymore, or is it better just to host with a scalable hosting solution like AWS?
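(To make the question concrete: is it as simple as an nginx-style upstream block like the sketch below, with made-up backend addresses, or is there more to keeping the machines consistent?)
Code:
upstream app_pool {
    server 10.0.0.11;   # web server 1
    server 10.0.0.12;   # web server 2
}
server {
    listen 80;
    location / {
        proxy_pass http://app_pool;   # round-robin by default
    }
}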
I have several VPSes that I run. Some run LAMP, others RoR, and my latest runs Nginx + CherryPy (Python).
To be honest, I've never run any benchmarks to see how well the servers performed under stress. But I'd like to start.
Are there any good (free) programs out there that will stress test my web servers? I develop on Windows but deploy on Linux, so either platform is OK. I'm most interested in how many concurrent connections can be maintained.
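For the concurrency angle specifically, I gather an ApacheBench-style run is the usual baseline, something like this (the URL is a placeholder):
Code:
# 10,000 requests, 200 concurrent; watch the failed-request count
ab -n 10000 -c 200 http://www.example.com/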
I currently have a VPS in the UK that I host my clients' Joomla sites on; the specs of this VPS are as below:
- 20 GB SA-SCSI disk space
- 350 GB bandwidth
- Full root access / Parallels/WHM/cPanel
- 2 dedicated IPs
- 384 MB SLM RAM
I am now running around 10 Joomla-based sites off this VPS, 5-6 of which are e-commerce sites. While I am probably only using 10 GB of the overall disk space so far, in terms of performance, should I continue to add clients to this server, or should I keep the hungrier sites here and move some of the less resource-intensive non-e-commerce sites to another VPS? Or would it be in my best interest to upgrade to a dedicated server, where I will have all my own resources?
Would I be roughly right in assuming that an American customer accessing a UK server will see speeds similar to what I, as a UK customer, have been getting when accessing the same site on a US server?
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card and already have a 6-drive RAID 5 array, and you then add, say, 18 more HDDs and make them another RAID 5 array? Does the performance stay the same or decrease?
The reason you would have different RAID arrays is that if you were to buy an 8U chassis, you would want it as an investment: it avoids buying smaller cases and spending money on a new motherboard/CPU/RAM for each system, and instead you add hard drives whenever you can and RAID them.
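For concreteness, what I'd be doing is just adding a second independent unit on the same card, roughly like this (tw_cli syntax from memory, so please check the CLI guide for your firmware):
Code:
tw_cli /c0 show                       # list existing units and ports
tw_cli /c0 add type=raid5 disk=6-23   # create a second RAID 5 unit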
I am currently working with an Internet radio station. It is listed in iTunes and we are pushing about 90 Mbps from an Ecatel server during the day. We are expanding, looking to pick up more capacity, and were considering doing geolocation when generating playlists so listeners would get the relay closest to them. Staminus has excellent pricing on unmetered connections, so we were looking into them as a US provider.
I have searched the forum and haven't found many reviews of their unmetered connections; most cover the DDoS protection. Does anyone have any recent experience with the unmetered connections they have been offering at great prices?
My server load is normally 1 at most; it won't cross 1. But for the last 2 days I have been getting 20 or more. This load lasts for only 1 or 2 minutes, after which it returns to normal, hovering around 0.58 to 1. In top I can see lots of Apache processes when the load increases.
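Would a snapshot like this, taken during a spike, be the right thing to capture to identify the cause?
Code:
# top CPU consumers at this instant
ps aux --sort=-%cpu | head -15
# how many Apache workers are alive right now
ps aux | grep -c '[h]ttpd'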
Does anyone have any experience running Juniper SSG-550 firewalls in a high-traffic hosting environment?
I run network operations for a hosting provider in Australia. We currently have two J4350s running as border routers, and we are looking at putting two Juniper SSG-550s behind the border routers to do stateful firewalling / NAT.
We'll be using active/active NSRP on the SSGs for load balancing and failover.
My concern is that these devices may not be able to handle our traffic load. They have a hard limit of 256,000 concurrent sessions, which may not be enough for us at peak times. Almost all of our traffic is HTTP, though, so I would imagine sessions would time out quite quickly?
I was a victim/winner of Slashdot yesterday. My site, www.electricalengineer.com, runs Joomla hosted on Rackforce's dds-400l package. We thought we were under attack yesterday, but later found it to be the Slashdot effect. Anyhow, Google Analytics shows ~5,700 visitors. This doesn't seem like it would be enough to slow the server to a halt, but it did. Rackforce suggested that we upgrade to a more powerful package. I'm not sure, though, that the following should have slowed us down:
- Dual quad-core Xeon
- 1 GB DDR2 ECC 667 RAM
- 30 GB on SAS/SCSI
- 10 Mbps dedicated unmetered
We have a project in mind and we are planning on using a Cisco 7140 to push about 80 Mbps over Ethernet. Do you think the 7140 will be enough, or will it get maxed out? (The 7140 is supposed to be comparable to a 7200VXR with an NPE-300.)
The routing would be through BGP with partial routes.
I am considering upgrading to MySQL 5.0; I am using 4.1 at present. I wonder if it will really improve performance... I really have some busy databases. I also wonder whether 5.0 is fully backward compatible with 4.1.
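If I do go ahead, my plan would be the usual dump-first routine, roughly like this (mysql_upgrade ships with MySQL 5.0.19 and later):
Code:
mysqldump --all-databases --opt > backup-4.1.sql   # full dump first
# ...install the 5.0 binaries, start mysqld, then:
mysql_upgrade -u root -p   # checks and repairs tables for 5.0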
I've got 18 domains parked on it, with only 4 of those having active websites. There are 3 Mailman lists set up, and a further 10 or so email accounts with SpamAssassin active on them.
It also runs cPanel/WHM.
The server itself has 384 MB RAM, with 512 MB burst.
Am I putting too much strain on the VPS with the number of domains/emails I run through it? Is cPanel/WHM the problem? Is the server config suspect? Are VPS accounts only good for 1 or 2 domains?