Performance Of Virtuozzo On Windows?
Jul 22, 2008

Will this be a good product for providing VPS solutions to service providers? How is the performance of Virtuozzo on Windows?
I installed lxadmin on a Virtuozzo VPS today, and it stated that I need to open ports 7777 and 7778. With the Virtuozzo firewall, how would I go about opening these ports?
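For what it's worth, here is a minimal sketch of what opening those ports could look like, assuming the container's firewall is plain iptables underneath rather than panel-managed (if it is panel-managed, the rules belong in the Virtuozzo panel instead); run as root:

Code:
# Hypothetical sketch: open lxadmin's ports from inside the VE,
# assuming the firewall is plain iptables rather than panel-managed.
import subprocess

LXADMIN_PORTS = [7777, 7778]  # the ports lxadmin asked for

for port in LXADMIN_PORTS:
    # Append an ACCEPT rule for inbound TCP on each port.
    subprocess.run(
        ["iptables", "-A", "INPUT", "-p", "tcp",
         "--dport", str(port), "-j", "ACCEPT"],
        check=True,  # raise if iptables exits non-zero
    )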
I have been thinking about setting up a VPS server. This server should run Virtuozzo and cPanel. I was informed by cPanel that the OS needs to be CentOS 4.x; I am assuming I then install Virtuozzo, and then cPanel separately on each VPS. I also have a couple of questions: on an 80GB HD, how big should each VPS be? What should the partitions be, and are they separate for each VPS?
I've got an app that I've built using FastCGI, PHP, MS SQL and it's currently running on a single IIS 6 (Win2K3) server. The MS SQL database is very large and some of the pages return large result sets so I've also installed memcached to cache the data.
Right now the web app and the database are both running on the same server. I'm looking at moving the database to a 2-node cluster, and the web server to its own 2-node cluster for higher availability. But I've got a couple of questions:
1. Would IIS 7 (Windows 2008) be faster than IIS 6 (Windows 2003)?
2. Would Apache (on Windows) be faster than IIS?
3. Can Apache be clustered?
4. Would it make more sense to have a load-balanced pair for the web servers instead of an active-active cluster?
5. How does IIS 7 Kernel Mode caching compare to using memcached?
6. Would there be any decrease in performance if the database is running on another machine instead of the same machine as the web app?
7. Should memcached run on the web servers (which will be storing and retrieving the data), on the database servers (which actually hold the data), or on its own dedicated server?
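I can't answer all seven, but regarding 5 and 7: the usual memcached pattern is cache-aside on the web tier, so where the daemon runs matters less than every web node talking to the same pool. A minimal sketch with the python-memcached client (the server address, key scheme, and TTL are made up for illustration):

Code:
# Minimal cache-aside sketch with python-memcached.
import memcache

mc = memcache.Client(["127.0.0.1:11211"])  # wherever memcached runs

def db_query(report_id):
    # Placeholder for the real MS SQL query.
    return {"id": report_id, "rows": []}

def get_report(report_id):
    key = "report:%d" % report_id
    result = mc.get(key)              # 1. try the cache first
    if result is None:
        result = db_query(report_id)  # 2. miss: hit the database
        mc.set(key, result, time=300) # 3. repopulate, 5-minute TTL
    return result

With that pattern, running memcached on the web servers is a common choice since their spare RAM is otherwise idle; a dedicated cache box tends to pay off only once the working set outgrows them.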
Since my Linux server is out of disk space, I just use CIFS to mount some drives from the Windows server for people to download files (usually 100-200MB/file).
But I found the performance is not good. For example, I have to wait a long time before the download process begins, and the load average of the server becomes high too.
Is there any suggestion? Should I mount the Windows drives through CIFS at all? Or should I change to another server that allows me to add more local hard disks? And if I mount drives from another Linux machine instead, will the performance be better?
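One way to narrow it down: measure raw sequential read throughput on the CIFS mount versus local disk; if the mount itself is slow, downloads will stall no matter what the web server does. A rough sketch (the file paths are placeholders):

Code:
# Rough sequential-read throughput test; compare a file on the CIFS
# mount against one on local disk. Paths are illustrative placeholders.
import time

def read_throughput(path, chunk=1024 * 1024):
    start = time.time()
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.time() - start
    return total / elapsed / (1024 * 1024)  # MB/s

print("CIFS :", read_throughput("/mnt/windows/test.bin"), "MB/s")
print("local:", read_throughput("/var/tmp/test.bin"), "MB/s")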
I have a Windows 2000 Advanced Server where there's a performance issue with some of the .asp pages that retrieve data from Access databases (I know Access databases aren't ideal for this). These pages will just get stuck/freeze, and then either suddenly spring back to life or give a script timeout error 0113.
The largest Access database I've seen is 136MB (is that way too large?)
I will probably move some of the large Access databases onto a different server but before I do:
- Are there any tools you can recommend to diagnose exactly which files/databases are causing the problem? I don't think the Win 2000 performance monitor tools even work.
- Can anyone explain more about the technicalities behind this issue? I expect it has something to do with processes, threads, memory, Access drivers being loaded into memory, etc. Can anyone tell me what they know to put me in the picture?
I noticed that all my client websites were as slow as snail snot before I uninstalled Perl and Python.
Is it normal for Perl and Python components to slow down IIS?
I searched online, and it seems to be a common scenario for those two to slow down IIS.
After I uninstalled Perl and Python, as well as AWStats, the sites are working as fast as before.
I am using Dreamhost to host 3 of my websites and 1 blog. Dreamhost is great and offers a lot of space and bandwidth.
But I think they are overselling their space; sometimes it gets really slow. (Overselling? OK, I don't really know, but sometimes it's really slow, and most of my Asian readers said they need to refresh to load the page. I am wondering if there's a way to check whether they are overselling or not.)
I am thinking about buying a VPS, even though I still have 5 months left with Dreamhost.
I found 2 VPS companies that are highly recommended on this forum: JaguarPC and LiquidWeb.
There's already a post comparing both companies in terms of price and service. I'd say I will pick JaguarPC, because its basic plan is just 20 USD, and they have a promotion now, so it's even cheaper, while the basic LiquidWeb VPS plan is 60 bucks.
I am wondering why JaguarPC is so cheap. Are they overselling? How can we check if they are overselling?
I found a few posts saying how good JaguarPC is and that they are not overselling, but those members just signed up this month and only have 1-3 posts. I cannot really trust those new members.
Can someone share their experience with JaguarPC, and compare JaguarPC performance with LiquidWeb performance? Another question: if I switch from Dreamhost to the JaguarPC basic VPS plan, will performance get better?
Last question: the VPS account allows 3 IPs. Does 3 IPs = 3 domains? If not, how many domains can I have?
We run a very busy web application written in .NET. The backend is SQL 2005. The server running SQL for this web app is slammed constantly: the CPU is redlined, and the disks are queuing up because they can't keep up with the demand. What I am wondering is, what do the big websites do to gain performance? What direction should we start moving in to get ahead of the curve? We are using an HP DL580 with 4 quad-core Xeons and the fastest SAS drives we could get.
Any rumors known already?
Does anyone have experience using LVM2? We'd rely on hardware RAID mirroring for the underlying physical redundancy, but we're very interested in LVM2's storage virtualization features.
If anyone can share their experiences with LVM2 with regard to performance, and possibly its use in a SAN environment, that would be appreciated.
Hypothetical Scenario:
Let's say I've got a single website built in Drupal (using PHP and MySQL). It gets fewer than 1,000 visits per day and needs very little storage or bandwidth. The site is currently on a shared host and it runs okay, but it often has very slow page loads due to sluggish MySQL calls and other traffic on the server. Sometimes the homepage loads in 2s, but other times it takes 20-30s, depending on the time of day. The client is sick of this performance on such a low-traffic site and wants to improve the situation.
Question: Will a VPS really provide that much better performance than a shared host?
Remember I'm talking ONLY about page load time under minimal load. No need to take into account scaling or Digg/Slashdot effects.
I know dedicated is the best option but it seems crazy for such a low traffic site. A lot of the VPS offers are very attractive in theory (managed and affordable) but in practice I'm concerned that even a 512MB VPS with 1GB burst won't make much of a performance difference.
Mainly I don't want to go to the hassle and extra monthly expense of moving everything to a VPS for only a minimal gain.
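Before paying for the move, it may be worth quantifying the variance; even a crude script that times the homepage across the day makes the 2s-vs-30s swing measurable. A sketch (the URL and sampling interval are placeholders):

Code:
# Log homepage load times at intervals to quantify the 2s-vs-30s
# variance. URL, sample count, and interval are placeholders.
import time
import urllib.request

URL = "http://example.com/"

for _ in range(12):  # one hour at 5-minute intervals
    start = time.time()
    urllib.request.urlopen(URL).read()
    print("%s  %.2fs" % (time.strftime("%H:%M:%S"), time.time() - start))
    time.sleep(300)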
We shifted one website based on Article Dashboard (it's an article directory script coded in Zend) from a shared hosting account with HostGator to a GoDaddy VPS ($35 per month).
This VPS is really slow compared to the HostGator account.
Can anyone tell us what we should do?
I'm planning on buying a NAS from my provider to use as a backend for my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved without interruption between different nodes.
The server I have looked at so far is the following:
CPU: Xeon 3330 2.67GHz
RAM: 4GB DDR2
HDD: 8*Barracuda 7200.12 1000GB, 7200rpm, 32MB, SATA-II
RAID: 3Ware 9650SE
Network: Intel 2*1Gbit
Would it be enough to fill the Gbit line?
The budget is pretty tight, so if it's possible to do this with SATA drives it would be great; otherwise it could be a possibility to go down in disk space and switch the SATA drives to SCSI/SAS drives.
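A back-of-the-envelope check suggests sequential throughput is not the problem; random I/O from ~15 VPSes is the harder question. The per-drive figure below is a rough assumption for 7200rpm SATA, not a measurement:

Code:
# Back-of-the-envelope throughput check; per-drive sequential speed
# is an assumed figure for 7200rpm SATA, not a measurement.
GBIT_LINE_MBS = 1000 / 8 * 0.95   # ~119 MB/s after framing overhead
PER_DRIVE_SEQ_MBS = 90            # assumed sequential MB/s per drive
STRIPE_DRIVES = 4                 # data-bearing drives in an 8-disk RAID 10

array_seq = PER_DRIVE_SEQ_MBS * STRIPE_DRIVES
print("Line  : %.0f MB/s" % GBIT_LINE_MBS)
print("Array : %.0f MB/s sequential (estimate)" % array_seq)
print("Fills the line:", array_seq >= GBIT_LINE_MBS)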
We are getting into VPS hosting and wanted to get some opinions and feedback, as we're quite unsure what to expect in terms of performance and how many clients we can generally keep on a box.
For now we've bought 3 dell R710 with dual Xeon L5520, 72GB ram and 8 x 2.5" SAS drives.
We are thinking of a base offering of 512MB of RAM, and we were hoping to get about 40-50 onto a server.
With 40 there should be plenty of free RAM and plenty of drive cache.
Then a next offering of 1GB of RAM, and the next one of 2GB.
Even if we do the biggest 2GB offering with 25 on a server, we should have free RAM to spare.
The software would be Virtuozzo.
Any thoughts on this, am I expecting too much, or am I being fairly realistic?
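The arithmetic does seem to back you up; a quick sketch (the host-OS reserve and the 1GB-plan count are assumptions, the other figures are from your post):

Code:
# RAM budget sketch for one 72GB node; the host reserve and the
# 1GB-plan count are assumptions for illustration.
TOTAL_GB = 72
HOST_RESERVE_GB = 4  # assumed overhead for the host OS + Virtuozzo

for plan_mb, count in [(512, 50), (1024, 40), (2048, 25)]:
    used = plan_mb * count / 1024.0
    free = TOTAL_GB - HOST_RESERVE_GB - used
    print("%4d MB x %2d -> %5.1f GB sold, %5.1f GB left for cache"
          % (plan_mb, count, used, free))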
I have been working with Xen over the last week or so, and I can't figure out why the performance is degraded so much when booting into Xen. There are certain things that seem just as fast, but other things just don't seem normal.
I have tried this on two different quad-core systems, one new-generation (York) with CentOS 5 and one old (Kent) with Debian Lenny, but neither seems to produce good speeds.
For example, when I use the default kernels I can usually get a score of about ~600 out of unixbench-wht, and things such as top and core system processes show up as 0% CPU when running top.
When I boot into the Xen kernel, however, whether it be from Dom0 or the guest OS, top uses about 3% CPU and unixbench-wht produces scores under 250.
I have set vcpus to 4 and have even tried vcpu-pin 0 0, 1 1, 2 2, 3 3, but nothing seems to change anything. The disk speeds seem about the same (hdparm). I'm assuming it is something with the CPU.
I have to leave the Supermicro servers and use only Dell, and I have this question:
Is there a big difference in performance between these two RAID configurations?
Dell - 2 x 1TB RAID1 PERC6
Supermicro - 4 x 500GB RAID10 3ware 4 port
It is for use with webhosting.
I need some advice on my situation at my host, and possibly some frame of reference as to what can/should be expected from a VPS setup like mine and what I can expect it to manage.
I have a site that sees traffic of about 150k pageviews per day. On any given day, it peaks for roughly a 4-hour span during which there may be about 5 req/s.
I use a standard setup (LAMP) running mod_php in Apache, not FastCGI. I have a VPS on Virtuozzo's Power Panel that has 1.5GB RAM and really an unknown amount of CPU. I haven't been able to ascertain that information, but probably could if I asked my host.
The problem is that during these hours it gets a bit slow from time to time. Running top sometimes shows a staggering number of waiting processes, i.e. the load is quite high (15-25).
So I'm now really at a fork in the road. I could start looking into a different setup, say Nginx + PHP-FPM (FastCGI), and see if that makes a difference; I'm not really an admin, so I would be kind of lost on that. Or I could start looking into my code to see if I can cache more or do smarter things.
However, before doing any of the above, I'd like to ask this crowd here if you think that I've sort of hit the roof on what can be expected from a VPS of the size I just told you about. That my situation is quite normal and that the real solution is actually just to upgrade my VPS. Is it?
Let's assume that we (me and the people I'm working with) were to launch a really powerful website, and then all of a sudden there is more demand for the website than the backend infrastructure can handle.
What do we do?
- 1000 users (OK, so one powerful server should be enough).
- 2000 users (let's set up an additional server to act as the HTTP server while the powerful server acts as the database only).
- 3000 users (let's eliminate all the commercial Linux programs and install a fresh version of Linux on both boxes, compiling only the programs we need).
- 5000 (let's set up another server that handles the sessions).
- 6000 (let's set up a static-only server to deliver the non-dynamic content).
- 7000 (let's do some caching ... ugh, maybe it won't be enough).
Any greater, and then what? We've run out of ideas on how to separate the code logic and how to optimize every byte of data on the website! What do we do? We can buy more servers, but how do we balance the load?
This is where I'm stuck. In the past I've separated the load in a modular sense (one server does this and one server does that), but eventually I'll come across a wall.
Can someone explain how clustering works? What I want to know is how the information, whether it be the server-side code or the static content, is shared across machines. Is it worth it anymore to learn these things, or is it better just to host with a scalable hosting solution like AWS?
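On the clustering question, a common shape is stateless web nodes behind a balancer, with code, sessions, and content in shared storage so any node can serve any request. A toy round-robin sketch just to show the idea (addresses are illustrative; real deployments use LVS, HAProxy, nginx, or hardware balancers):

Code:
# Toy round-robin balancer illustrating how requests spread across
# identical stateless nodes; sessions/content live in shared storage
# (e.g. memcached or a database) so any node can serve any request.
import itertools

BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # illustrative nodes
rotation = itertools.cycle(BACKENDS)

def pick_backend():
    # Each request goes to the next node in the rotation.
    return next(rotation)

for i in range(6):
    print("request %d -> %s" % (i, pick_backend()))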
How much faster is a 10k rpm HDD vs a 7200 rpm HDD in a server environment?
IMO, a 7200 rpm HDD is much faster than a 5400 rpm HDD when it comes to desktop PCs.
Just wondering if it's worth upgrading to a 10k rpm HDD from a 7200 rpm HDD and losing about 1TB of storage as well...
(Comparing specifically two 750GB 16MB-cache 7200rpm SATA II HDDs in RAID 1 with two 150GB 16MB-cache 10k rpm HDDs in RAID 1.)
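For random access the rotational math alone is straightforward; a sketch (the seek times are typical published figures, assumed here for illustration):

Code:
# Average rotational latency = half a revolution. Seek times are
# typical vendor figures, assumed for illustration.
def avg_latency_ms(rpm, seek_ms):
    rotational = 0.5 * 60000.0 / rpm  # half a revolution, in ms
    return rotational + seek_ms

for rpm, seek in [(5400, 12.0), (7200, 8.5), (10000, 4.5)]:
    print("%5d rpm: ~%.1f ms per random access"
          % (rpm, avg_latency_ms(rpm, seek)))

Under those assumptions a 10k drive spends roughly 40% less time per random access than a 7200rpm drive, which matters most for seek-heavy server workloads; sequential throughput differs much less.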
From a disk I/O performance standpoint, is it better:
1) to have main PHP file with 10 includes
2) all 11 files as one file
3) the difference is not big
Suppose
a) a low traffic site
b) a high traffic site
I have several VPS's that I run. Some run LAMP, others RoR, and my latest runs with Nginx + Cherrypy (python).
To be honest, I've never run any benchmarks to see how well the servers perform under stress. But I'd like to start.
Are there any good (free) programs out there that will stress-test my web servers? I develop on Windows but deploy on Linux, so either platform is OK. I'm most interested in how many concurrent connections can be maintained.
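Before reaching for a full tool, even a small script that fires concurrent requests and counts failures gives a feel for where things fall over. A crude sketch (the URL and counts are placeholders; dedicated tools also report latency percentiles):

Code:
# Crude concurrency probe: N threads each fetch the URL repeatedly
# and report successes/failures. URL and counts are placeholders.
import threading
import urllib.request

URL = "http://example.com/"
THREADS, REQUESTS_EACH = 50, 20
ok, failed, lock = 0, 0, threading.Lock()

def worker():
    global ok, failed
    for _ in range(REQUESTS_EACH):
        try:
            urllib.request.urlopen(URL, timeout=10).read()
            with lock:
                ok += 1
        except Exception:
            with lock:
                failed += 1

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("ok=%d failed=%d" % (ok, failed))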
I currently have a VPS in the UK that I host my clients' Joomla sites off of, and the specs of this VPS are as below:
- 20 GB SA-SCSI Disk Space
- 350GB bandwidth
- Full root access / Parallels/WHM/cPanel
- 2 Dedicated IPs
- 384 MB SLM RAM
I am now running around 10 Joomla-based sites off of this VPS, 5-6 of which are ecommerce sites. While I am probably only using 10GB of the overall disk space so far, in terms of performance, should I continue to add clients to this server, or should I keep the more resource-hungry sites on this server and move some of the less resource-intensive non-ecommerce sites to another VPS? Or would it be in my best interest to upgrade to a dedicated server where I will have all my own resources?
I’m moving my web server from the US to the UK.
Would I be roughly right in assuming that an American customer accessing a UK server will see similar speeds to what I have been getting as a UK customer accessing the same site on a US server?
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card and you already have a 6-drive RAID 5 array, and then you add, say, 18 more HDDs and make them another RAID 5 array? Does the performance stay the same or decrease?
The reason you would have different RAID arrays is that if you were to buy an 8U chassis, you would want it as an investment: you avoid buying smaller cases, eliminating the money spent on a new motherboard/CPU/RAM for each system, and you can add hard drives whenever you can and RAID them.
I am currently working with an internet radio station. It is currently listed in iTunes, and we are pushing about 90Mbps from an Ecatel server during the day. We are expanding, looking to pick up more capacity, and considering doing geolocation for generating playlists so listeners would get the closest relay to them. Staminus has excellent pricing on unmetered connections, so we were looking into them as a US provider.
I have searched the forum and haven't found many reviews of their unmetered connections; most reviews are about the DDoS protection. Does anyone have any recent experience with the unmetered connections they have been offering at great prices?
A couple sources with RAID performance numbers:
[url]
[url]
RAID 0 is the fastest by far, excluding RAID 10 in the f2 (far) layout, which is significantly faster than standard RAID 10.
Do these numbers match up with your experience?
I haven't been able to find any dedicated servers with RAID 10 F2, so this doesn't seem to be a viable option.
My server load normally maxes out at 1; it won't cross more than 1. But for the past 2 days I have been getting 20 or more. This load lasts for only 1 or 2 minutes, after which it returns to normal, around 0.58 to 1. In top I can see lots of Apache processes when the load increases.
Does anyone have any experience running Juniper SSG-550 firewalls in a high-traffic hosting environment?
I run network operations for a hosting provider in Australia. We currently have two J4350s running as border routers, and we are looking at putting two Juniper SSG-550s behind the border routers to do stateful firewalling / NAT.
We'll be using active/active NSRP on the SSGs for load balancing and failover.
My concern is that these devices may not be able to handle our traffic load. They have a hard-set limit of 256,000 concurrent sessions, which may not be enough for us at peak times. Almost all of our traffic is HTTP, though, so I would imagine sessions would time out quite quickly?
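A rough steady-state check: concurrent sessions ≈ new sessions per second × average session lifetime, so the 256,000 ceiling translates into a rate-times-timeout budget. The figures below are illustrative assumptions, not your traffic:

Code:
# Steady state: concurrent sessions ~= new sessions/sec * avg lifetime.
# Rate and timeout figures are illustrative assumptions.
SESSION_LIMIT = 256000

for new_per_sec in (2000, 5000, 10000):
    for timeout_s in (30, 60, 120):
        steady = new_per_sec * timeout_s
        verdict = "OK" if steady < SESSION_LIMIT else "OVER"
        print("%6d/s x %3ds -> %9d sessions  %s"
              % (new_per_sec, timeout_s, steady, verdict))

Which suggests that with mostly short-lived HTTP sessions, aggressive session timeouts on the SSGs would buy a lot of headroom.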
On a normal shared hosting server, what kind of performance gain can you see using SAS drives instead of SATA II drives in RAID 1?
I've tried asking on the xen-users mailing list but haven't received much response, so I'm asking here.
I'm running Xen 3.1 with CentOS 5 64-bit on a Dell 2950 with 2 x 2.33GHz quad-core CPUs. This is a very powerful system. However, when running Xen the performance drop is huge. The strange thing is, on the mailing list others were reporting much lower levels of performance loss. (Just to be clear, I'm using the XenSource-compiled kernel, etc.)
Without Xen running, my UnixBench results aren't too bad.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 52116444.7 1383.2
Double-Precision Whetstone 83.1 2612.0 314.3
Execl Throughput 188.3 11429.1 607.0
File Copy 1024 bufsize 2000 maxblocks 2672.0 155443.0 581.7
File Copy 256 bufsize 500 maxblocks 1077.0 37493.0 348.1
File Read 4096 bufsize 8000 maxblocks 15382.0 1475439.0 959.2
Pipe-based Context Switching 15448.6 548465.7 355.0
Pipe Throughput 111814.6 3313637.0 296.4
Process Creation 569.3 34050.6 598.1
Shell Scripts (8 concurrent) 44.8 3566.8 796.2
System Call Overhead 114433.5 2756155.3 240.9
=========
FINAL SCORE 510.9
However, once I boot into Xen, the Dom0 performance drops a lot.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 50864253.7 1350.0
Double-Precision Whetstone 83.1 2617.9 315.0
Execl Throughput 188.3 2786.5 148.0
File Copy 1024 bufsize 2000 maxblocks 2672.0 159749.0 597.9
File Copy 256 bufsize 500 maxblocks 1077.0 44884.0 416.8
File Read 4096 bufsize 8000 maxblocks 15382.0 1191772.0 774.8
Pipe-based Context Switching 15448.6 306121.8 198.2
Pipe Throughput 111814.6 1417645.2 126.8
Process Creation 569.3 4699.2 82.5
Shell Scripts (8 concurrent) 44.8 781.6 174.5
System Call Overhead 114433.5 1021813.7 89.3
=========
FINAL SCORE 261.6
Now, here is where it gets weird. The only running DomU, which is a paravirtualized CentOS 5 guest, gets a higher score than Dom0.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 38015133.3 1008.9
Double-Precision Whetstone 83.1 2023.4 243.5
Execl Throughput 188.3 3877.4 205.9
File Copy 1024 bufsize 2000 maxblocks 2672.0 270737.0 1013.2
File Copy 256 bufsize 500 maxblocks 1077.0 78470.0 728.6
File Read 4096 bufsize 8000 maxblocks 15382.0 1227115.0 797.8
Pipe Throughput 111814.6 1383157.5 123.7
Pipe-based Context Switching 15448.6 310378.3 200.9
Process Creation 569.3 7534.8 132.4
Shell Scripts (8 concurrent) 44.8 1179.6 263.3
System Call Overhead 114433.5 1056362.3 92.3
=========
FINAL SCORE 308.2
Any ideas why the performance is so low? Perhaps any tips on boosting performance?