PHP 6 Performance
May 12, 2008
Any rumors known already?
I am using Dreamhost to host 3 of my web sites and 1 blog. Dreamhost is great, offers a lot of space and bandwidth.
But I think they are overselling their space, sometimes it gets really slow. (Overselling? OK, I don't really know, but sometimes it's really slow, and most of my Asian readers said they need to refresh to load the page. I am wondering if there's a way to check whether they are overselling or not.)
I am thinking about buying a VPS, even though I still have 5 months left with Dreamhost.
I found 2 VPS companies that are highly recommended on this forum, JaguarPC and LiquidWeb.
There's already a post comparing both companies in terms of price and service. I'd say I will pick JaguarPC, because its basic plan is just 20 USD, and they have a promotion now, so it's even cheaper, while the basic LiquidWeb VPS plan is 60 bucks.
I am wondering why JaguarPC is so cheap. Are they overselling? How can we check if they are overselling?
I found a few posts saying how good JaguarPC is and that they are not overselling, but those members just signed up this month and only have 1-3 posts. I cannot really trust those new members.
Can someone share their experience with JaguarPC, and compare JaguarPC performance with LiquidWeb performance? Another question: if I switch from Dreamhost to the JaguarPC basic VPS plan, will performance get better?
Last question: a VPS account allows 3 IPs. Does 3 IPs = 3 domains? If not, how many domains can I have?
We run a very busy web application written in .NET. The backend is SQL Server 2005. The server running SQL for this web app is slammed constantly. CPU is redlined, and the disks are queuing up because they can't keep up with the demand. What I am wondering is: what do the big websites do to gain performance? What direction should we start moving in to get ahead of the curve? We are using an HP DL580 with 4 x quad core Xeons and the fastest SAS drives we could get.
Does anyone have experience using LVM2? We'd rely on hardware RAID mirroring for the underlying physical redundancy, but we're very interested in LVM2's storage virtualization features.
If anyone can share their experiences with LVM2 with regard to performance, and possibly its use in a SAN environment, I'd appreciate it.
Hypothetical Scenario:
Let's say I've got a single website built in Drupal (using PHP and MySQL). It gets less than 1,000 visits per day and needs very little storage or bandwidth. The site is currently on a shared host and it runs okay, but often has very slow page loads due to sluggish MySQL calls and other traffic on the server. Sometimes the homepage loads in 2s but other times it takes 20-30s depending on the time of day. The client is sick of this performance on such a low-traffic site and wants to improve the situation.
Question: Will a VPS really provide that much better performance than a shared host?
Remember I'm talking ONLY about page load time under minimal load. No need to take into account scaling or Digg/Slashdot effects.
I know dedicated is the best option but it seems crazy for such a low traffic site. A lot of the VPS offers are very attractive in theory (managed and affordable) but in practice I'm concerned that even a 512MB VPS with 1GB burst won't make much of a performance difference.
Mainly I don't want to go to the hassle and extra monthly expense of moving everything to a VPS for only a minimal gain.
We shifted one website based on Article Dashboard (it's an article directory script coded in Zend) to a GoDaddy VPS ($35 per month) from a shared hosting account with HostGator.
This VPS is really slow compared to the HostGator account.
Can anyone tell us what we should do?
I'm planning on buying a NAS from my provider for use as a backend to my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved without interruption between different nodes.
The server I have looked at so far is the following:
CPU: Xeon 3330, 2.67GHz
RAM: 4GB DDR2
HDD: 8*Barracuda 7200.12 1000GB, 7200rpm, 32MB, SATA-II
RAID: 3Ware 9650SE
Network: Intel 2*1Gbit
Would it be enough to fill the Gbit line?
The budget is pretty tight, so if it's possible to do this with SATA drives it would be great; otherwise it could be a possibility to go down in disk space and switch the SATA drives to SCSI/SAS drives.
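As a rough back-of-envelope, using typical figures rather than numbers measured on this exact hardware: a single gigabit link tops out around 125 MB/s, and one 7200 rpm SATA drive can usually stream 80-100 MB/s sequentially, so 8 of them behind the 9650SE should saturate a Gbit line on sequential transfers without trouble. The real constraint is random I/O: 15 VPS images generate mostly small random reads and writes, and at roughly 75-100 IOPS per 7200 rpm spindle the array offers on the order of 600-800 raw IOPS (less after RAID write penalties), which is where it is far more likely to run out of steam than on raw bandwidth.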
We are getting into VPS hosting and wanted to get some opinions and feedback, as we're quite unsure what to expect in terms of performance and how many clients we can generally keep on a box.
For now we've bought 3 Dell R710s with dual Xeon L5520, 72GB RAM and 8 x 2.5" SAS drives.
We are thinking of a base offering of 512 megabytes of RAM and were hoping to get about 40-50 onto a server.
With 40 there should be plenty of free RAM and plenty of drive cache.
Then a next offering of 1 gig of RAM, and the next one of 2 gigs.
Even if we do the biggest 2 gig offering with 25 on a server, we should have free RAM to spare.
The software would be Virtuozzo.
Any thoughts on this, am I expecting too much, or am I being fairly realistic?
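Rough arithmetic on the RAM side, just using the plan sizes quoted above: 40 x 512 MB = 20 GB and 50 x 512 MB = 25 GB committed against 72 GB physical, leaving roughly 45-50 GB for burst, Virtuozzo overhead and disk cache; at the other end, 25 x 2 GB = 50 GB still leaves about 20 GB of headroom. On paper the memory works out, so in practice the limit is more likely to be disk I/O on the 8 SAS spindles than RAM.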
I have been working with Xen over the last week or so and I can't figure out why performance degrades so much when booting into Xen. There are certain things that seem just as fast, but other things just don't seem normal.
I have tried this on two different quad-core systems, one new generation (York) with CentOS5 and one old (Kent) with Debian Lenny but neither seem to produce good speeds.
For example, when I use the default kernels I can usually get a score of about 600 out of unixbench-wht, and things such as top and the core system processes show up as 0% CPU when running top.
When I boot into the Xen kernel however, whether it be from Dom0 or the guest OS, top uses about 3% CPU and unixbench-wht produces scores under 250.
I have set vcpus to 4 and have even tried vcpu-pin 0 0, 1 1, 2 2, 3 3, but nothing seems to change anything. The disk speeds seem about the same (hdparm). I'm assuming it is something with the CPU.
I have to leave the Supermicro servers and use only Dell. I have this question.
Is there a big difference in performance between these two RAID configurations?
Dell - 2 x 1TB RAID1 PERC6
Supermicro - 4 x 500GB RAID10 3ware 4 port
It is for use with webhosting.
I need some advice on my situation at my host, and possibly some frame of reference as to what can/should be expected from a VPS setup like mine and what I can expect it to manage.
I have a site that sees traffic of about 150k pageviews per day. On any given day, it peaks for roughly a 4-hour span where there may be about 5 req/s.
I use a standard setup (LAMP) running mod_php in Apache, not FastCGI. I have a VPS on Virtuozzo's Power Panel that has 1.5 GB RAM and really an unknown amount of CPU. I haven't been able to ascertain that information, but probably could if I asked my host.
The problem is that during these hours it gets a bit slow from time to time. Running top sometimes shows a staggering number of waiting processes, i.e. the load is quite high (15-25).
So, I'm now really at a fork in the road where I either start looking into going with a different setup, say Nginx + PHP-FPM (FCGI) and try to see if that makes a difference. I'm not really an admin so I would be kind of lost on that. I could also start looking into my code to see if I can cache more or do smarter stuff etc.
However, before doing any of the above, I'd like to ask this crowd here if you think that I've sort of hit the roof on what can be expected from a VPS of the size I just told you about. That my situation is quite normal and that the real solution is actually just to upgrade my VPS. Is it?
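For what it's worth, "cache more" doesn't have to mean a whole new stack. Below is a minimal sketch of a file-based full-page cache in plain PHP; the cache path and the 300-second TTL are placeholders, and a real site would want per-URL cache keys and an invalidation rule:
Code:
<?php
// Minimal full-page cache sketch. Cache file path and TTL are placeholders.
$cache_file = '/tmp/cache_homepage.html';
$ttl = 300; // seconds a cached copy stays valid

if (is_file($cache_file) && (time() - filemtime($cache_file)) < $ttl) {
    readfile($cache_file); // cache hit: skip PHP/MySQL work entirely
    exit;
}

ob_start(); // cache miss: render the page as usual, but capture the output

// ... the existing page code (MySQL queries, templating, etc.) runs here ...
echo "<html><body>expensive page output goes here</body></html>";

$html = ob_get_flush(); // send the page to the visitor and keep a copy
file_put_contents($cache_file, $html, LOCK_EX); // store it for the next request

Even a short TTL like this means most of the ~5 req/s peak gets served from the cached copy instead of hitting PHP and MySQL on every request.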
Let's assume that we (me and the people I'm working with) were to launch a really powerful website. Then all of a sudden there is more demand for the website than the backend infrastructure can handle.
What do we do?
- 1000 users (OK, so one powerful server should be enough).
- 2000 users (let's set up an additional server to act as the HTTP server while the powerful server acts as the database only).
- 3000 users (let's eliminate all the commercial Linux programs and install a fresh version of Linux on both boxes and compile only the programs we need).
- 5000 (let's set up another server that handles the sessions).
- 6000 (let's set up a static-only server to deliver the non-dynamic content).
- 7000 (let's do some caching ... ugh, maybe it won't be enough).
Anything greater and then what? We've run out of ideas on how to separate the code logic and how to optimize every byte of data on the website! What do we do? We can buy more servers, but how do we balance the load?
This is where I'm stuck. In the past I've separated the load in a modular sense (one server does this and one server does that), but eventually I'll come up against a wall.
How does clustering work? What I want to know is how the information, whether it be the server-side code or the static information, is shared across machines. Is it still worth learning these things, or is it better just to host with a scalable hosting solution like AWS?
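On the "how is information shared across machines" part, the usual first step is to move shared state (sessions, cached data) off the individual web servers into a common store, so a load balancer can send any request to any box. A minimal sketch, assuming the PECL memcache extension is installed and 10.0.0.5 is a placeholder for a box running memcached:
Code:
<?php
// Sketch: store PHP sessions in a shared memcached server instead of local
// files, so every web node behind the load balancer sees the same sessions.
// Assumes the PECL "memcache" extension; 10.0.0.5:11211 is a placeholder.
ini_set('session.save_handler', 'memcache');
ini_set('session.save_path', 'tcp://10.0.0.5:11211');

session_start();

if (!isset($_SESSION['views'])) {
    $_SESSION['views'] = 0;
}
$_SESSION['views']++; // keeps counting even if each request lands on a different web server
echo "Views this session: " . $_SESSION['views'];

Code is usually just deployed to every node (rsync, svn export, etc.), and static files either go to a dedicated static server or sit on shared storage such as NFS.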
How much faster is a 10k rpm HDD vs a 7200 rpm HDD in a server environment?
IMO, a 7200 rpm HDD is much faster than a 5400 rpm HDD when it comes to desktop PCs.
Just wondering if it's worth upgrading to a 10k rpm HDD from a 7200 rpm HDD and losing about 1TB of storage as well...
(Comparing specifically 2 x 750GB 16MB cache 7200rpm SATA II HDDs in RAID 1 with 2 x 150GB 16MB cache 10k rpm HDDs in RAID 1.)
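As a rough worked comparison, ignoring seek times and controller differences: average rotational latency is half a revolution, so a 7200 rpm disk waits about 60000 / 7200 / 2 = 4.2 ms on average, while a 10k rpm disk waits about 60000 / 10000 / 2 = 3.0 ms, and 10k drives typically also seek faster. For server workloads dominated by small random I/O that usually means noticeably more IOPS per spindle, but it is an incremental gain rather than a leap, so whether it justifies giving up roughly 600GB per drive depends on how random-I/O-bound the box really is.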
From a disk I/O performance standpoint, is it better:
1) to have a main PHP file with 10 includes
2) to have all 11 files as one file
3) or is the difference not big?
Suppose
a) a low traffic site
b) a high traffic site
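If you'd rather measure than guess, a quick microbenchmark along these lines can settle it on your own disks. part0.php through part9.php and combined.php are placeholder files you would create for the test (plain statements only, no function definitions, so repeated include() is safe), and note that with an opcode cache such as APC the gap largely disappears anyway:
Code:
<?php
// Microbenchmark sketch: compare the cost of 10 small includes vs 1 combined
// file over many iterations. part0.php..part9.php and combined.php are
// placeholder test files, not real application code.
$iterations = 1000;

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    for ($f = 0; $f < 10; $f++) {
        include 'part' . $f . '.php';
    }
}
$split = microtime(true) - $start;

$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    include 'combined.php';
}
$single = microtime(true) - $start;

printf("10 includes: %.4f s, single file: %.4f s\n", $split, $single);

In practice the OS file cache means neither layout keeps hitting the disk after the first few requests, on a low or high traffic site alike, so the measurable difference is mostly extra stat() and compile overhead rather than raw disk I/O.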
I have several VPSes that I run. Some run LAMP, others RoR, and my latest runs Nginx + CherryPy (Python).
To be honest, I've never run any benchmarks to see how well the servers performed under stress. But I'd like to start.
Are there any good (free) programs out there that will stress test my web servers? I develop on Windows but deploy on Linux, so either platform is OK. I'm most interested in how many concurrent connections can be maintained.
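ab (ApacheBench, ships with Apache), siege and httperf are the usual free answers, and JMeter runs fine on Windows as well. If you want to script something yourself, here is a minimal sketch of the kind of concurrency probe those tools automate, using PHP's curl_multi; the URL and the connection count are placeholders:
Code:
<?php
// Minimal concurrency probe sketch: fire $concurrency requests at once with
// curl_multi and report how long the whole batch takes. URL/count are placeholders.
$url = 'http://localhost/';
$concurrency = 50;

$mh = curl_multi_init();
$handles = array();
for ($i = 0; $i < $concurrency; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

$start = microtime(true);
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh, 0.1); // wait for activity instead of busy-looping
} while ($running > 0);
$elapsed = microtime(true) - $start;

$ok = 0;
foreach ($handles as $ch) {
    if (curl_getinfo($ch, CURLINFO_HTTP_CODE) == 200) {
        $ok++;
    }
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);

printf("%d/%d requests OK in %.2f s\n", $ok, $concurrency, $elapsed);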
I currently have a VPS in the UK that I host my clients' Joomla sites off, and the specs of this VPS server are as below:
- 20 GB SA-SCSI Disk Space
- 350GB bandwidth
- Full root access / Parallels/WHM/cPanel
- 2 Dedicated IPs
- 384 MB SLM RAM
I am now running around 10 Joomla based sites off of this VPS, 5-6 of which are ecommerce sites. Whilst I am probably only using 10GB of the overall disk space so far, in terms of performance, should I continue to add clients to this server, or should I keep the more hungry sites on this server and move some of the less resource intensive non-ecommerce sites to another VPS? Or would it be in my best interest to upgrade to a dedicated server where I will have all my own resources?
I’m moving my web server from the US to the UK.
Would I be roughly right in assuming that an American customer accessing a UK server will see similar speeds to what I have been getting as a UK customer accessing the same site on a US server?
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card and you already have a 6-drive RAID 5 array, and then you add, say, another 18 HDDs and make them a second RAID 5 array? Does the performance stay the same or decrease?
The reason you would have different RAID arrays is that if you were to buy an 8U chassis, you would want it as an investment: it avoids buying smaller cases and spending money on a new motherboard/CPU/RAM for each system, and lets you add hard drives whenever you can and RAID them.
I am currently working with an internet radio station. It is currently listed in iTunes and we are pushing about 90Mbps from an Ecatel server during the day. We are expanding, are looking to pick up more capacity, and were considering doing geolocation for generating playlists so listeners would get the closest relay to them. Staminus has excellent pricing on unmetered connections, so we were looking into them as a US provider.
I have searched the forum and haven't found many reviews on their unmetered connections, more on the DDoS protection. Does anyone have any recent experiences with the unmetered connections they have been offering at great prices?
A couple sources with RAID performance numbers:
[url]
[url]
RAID 0 is the fastest by far, excluding RAID 10 in the f2 (far) layout, which is significantly faster than standard RAID 10.
Do these numbers match up with your experience?
I haven't been able to find any dedicated servers with RAID 10 F2, so this doesn't seem to be a viable option.
My server load is normally 1 at most, it won't go over 1, but for the last 2 days I have been getting 20 or more. This load only lasts for 1 or 2 minutes, after which it returns to normal, around 0.58 to 1. In top I can see lots of Apache processes when the load increases.
Does anyone have any experience running Juniper SSG-550 firewalls in a high-traffic hosting environment?
I run network operations for a hosting provider in Australia. We currently have two J4350s running as border routers, and we are looking at putting two Juniper SSG-550s behind the border routers to do stateful firewalling / NAT.
We'll be using active/active NSRP on the SSGs for load balancing and failover.
My concern is that these devices may not be able to handle our traffic load. They have a hard-set limit of 256,000 "concurrent sessions", which may not be enough for us at peak times. Almost all of our traffic is HTTP though, so I would imagine sessions would time out quite quickly?
On a normal shared hosting server, what kind of performance gains can you see using a SAS drive instead of a SATA II drive in RAID 1?
I've tried asking on the xen-users mailing list, but haven't received much response, so I'm asking here.
I'm running Xen 3.1 with CentOS 5 64-bit on a Dell 2950 with 2 x 2.33GHz quad-core CPUs. This should be (and is) a very powerful system. However, when running Xen the performance drop is huge. The strange thing is, on the mailing list others were reporting much lower levels of performance loss. (Just to be clear, I'm using the XenSource compiled kernel, etc.)
Without Xen running, my UnixBench results aren't too bad.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 52116444.7 1383.2
Double-Precision Whetstone 83.1 2612.0 314.3
Execl Throughput 188.3 11429.1 607.0
File Copy 1024 bufsize 2000 maxblocks 2672.0 155443.0 581.7
File Copy 256 bufsize 500 maxblocks 1077.0 37493.0 348.1
File Read 4096 bufsize 8000 maxblocks 15382.0 1475439.0 959.2
Pipe-based Context Switching 15448.6 548465.7 355.0
Pipe Throughput 111814.6 3313637.0 296.4
Process Creation 569.3 34050.6 598.1
Shell Scripts (8 concurrent) 44.8 3566.8 796.2
System Call Overhead 114433.5 2756155.3 240.9
=========
FINAL SCORE 510.9
However, once I boot into Xen, the Dom0 performance drops a lot.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 50864253.7 1350.0
Double-Precision Whetstone 83.1 2617.9 315.0
Execl Throughput 188.3 2786.5 148.0
File Copy 1024 bufsize 2000 maxblocks 2672.0 159749.0 597.9
File Copy 256 bufsize 500 maxblocks 1077.0 44884.0 416.8
File Read 4096 bufsize 8000 maxblocks 15382.0 1191772.0 774.8
Pipe-based Context Switching 15448.6 306121.8 198.2
Pipe Throughput 111814.6 1417645.2 126.8
Process Creation 569.3 4699.2 82.5
Shell Scripts (8 concurrent) 44.8 781.6 174.5
System Call Overhead 114433.5 1021813.7 89.3
=========
FINAL SCORE 261.6
Now, here is where it gets weird. The only running DomU, which is a paravirtualized CentOS 5 guest, gets a higher score than Dom0.
Code:
INDEX VALUES
TEST BASELINE RESULT INDEX
Dhrystone 2 using register variables 376783.7 38015133.3 1008.9
Double-Precision Whetstone 83.1 2023.4 243.5
Execl Throughput 188.3 3877.4 205.9
File Copy 1024 bufsize 2000 maxblocks 2672.0 270737.0 1013.2
File Copy 256 bufsize 500 maxblocks 1077.0 78470.0 728.6
File Read 4096 bufsize 8000 maxblocks 15382.0 1227115.0 797.8
Pipe Throughput 111814.6 1383157.5 123.7
Pipe-based Context Switching 15448.6 310378.3 200.9
Process Creation 569.3 7534.8 132.4
Shell Scripts (8 concurrent) 44.8 1179.6 263.3
System Call Overhead 114433.5 1056362.3 92.3
=========
FINAL SCORE 308.2
Any idea why the performance is so low? Perhaps any tips on boosting performance?
I was a victim/winner of Slashdot yesterday. My site, www.electricalengineer.com, runs Joomla hosted through Rackforce's dds-400l package. We thought we were under attack yesterday, but later found it to be the Slashdot effect. Anyhow, Google Analytics shows ~5700 visitors. This doesn't seem like it would be enough to slow the server to a halt, but it did. Rackforce suggested that we upgrade to a more powerful package. I'm not sure though that the following should have slowed us down: Dual Quad-Core Xeon
1GB DDR2 ECC 667 RAM
30GB on SAS/SCSI
10Mbps Dedicated Unmetered
Anyone else have performance issues with Joomla?
We have a project in mind and we are planning on using a Cisco 7140 to push about 80Mbps over Ethernet. Do you think the 7140 will be enough, or will it get maxed out? (The 7140 is supposed to be like the 7200VXR NPE-300.)
The routing would be through BGP with partial routes.
I am considering upgrading to MySQL 5.0. I am using 4.1 at present. Now I wonder if it will really improve performance... I really have some busy databases... I also wonder if 5.0 is fully backwards compatible with 4.1.
I've got a VPS, and its performance sucks.
I've got 18 domains parked on it, with only 4 of those having active websites. There are 3 mailman lists set up, and a further 10 or so email accounts with SpamAssassin active on them.
It also runs cPanel/WHM.
The server itself has 384MB RAM, with 512MB burst.
My beancounters are.....
Version: 2.5
uid resource held maxheld barrier limit failcnt
2102: kmemsize 8406834 10504002 30000000 30000000 2722182
lockedpages 0 0 256 256 0
privvmpages 111738 132127 131072 131072 1034096
shmpages 731 747 21504 21504 0
dummy 0 0 0 0 0
numproc 73 96 240 240 0
physpages 55335 75477 0 2147483647 0
vmguarpages 0 0 65536 2147483647 0
oomguarpages 55335 75477 26112 2147483647 0
numtcpsock 24 47 360 360 0
numflock 9 20 188 206 0
numpty 1 1 16 16 0
numsiginfo 0 17 256 256 0
tcpsndbuf 214656 1750788 53687296 61551616 0
tcprcvbuf 393216 786996 53687296 61551616 0
othersockbuf 30416 275664 53687296 61551616 0
dgramrcvbuf 0 55460 53687296 61551616 0
numothersock 24 42 360 360 0
dcachesize 0 0 2273280 2416640 0
numfile 2078 2449 5820 5820 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
numiptent 10 10 128 128 0
Am I putting too much strain on the VPS with the amount of domains/emails I run through it? Is cPanel/WHM the problem? Is the server config suspect? Are VPS accounts only good for 1 or 2 domains?
I have a site that runs from a MySQL database. The database isn't big; it's only 16.1 MB across 17 tables with 103,978 records, and all are properly indexed.
My problem is that MySQL has a particular problem with this database: I have to wait around 5-10 minutes to receive query results.
I've tuned MySQL, I've tuned Apache, and in daily usage I usually have low load averages of 0.19, 0.22, 0.40. My server specs are an Intel P4 at 3GHz with HT, 2GB of RAM (dual channel), 2 HDDs of 250GB (backup) and 300GB (main), both with 16MB cache. I'm running Debian 3.1 with a 2.6 kernel and there is no swapping to disk as there was previously on the 2.4 kernel.
I will post my my.cnf below and maybe you'll give me a suggestion.
Code:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "/var/lib/mysql/my.cnf" to set server-specific options or
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# [url]
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port= 3306
socket= /var/run/mysqld/mysqld.sock
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket= /var/run/mysqld/mysqld.sock
nice= 0
[mysqld]
#
# * Basic Settings
#
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket= /var/run/mysqld/mysqld.sock
port= 3306
basedir= /usr
datadir= /var/lib/mysql
tmpdir= /tmp
language= /usr/share/mysql/english
skip-external-locking
#
# For compatibility to other Debian packages that still use
# libmysqlclient10 and libmysqlclient12.
old_passwords= 1
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address= 127.0.0.1
#
# * Fine Tuning
#
key_buffer= 32M
key_buffer_size= 32M
max_allowed_packet= 16M
myisam_sort_buffer_size = 32M
table_cache= 3072
sort_buffer_size= 4M
read_buffer_size= 4M
read_rnd_buffer_size= 4M
join_buffer_size= 2M
thread_stack= 512K
wait_timeout= 300
max_connections = 60
max_connect_errors= 10
thread_cache_size= 100
long_query_time= 2
max_user_connections= 50
interactive_timeout= 100
connect_timeout= 15
tmp_table_size= 64M
open_files_limit= 3072
max_heap_table_size = 64M
#
# * Query Cache Configuration
#
query_cache_limit= 2M
query_cache_size = 64M
query_cache_type = 1
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
#log= /var/log/mysql.log
#log= /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log-slow-queries= /var/log/mysql/mysql-slow.log
log-queries-not-using-indexes
# The following can be used as easy to replay backup logs or for replication.
#server-id= 1
log-bin= /var/log/mysql/mysql-bin.log
# See /etc/mysql/debian-log-rotate.conf for the number of files kept.
max_binlog_size = 104857600
#binlog-do-db= include_database_name
#binlog-ignore-db= include_database_name
#
# * BerkeleyDB
#
# According to an MySQL employee the use of BerkeleyDB is now discouraged
# and support for it will probably cease in the next versions.
skip-bdb
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# If you want to enable SSL support (recommended) read the manual or my
# HOWTO in /usr/share/doc/mysql-server/SSL-MINI-HOWTO.txt.gz
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
[mysqldump]
quick
quote-names
max_allowed_packet= 64M
[mysql]
#no-auto-rehash # faster start of mysql but no tab completion
[isamchk]
key_buffer= 32M
sort_buffer_size= 32M
read_buffer= 16M
write_buffer= 16M
[myisamchk]
key_buffer = 32M
sort_buffer_size = 32M
read_buffer = 16M
write_buffer = 16M
I thought that I might need to start rewriting my code in order to be able to fix things, but I need at least another opinion from someone who knows more.
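Before rewriting anything, it may help to pin down which statement is actually eating the 5-10 minutes and what execution plan MySQL chooses for it. A small diagnostic sketch using the old mysql_* functions (connection details and the query are placeholders for whatever your slow page really runs):
Code:
<?php
// Diagnostic sketch: time the suspect query and print MySQL's EXPLAIN output.
// Connection details and the query itself are placeholders.
$link = mysql_connect('localhost', 'user', 'password') or die(mysql_error());
mysql_select_db('mydatabase', $link) or die(mysql_error($link));

$sql = "SELECT * FROM articles WHERE title LIKE '%keyword%'"; // the slow query

$start = microtime(true);
$result = mysql_query($sql, $link) or die(mysql_error($link));
printf("Query took %.2f seconds, returned %d rows\n",
       microtime(true) - $start, mysql_num_rows($result));

// EXPLAIN shows whether an index is used; "type: ALL" means a full table scan.
$explain = mysql_query('EXPLAIN ' . $sql, $link) or die(mysql_error($link));
while ($row = mysql_fetch_assoc($explain)) {
    print_r($row);
}
mysql_close($link);

The mysql-slow.log you already enabled should point at the same statements; anything running for minutes on a 16 MB database is almost certainly unable to use an index for that query (a leading-wildcard LIKE, for example, can't use one).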
I'm looking for opinions on running PostgreSQL on a VPS.
Take a look at what a moderator of the MySource Matrix forums says (it's an enterprise CMS based on PHP4 + PostgreSQL):
Quote:
Yes, installing onto a virtual server is great, because its like you have an entire server to yourself. However, running PostgreSQL in a VM is not recommended for performance reasons. Obviously this isn't a concern on low-traffic sites. The ultimate solution would be a virtual server for Apache/PHP/MySource Matrix and a dedicated server for PostgreSQL. Though, I don't know of any virtual server providers that also provide dedicated database server access.
(...)
The problem with PostgreSQL (or any database) under virtualisation is that there is no IO prioritization given. So, the database just waits along with everything else for the hypervisor to deliver IO, with no consideration for priorities. This leads to a performance hit of about 70-80%. In testing, we've seen PostgreSQL under VMware ESX/GSX perform horribly, even with fibre-attached SAN storage. Though, sticking PostgreSQL on a Xen DomU seems better. The sysadmins have done some testing on this too.
If you run PostgreSQL across multiple virtual servers on the same hardware, you're just compounding the IO bottlenecks, so I can't recommend it at all.
Source: forums.matrix.squiz.net/index.php?showtopic=3929&st=15&start=15
Can anyone comment and share thoughts on using PostgreSQL on a virtual private server?
One of the things people seem to bring up a lot is disk IO Performance.
Why? Because there's little you can do about a customer being stupid and creating a disk swap nightmare.
There is however something you can do to reduce the impact across your clients: have a separate RAID array for swap space.
This does 2 things: it splits some of the disk IO across 2 arrays, but more importantly it reduces the effect someone overusing their swap will have on the ones that are not.
Just my quick 10 cents for the day.