I have a site that runs from a MySQL database. The database isn't big: only 16.1 MB across 17 tables with 103,978 records, and everything is properly indexed.
My problem is that MySQL struggles with this particular database: I have to wait around 5-10 minutes to receive query results.
I've tuned MySQL, I've tuned Apache, and in daily usage I usually have low load averages of 0.19 0.22 0.40. My server specs are an Intel P4 at 3 GHz with HT, 2 GB of RAM (in dual channel), and 2 HDDs of 250 GB (backup) and 300 GB (main), both with 16 MB cache. I'm running Debian 3.1 with a 2.6 kernel, and there is no swapping to disk as there was previously on the 2.4 kernel.
I will post my my.cnf below; maybe you'll have a suggestion.
Code:
#
# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "/var/lib/mysql/my.cnf" to set server-specific options or
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# [url]
# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port= 3306
socket= /var/run/mysqld/mysqld.sock
# Here are entries for some specific programs
# The following values assume you have at least 32M ram
# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket= /var/run/mysqld/mysqld.sock
nice= 0
[mysqld]
#
# * Basic Settings
#
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket= /var/run/mysqld/mysqld.sock
port= 3306
basedir= /usr
datadir= /var/lib/mysql
tmpdir= /tmp
language= /usr/share/mysql/english
skip-external-locking
#
# For compatibility to other Debian packages that still use
# libmysqlclient10 and libmysqlclient12.
old_passwords= 1
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address= 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size= 32M
max_allowed_packet= 16M
myisam_sort_buffer_size = 32M
table_cache= 3072
sort_buffer_size= 4M
read_buffer_size= 4M
read_rnd_buffer_size= 4M
join_buffer_size= 2M
thread_stack= 512K
wait_timeout= 300
max_connections = 60
max_connect_errors= 10
thread_cache_size= 100
long_query_time= 2
max_user_connections= 50
interactive_timeout= 100
connect_timeout= 15
tmp_table_size= 64M
open_files_limit= 3072
max_heap_table_size = 64M
#
# * Query Cache Configuration
#
query_cache_limit= 2M
query_cache_size = 64M
query_cache_type = 1
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
#log= /var/log/mysql.log
#log= /var/log/mysql/mysql.log
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log-slow-queries= /var/log/mysql/mysql-slow.log
# log-queries-not-using-indexes takes no file argument;
# such queries are written to the slow query log
log-queries-not-using-indexes
# The following can be used as easy to replay backup logs or for replication.
#server-id= 1
log-bin= /var/log/mysql/mysql-bin.log
# See /etc/mysql/debian-log-rotate.conf for the number of files kept.
max_binlog_size = 104857600
#binlog-do-db= include_database_name
#binlog-ignore-db= ignore_database_name
#
# * BerkeleyDB
#
# According to a MySQL employee, the use of BerkeleyDB is now discouraged
# and support for it will probably cease in future versions.
skip-bdb
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# If you want to enable SSL support (recommended) read the manual or my
# HOWTO in /usr/share/doc/mysql-server/SSL-MINI-HOWTO.txt.gz
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem
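Since the slow query log is already enabled above, here's what I've been running to try to pin down the offenders. A minimal sketch; the database, table, and column names are placeholders for my real schema:
Code:
# Summarize the slow query log (mysqldumpslow ships with MySQL)
mysqldumpslow /var/log/mysql/mysql-slow.log | head -20

# Then EXPLAIN a suspect query; "mydb", "articles" and "author_id"
# are placeholders
mysql mydb -e "EXPLAIN SELECT * FROM articles WHERE author_id = 42\G"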
I thought I might need to start rewriting my code to fix things, but I'd like at least one more opinion from someone who knows more.
I am considering upgrading to MySQL 5.0; I am using 4.1 at present. Now I wonder if it will really improve performance... I really have some busy databases... I also wonder if 5.0 is fully backward compatible with 4.1.
Right now I have a busy forum running on a single Xeon 5335 with 4 GB of RAM and a
single 73 GB 15K SCSI drive. The site seems to run fine most of the time except at peak.
The load sometimes goes up to 8 for about an hour, so I am looking to upgrade my server.
The next server I am thinking about is:
a single C2Q 9300, 8 GB of RAM, 1x 750 GB SATA II as the primary drive for the web server, and 1x 150 GB Raptor 10K to serve MySQL only.
I wonder whether the HDD performance on the future server will be the same as or better than my current one. Since the future server has a better CPU and more RAM, the only thing I worry about is HDD performance.
So in short: a single 15K SCSI vs. a combination of SATA + Raptor. What do you guys think?
I am using LiteSpeed as the web server and will also be using LiteSpeed on the future server.
I have the Hatchling plan on HostGator. Is there an open-source PHP and MySQL application I can put up there to test how the host performs running PHP and MySQL applications? I want to see if there are any performance issues on this host before migrating a large ZenCart app to it.
What are people's experiences with HostGator running PHP and MySQL?
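For concreteness, the kind of quick check I had in mind looks something like this. A rough sketch; the URL is a placeholder for a DB-backed page on the account:
Code:
# Time 10 sequential requests to a page that hits MySQL;
# the URL is hypothetical
for i in $(seq 1 10); do
    curl -s -o /dev/null -w "%{time_total}s\n" http://example.com/index.php
done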
I'm going to be rolling out a PHP/MySQL-driven site soon, and I'm pretty much resigned to the fact that the MySQL performance DreamHost has given me isn't going to cut it; it's probably too oversold. Simple one-table, one-column SELECTs can take 30 seconds or time out, depending on how badly the server is being hammered. HTTP requests are usually snappy, but the MySQL performance is bogus.
What is a good host for me to launch this site with? Storage wouldn't need to be terribly high, at least initially. I'm tempted by MediaTemple's slick marketing, but I've seen on here that some people have had poor SQL performance (contrary to what some personal friends have experienced, so I'm torn). I was reading about downtownhost on here, but their load times seemed slow when I hit a couple of domains listed on here that are hosted by them.
This host needs to be located in the US. Honestly, I like DreamHost and their panel, save for the SQL sluggishness I'm getting.
While poking around performance tips I found the DELAY_KEY_WRITE option (and innodb_flush_log_at_trx_commit = 0 for InnoDB),
which supposedly disables the immediate disk flush for every transaction written and instead flushes only about once per second?
One thing I've never had to restart on my VPS is MySQL; it's been great. So is this safe to turn on? Am I risking corruption? Will the performance gain be worth it with only a 16M cache?
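For reference, here's how I understood the two knobs would be applied. A sketch only; mydb.mytable is a placeholder, and I haven't verified that it's safe:
Code:
# MyISAM: delay flushing key (index) blocks to disk for one table;
# "mydb.mytable" is a placeholder
mysql -e "ALTER TABLE mydb.mytable DELAY_KEY_WRITE = 1;"

# InnoDB: flush the log about once per second instead of on every
# commit; this line goes under [mysqld] in my.cnf, then restart mysqld:
#   innodb_flush_log_at_trx_commit = 0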
I am using DreamHost to host 3 of my websites and 1 blog. DreamHost is great and offers a lot of space and bandwidth.
But I think they are overselling their space; sometimes it gets really slow. (Overselling? OK, I don't really know, but sometimes it's really slow, and most of my Asian readers say they need to refresh to load the pages. I am wondering if there's a way to check whether they are overselling or not.)
I am thinking about buying a VPS, even though I still have 5 months left with DreamHost.
I found 2 VPS companies that are highly recommended on this forum: JaguarPC and LiquidWeb.
There's already a post comparing both companies in terms of price and service. I'd say I will pick JaguarPC, because its basic plan is just 20 USD, and they have a promotion now, so it's even cheaper; the basic LiquidWeb VPS plan is 60 bucks.
I am wondering why JaguarPC is so cheap. Are they overselling? How can we check if they are overselling?
I found a few posts saying how good JaguarPC is and that they are not overselling, but those members just signed up this month and have only 1-3 posts. I can't really trust those new members.
Can someone share their experience with JaguarPC and compare JaguarPC's performance with LiquidWeb's? Another question: if I switch from DreamHost to JaguarPC's basic VPS plan, will performance get better?
Last question: the VPS account allows 3 IPs. Does 3 IPs = 3 domains? If not, how many domains can I have?
We run a very busy web application written in .NET. The backend is SQL Server 2005. The server running SQL Server for this web app is slammed constantly: the CPU is redlined, and the disks are queuing up because they can't keep up with the demand. What I am wondering is: what do the big websites do to gain performance? What direction should we start moving in to get ahead of the curve? We are using an HP DL580 with 4 quad-core Xeons and the fastest SAS drives we could get.
Does anyone have experience using LVM2? We'd rely on hardware RAID mirroring for the underlying physical redundancy, but we're very interested in LVM2's storage virtualization features.
If anyone can share their experiences with LVM2 with regard to performance, and possibly its use in a SAN environment, I'd appreciate it.
Let's say I've got a single website built in Drupal (using PHP and MySQL). It gets fewer than 1,000 visits per day and needs very little storage or bandwidth. The site is currently on a shared host and it runs okay, but it often has very slow page loads due to sluggish MySQL calls and other traffic on the server. Sometimes the homepage loads in 2 s, but other times it takes 20-30 s depending on the time of day. The client is sick of this performance on such a low-traffic site and wants to improve the situation.
Question: Will a VPS really provide that much better performance than a shared host?
Remember I'm talking ONLY about page load time under minimal load. No need to take into account scaling or Digg/Slashdot effects.
I know dedicated is the best option, but it seems crazy for such a low-traffic site. A lot of the VPS offers are very attractive in theory (managed and affordable), but in practice I'm concerned that even a 512 MB VPS with 1 GB burst won't make much of a performance difference.
Mainly, I don't want to go through the hassle and extra monthly expense of moving everything to a VPS for only a minimal gain.
I installed the MySQL binary packages in /usr/local/mysql/ after removing the MySQL RPM package. MySQL runs when I execute /usr/local/mysql/bin/safe_mysqld. I had reinstalled MySQL before I installed PHP. When I use a PHP script to access a MySQL database, it outputs an error:
Code:
Warning: mysqli::mysqli() [function.mysqli-mysqli]: (HY000/2002): can't connect to local mysql server through socket /var/lib/mysql/mysql.sock in index.php on line 2
However, I installed MySQL in /usr/local/mysql, not in /var/lib/mysql. How do I fix MySQL?
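From what I've read since posting, the usual fix is to find out which socket the server actually listens on and point the client at it. A sketch, assuming the binary tarball's default of /tmp/mysql.sock (check first):
Code:
# See where the running server put its socket
/usr/local/mysql/bin/mysqladmin variables | grep socket

# If it reports /tmp/mysql.sock, either set
#   mysqli.default_socket = /tmp/mysql.sock
# in php.ini, or add a compatibility symlink where PHP is looking:
mkdir -p /var/lib/mysql
ln -s /tmp/mysql.sock /var/lib/mysql/mysql.sock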
We moved one website based on Article Dashboard (an article directory script coded with Zend) from a shared hosting account with HostGator to a GoDaddy VPS ($35 per month).
This VPS is really slow compared to the HostGator account.
I'm planning on buying a NAS from my provider to use as a backend for my VPSes (around 15). The plan is to put the server images on the NAS so the VPSes can be moved between different nodes without interruption.
The server I have looked at so far is the following:
The budget is pretty tight, so if it's possible to do this with SATA drives that would be great; otherwise, it would be a possibility to go down in disk space and switch the SATA drives to SCSI/SAS drives.
We are getting into VPS hosting and wanted to get some opinions and feedback, as we're quite unsure what to expect in terms of performance and how many clients we can generally keep on a box.
For now we've bought 3 Dell R710s with dual Xeon L5520s, 72 GB of RAM, and 8 x 2.5" SAS drives.
We are thinking of a base offering of 512 MB of RAM and were hoping to get about 40-50 onto a server.
With 40 there should be plenty of free RAM (40 x 512 MB is only about 20 GB of the 72 GB) and plenty of drive cache.
Then a next offering of 1 GB of RAM, and the next one of 2 GB.
Even if we do the biggest 2 GB offering with 25 on a server (about 50 GB), we should have free RAM to spare.
The software would be Virtuozzo.
Any thoughts on this, am I expecting too much, or am I being fairly realistic?
I have been working with Xen over the last week or so, and I can't figure out why performance degrades so much when booting into Xen. Certain things seem just as fast, but other things just don't seem normal.
I have tried this on two different quad-core systems, one new generation (York) with CentOS 5 and one old (Kent) with Debian Lenny, but neither seems to produce good speeds.
For example, when I use the default kernels I can usually get a score of about ~600 from unixbench-wht, and things such as top and core system processes show up as 0% CPU when running top.
When I boot into a Xen kernel, however, whether from Dom0 or the guest OS, top uses about 3% CPU and unixbench-wht produces scores under 250.
I have set vcpus to 4 and have even tried vcpu-pin 0 0, 1 1, 2 2, 3 3, but nothing seems to change anything. The disk speeds seem about the same (hdparm). I'm assuming it is something with the CPU.
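For what it's worth, here's how I've been checking and pinning the vCPUs with the xm toolchain; "mydomu" is a placeholder for the guest's domain name:
Code:
# Show how vCPUs currently map to physical CPUs
xm vcpu-list

# Pin each of the guest's 4 vCPUs to its own physical core;
# "mydomu" is a placeholder
for n in 0 1 2 3; do
    xm vcpu-pin mydomu $n $n
done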
I need some advice on my situation with my host, and possibly some frame of reference as to what can/should be expected from a VPS setup like mine and what I can expect it to manage.
I have a site that sees traffic of about 150k pageviews per day. On any given day it peaks for roughly four hours, during which there may be about 5 req/s.
I use a standard LAMP setup running mod_php in Apache, not FastCGI. I have a VPS on Virtuozzo's Power Panel with 1.5 GB of RAM and really an unknown amount of CPU. I haven't been able to ascertain that information, but probably could if I asked my host.
The problem is that during these hours it gets a bit slow from time to time. Running top sometimes shows a staggering number of waiting processes, i.e. the load is quite high (15-25).
So I'm now really at a fork in the road. I could start looking into a different setup, say Nginx + PHP-FPM (FastCGI), and see if that makes a difference; I'm not really an admin, so I would be kind of lost there. I could also start looking into my code to see if I can cache more or do smarter stuff, etc.
However, before doing any of the above, I'd like to ask the crowd here whether you think I've hit the ceiling of what can be expected from a VPS of the size I just described, that my situation is quite normal, and that the real solution is simply to upgrade my VPS. Is it?
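In case it helps, this is what I plan to capture during the next peak (a minimal sketch) so I can tell whether CPU or disk I/O is the bottleneck:
Code:
# Sample system activity every 5 seconds; a high "wa" (I/O wait) column
# points at disk, high "us"/"sy" points at CPU
vmstat 5

# Snapshot of what Apache and MySQL are doing at that moment
top -b -n 1 | head -30
mysqladmin processlist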
Let's assume that we (me and the people I'm working with) were to launch a really popular website, and then all of a sudden there is more demand for the website than the backend infrastructure can handle.
What do we do?
- 1,000 users (OK, so one powerful server should be enough).
- 2,000 users (let's set up an additional server to act as the HTTP server while the powerful server acts as the database server only).
- 3,000 users (let's eliminate all the commercial Linux programs and install a fresh version of Linux on both boxes, compiling only the programs we need).
- 5,000 (let's set up another server that handles the sessions).
- 6,000 (let's set up a static-only server to deliver the non-dynamic content).
- 7,000 (let's do some caching... ugh, maybe it won't be enough).
Any greater, and then what? We've run out of ideas on how to separate the code logic and how to optimize every byte of data on the website! What do we do? We can buy more servers, but how do we balance the load?
This is where I'm stuck. In the past I've separated the load in a modular sense (one server does this and one server does that), but eventually I'll come up against a wall.
How does clustering work? What I want to know is how the information, whether it be the server-side code or the static content, is shared across machines. Is it worth learning these things anymore, or is it better just to host with a scalable hosting solution like AWS?
I have several VPSes that I run. Some run LAMP, others RoR, and my latest runs Nginx + CherryPy (Python).
To be honest, I've never run any benchmarks to see how well the servers perform under stress. But I'd like to start.
Are there any good (free) programs out there that will stress-test my web servers? I develop on Windows but deploy on Linux, so either platform is OK. I'm most interested in how many concurrent connections can be maintained.
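So far the only thing I've tried is ApacheBench (ab), which ships with Apache; something along these lines, with the URL as a placeholder:
Code:
# 1,000 requests total, 50 concurrent; reports requests/sec and latency
ab -n 1000 -c 50 http://example.com/

# siege is a free alternative; here 50 concurrent users for 60 seconds
siege -c 50 -t 60S http://example.com/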
I currently have a VPS in the UK that I host my clients' Joomla sites on; the specs of this VPS are as follows:
- 20 GB SA-SCSI disk space
- 350 GB bandwidth
- Full root access / Parallels/WHM/cPanel
- 2 dedicated IPs
- 384 MB SLM RAM
I am now running around 10 Joomla-based sites off this VPS, 5-6 of which are e-commerce sites. While I am probably only using 10 GB of the overall disk space so far, in terms of performance, should I continue to add clients to this server? Or should I keep the hungrier sites on this server and move some of the less resource-intensive, non-e-commerce sites to another VPS? Or would it be in my best interest to upgrade to a dedicated server where I will have all my own resources?
Would I be roughly right in assuming that an American customer accessing a UK server will see speeds similar to what I have been getting as a UK customer accessing the same site on a US server?
Is there any RAID performance decrease if, say, you have a 24-port 3ware hardware RAID card and already have a 6-drive RAID 5 array, and you then add, say, 18 more HDDs and make them another RAID 5 array? Does the performance stay the same or decrease?
The reason you would have different RAID arrays is that if you were to buy an 8U chassis, you would want it to be an investment: you avoid buying smaller cases and spending money on a new motherboard/CPU/RAM for each system, and instead add hard drives whenever you can and RAID them.