Tables Grow In Size And Sometime I Will Need To Use RAID 5
Nov 2, 2009
I have a database which is growing to around 100 tables. These tables will grow in size, and at some point I will need to use RAID 5, or so I am told by my server provider.
My questions are:
1. If these other servers are mirrors, should I have the database stored on each server?
2. When one server gets too busy, does the RAID query a lesser-used server so as not to bog down the first one?
3. Or do I need to have different content in each DB on each server so a query gets what it needs from each?
I'm planning on designing a web service; it will have a lot of data spread across many tables.
The problem is that tables will be created constantly (around 5 each day).
All tables/data will be accessed equally, so I don't know how to set up the system: create multiple databases and balance the number of tables equally across each one, or create fewer databases and put a lot of tables in each?
I was wondering if anybody had a suggestion of what to look for in a VPS that I could use to learn and grow on. I built a decent-sized web application (PHP5/MySQL) and I'm looking to grow in userbase and features.
I started it on Lunarpages shared hosting (which was a nightmare) before realizing I needed something a little better. I ended up switching over to WestHost and I've been very happy with them for a few years now. Though some may argue it's not a "true" VPS, it's done everything I wanted it to do so far and been quite reliable. It was a great stepping stone.
My site has been growing, as have my service plans, budget, and aspirations. I like to learn by getting my hands dirty, and I wanted to learn some of the more advanced aspects of running a webserver, so I'm in the market for a VPS with more freedom.
What VPS features or aspects should I keep an eye out for if I'm looking to both learn about the server and expand (aka do development work) at the same time? Benchmarks, RAM and CPU? Upgradability, managed and good support? CPanel, DA, Plesk? Snazzy logos and plan names?
Today we are going to conduct a detailed study of the RAIDability of contemporary 400GB hard drives. We will take two "professional" drives from Seagate and Western Digital and four ordinary "desktop" drives for our investigation. The detailed performance analysis and some useful hints on building RAID arrays are in our new article.
One of our resellers has an account. When looking in cPanel, it says that the account is using 3300 MB. When we go into the FTP of that account, it is in reality only using 1.3 MB. This is a huge difference! Most of the folders are empty. We are using the latest versions of WHM and cPanel.
Is motherboard RAID as good as a dedicated PCI-E card? I am guessing a dedicated card is the best option, though it costs more.
We are looking at buying a barebones server from Supermicro. It features an onboard RAID controller which supports RAID 0, 1, 5 & 10 - but for some strange reason it will only support RAID 5 if you use Windows. Here is a link to the page detailing the RAID features.
[url]
We are going to be running Linux, CentOS 5.1, so we will only have the choice of RAID 0, 1 or 10. This isn't an issue, as having RAID 10 on 4x SAS (15k) drives will be fine for speed and stability. What is an issue is whether this RAID controller would be as fast and reliable as a dedicated PCI-E card. If it can only do RAID 5 in Windows, does that suggest this controller is too reliant on software? It would be a nightmare to suffer downtime and data loss because the controller couldn't hack it during a drive failure, or because one day it decided to bugger up the array when rebooting.
So that leads me to looking at this card, which looks very good for what we need. Is Adaptec a reliable brand? I've seen it advertised for £200, which is a good price.
[url]
This card features RAID 5 and 6. Would RAID 6 be better than RAID 10 for redundancy, or is it too slow to bother with? It also seems to have a battery module available for it; what does this achieve? Surely if the power dies the hard drives and motherboard can't run off this little battery, so does it just keep the controller alive long enough, with hard drive information held in its memory, if the power goes out during a rebuild?
I am in a somewhat complicated situation... I wanted to order a custom server with a hardware 3Ware RAID controller, but after over a month of waiting I was told that the controller, like every other 3Ware controller they tried, does not work with the Fujitsu-Siemens motherboard used in the server; FS simply replied that the controller is not certified to work with their motherboard.
So although I'd prefer HW RAID, I am forced to either choose a different webhost or set up software RAID. The problem is, I haven't done that before and am somewhat moderately...scared.
I have read a lot of the info about SW RAID on Linux that I could find through Google, but some questions are still unanswered. So I thought that perhaps some of the more knowledgeable WHT members could help me with this problem...
The server specs will be:
Core2Duo E6600 (2.4GHz), 2GB RAM, 6-8x* 250GB SATA II HDDs, CentOS 4.4 or SuSE, DirectAdmin
* I'd prefer 8 HDDs (or actually 9) over 6, but I am not sure their server chassis can hold that many; I am awaiting an answer from them. They don't have any drives besides the 250GB ones, so I am limited to those.
The preferred SW RAID setup is to have everything in RAID 10, except for the /boot partition which has to be on RAID-1 or no RAID I believe, plus one drive as hot spare (that would be the 9th drive). I am quite sure they will not do the setup for me but will give me access to KVM over IP and a Linux image preinstalled on the first HDD so that I'll have a functional system that needs to be upgraded to RAID-10.
How do I do that? The big problem I see is that LILO or GRUB can't boot from a software RAID-5/10, so I will have to put the /boot partition elsewhere. It's probably terribly simple... if you have done it before, which I have not. I have read some articles on how to set up RAID-5/10 with mdadm (e.g. [url] ) but they usually do not talk about how to set up the boot partition. Should it be a small (100-200MB) RAID-1 partition spread over all of the drives in the otherwise RAID-10 array?
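In case it helps, here is a rough sketch of the layout I've seen described for exactly this, assuming each drive has been partitioned identically beforehand (a small sdX1 for /boot and a big sdX2 for /); the device names are just placeholders for however the box enumerates its disks:

# /boot as RAID-1 mirrored across all eight drives; GRUB treats a
# RAID-1 member like a plain partition, so any surviving disk can boot.
mdadm --create /dev/md0 --level=1 --raid-devices=8 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
    /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1

# The main / filesystem as RAID-10 over the big partitions,
# with the ninth drive as the hot spare:
mdadm --create /dev/md1 --level=10 --raid-devices=8 --spare-devices=1 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 \
    /dev/sde2 /dev/sdf2 /dev/sdg2 /dev/sdh2 /dev/sdi2

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1

# Then install GRUB to the MBR of every drive (not just the first)
# so the machine still boots if the first disk dies.

The point of mirroring /boot over all drives rather than a pair is exactly the booting problem: whichever disk the BIOS falls back to still has a complete, bootable /boot on it.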
What about swap? Should I create a 4-8GB RAID-1 swap partition across the disks (I plan to upgrade the server RAM to 4GB in the near future), or swap to a file on the main RAID-10 partition? The second sounds simpler, but what about performance? Is swapping to a file on a RAID-10 array a bad idea performance-wise?
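The pattern I've seen recommended is swap on a small RAID-1, so the box doesn't crash when a disk holding active swap dies. A sketch, with hypothetical sdX3 partitions set aside for it:

mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md2
swapon /dev/md2
# and the matching /etc/fstab line:
# /dev/md2  swap  swap  defaults  0 0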
Is it possible to grow a RAID-10 array in a way similar to growing a RAID-5 array with mdadm (using two extra drives instead of one, of course)? The mdadm documentation barely mentions RAID-10, even though, from what I know, it supports it natively (without having to create RAID-0 on top of RAID-1 pairs) if the support is in the kernel.
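From what I can tell, reshaping a RAID-10 only became possible in much newer mdadm/kernel versions than anything CentOS 4.4 ships, so on this box the practical route would be backup, recreate, restore. For reference, on a current system the syntax mirrors the RAID-5 case (device names hypothetical):

# add two new disks as spares, then reshape the array to use them
mdadm --add /dev/md1 /dev/sdj2 /dev/sdk2
mdadm --grow /dev/md1 --raid-devices=10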
How often do RAID arrays break? Is it worth having RAID if a server's hard drive goes down? I was thinking it may be a better option to just have a backup drive mounted on my system and, in the event of a failure, pop in a new hard drive, reload the OS, and then reload all my backups.
I am in the process of restructuring the infrastructure on our servers. I am deciding between RAID 5 (with 1 hot spare) and RAID 10, as my 1U server has 4 HDD trays.
RAID 5 would have better capacity, but RAID 10 has better overall performance. Which one would you go for on a shared hosting server?
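Running the numbers, assuming four 250GB drives: RAID 5 across three of them plus a hot spare gives (3 - 1) x 250GB = 500GB usable, and RAID 10 across all four gives (4 / 2) x 250GB = 500GB usable. So with the hot spare in the mix the capacity works out the same; RAID 5 only wins on capacity (750GB) if it spans all four trays with no spare.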
Is it possible to turn a non-RAID setup into Linux software RAID while it is live, and if it's the OS drive? Can you even software-RAID the OS drive remotely? I've been thinking about doing it for the redundancy (and a possible slight performance boost for reads, but mostly for redundancy). I'm using CentOS.
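From what I've read it can be done live with the degraded-mirror trick, though doing it remotely is risky enough that you really want KVM-over-IP as a safety net. A rough sketch, assuming the OS is on /dev/sda and an identical empty /dev/sdb has been added:

# 1. Clone sda's partition table onto sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
# 2. Create a degraded RAID-1 using only sdb, with sda's slot "missing":
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
# 3. Make a filesystem on /dev/md0, copy the live system over, point
#    /etc/fstab and grub.conf at /dev/md0, and rebuild the initrd so it
#    can assemble the array at boot.
# 4. Reboot onto the array, then pull the original disk into the mirror:
mdadm --add /dev/md0 /dev/sda1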
I am using /usr/bin/mysqlcheck --all-databases --check --auto-repair to check all MySQL DBs every day via cron. But sometimes when a DB crashes and is being repaired, the tmpdir = in /etc/my.cnf fills up to 100%. Is there any smart way to use some other tmp folder when repairing DB tables like that?
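One thing that should work (untested on this exact setup): mysqlcheck repairs run through the server, so they honour whatever tmpdir points at in /etc/my.cnf. Pointing it at a bigger partition and restarting MySQL should move the repair sort files there; the path below is just a placeholder:

# in /etc/my.cnf, under the [mysqld] section:
#   tmpdir = /home/mysqltmp
mkdir -p /home/mysqltmp
chown mysql:mysql /home/mysqltmp
service mysqld restart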
I am considering implementing a new firewall in our colo, which would have about 10 servers behind it generating on average 2.314 megabits/sec in total.
I am looking at the new Watchguard x750e running version 10 of Fireware, which seems like a good fit without breaking the bank, but I have also thought of simply using a PowerEdge server running CentOS with an iptables config to provide firewall services.
Anybody have any feedback on the Watchguard unit, or use a Watchguard product in their setup and can comment?
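For comparison, the iptables route really is only a handful of lines for a basic stateful setup. A sketch; the admin source range, forwarded subnet, and port list are placeholders for whatever the colo actually needs:

#!/bin/sh
# Default-deny inbound and forwarded traffic, allow replies back in.
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Management access to the firewall box itself (placeholder range):
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
# Traffic for the servers behind it goes through the FORWARD chain:
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -p tcp -d 10.0.0.0/24 --dport 80 -j ACCEPT  # web, placeholder subnet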
Created_tmp_disk_tables = 2,118 -- The number of temporary tables on disk created automatically by the server while executing statements. If Created_tmp_disk_tables is big, you may want to increase the tmp_table_size value to cause temporary tables to be memory-based instead of disk-based.
Created_tmp_files = 0 -- How many temporary files mysqld has created.
Created_tmp_tables = 5,637 -- The number of in-memory temporary tables created automatically by the server while executing statements.
I have tried upping tmp_table_size but it's no use. My my.cnf file:
Quote:
# Example MySQL config file for very large systems.
#
# This is for a large system with memory of 1G-2G where the system runs mainly
# MySQL.
#
# You can copy this file to
# /etc/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is /var/lib/mysql) or
# ~/.my.cnf to set user-specific options.
#
# In this file, you can use all long options that a program supports.
# If you want to know which options a program supports, run the program
# with the "--help" option.

# The following options will be passed to all MySQL clients
[client]
#password = your_password
port = 3306
socket = /var/lib/mysql/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 384
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
tmp_table_size = 300M
max_tmp_tables = 100

# Don't listen on a TCP/IP port at all. This can be a security enhancement,
# if all processes that need to connect to mysqld run on the same host.
# All interaction with mysqld must be made via Unix sockets or named pipes.
# Note that using this option without enabling named pipes on Windows
# (via the "enable-named-pipe" option) will render mysqld useless!
#
#skip-networking

# Replication Master Server (default)
# binary logging is required for replication
log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id = 1

# Replication Slave (comment out master section to use this)
#
# To configure this host as a replication slave, you can choose between
# two methods :
#
# 1) Use the CHANGE MASTER TO command (fully described in our manual) -
#    the syntax is:
#
#    CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
#    MASTER_USER=<user>, MASTER_PASSWORD=<password> ;
#
#    where you replace <host>, <user>, <password> by quoted strings and
#    <port> by the master's port number (3306 by default).
#
#    Example:
#
#    CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
#    MASTER_USER='joe', MASTER_PASSWORD='secret';
#
# OR
#
# 2) Set the variables below. However, in case you choose this method, then
#    start replication for the first time (even unsuccessfully, for example
#    if you mistyped the password in master-password and the slave fails to
#    connect), the slave will create a master.info file, and any later
#    change in this file to the variables' values below will be ignored and
#    overridden by the content of the master.info file, unless you shutdown
#    the slave server, delete master.info and restart the slave server.
#    For that reason, you may want to leave the lines below untouched
#    (commented) and instead use CHANGE MASTER TO (see above)
#
# required unique id between 2 and 2^32 - 1
# (and different from the master)
# defaults to 2 if master-host is set
# but will not function as a slave if omitted
#server-id = 2
#
# The replication master for this slave - required
#master-host = <hostname>
#
# The username the slave will use for authentication when connecting
# to the master - required
#master-user = <username>
#
# The password the slave will authenticate with when connecting to
# the master - required
#master-password = <password>
#
# The port the master is listening on.
# optional - defaults to 3306
#master-port = <port>
#
# binary logging - not required for slaves, but recommended
#log-bin

# Point the following paths to different dedicated disks
#tmpdir = /tmp/
#log-update = /path-to-dedicated-directory/hostname

# Uncomment the following if you are using BDB tables
#bdb_cache_size = 384M
#bdb_max_lock = 100000

# Uncomment the following if you are using InnoDB tables
#innodb_data_home_dir = /var/lib/mysql/
#innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
#innodb_log_group_home_dir = /var/lib/mysql/
#innodb_log_arch_dir = /var/lib/mysql/
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
#innodb_buffer_pool_size = 384M
#innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
#innodb_log_file_size = 100M
#innodb_log_buffer_size = 8M
#innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
This server has 1GB of RAM, though it's not a dedicated SQL server. RAM usage is low too; 567MB of RAM is being used in total right now... is there something I'm overlooking?
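One thing worth checking: MySQL uses the smaller of tmp_table_size and max_heap_table_size as the real in-memory limit, and max_heap_table_size defaults to 16M, which would explain why raising tmp_table_size alone does nothing. Any temp table with TEXT/BLOB columns also goes to disk regardless of size. Something like:

# check the effective limits
mysql -e "SHOW VARIABLES LIKE 'tmp_table_size'; SHOW VARIABLES LIKE 'max_heap_table_size';"
# then in /etc/my.cnf under [mysqld], raise both together:
#   tmp_table_size      = 300M
#   max_heap_table_size = 300M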
Every 4 or 5 days the LOCK TABLES permission keeps getting revoked. Does anyone have anything that can point me in the general direction of what would cause this? The only thing I can think of is that a cPanel layer 2 update has occurred a few times during the periods when the permission was revoked.
Unfortunately, whenever it happens it results in my SQL backup script failing.
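In the meantime, a workaround would be to re-grant the privilege right before the backup runs; the user and database names here are placeholders for whatever the script actually uses:

# run as root (or from root's cron) ahead of the backup job
mysql -e "GRANT SELECT, LOCK TABLES ON backupdb.* TO 'backupuser'@'localhost'; FLUSH PRIVILEGES;"
# verify what the backup user currently holds:
mysql -e "SHOW GRANTS FOR 'backupuser'@'localhost';"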