I have a couple of Windows 2003 servers colocated in a data center, and I need to improve download speeds for customers who are at least 200ms away. The end users are not using download accelerators.
Is there any setting I can change on the server so that per-thread speed increases? In this case both the server and the client are able to sustain a connection at more than a megabit. I did some searching, but all the articles point at the end user rather than the server, saying to increase the TCP window size, etc. I'm not sure whether those articles apply to server-side changes.
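One caveat worth stating up front: for a download, per-connection throughput is capped by the *client's* advertised receive window, which the server cannot change, and that is why most articles target the end user. What the server side can do on Windows 2003 is make sure RFC 1323 window scaling is enabled and its own TCP windows are not the bottleneck. A hedged sketch of the relevant registry values (the 256 KB figure is illustrative, not a recommendation; back up the registry and reboot to apply):

```shell
:: Server-side TCP window tuning sketch for Windows Server 2003.
:: Enable RFC 1323 window scaling and timestamps (needed for windows > 64 KB).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v Tcp1323Opts /t REG_DWORD /d 3 /f

:: Raise the default receive window (here ~256 KB, an illustrative value).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpWindowSize /t REG_DWORD /d 262144 /f
```

At 200ms RTT, a client with the common 64 KB window tops out around 320 KB/s per connection no matter what the server does, so the client-side articles are unfortunately relevant.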
As part of a project I have lately been looking into various aspects of kernel tuning, most notably tuning the TCP stack for more efficient memory usage and throughput.
Thought I would start this thread to mention some of the tools I've found for testing, and to see what anyone else has to recommend.
So far my favorite of the bunch is nuttcp. It's easy to use and gives a very good idea of how much of your bandwidth you are actually able to utilize.
A few interesting web pages are as follows for anyone interested in the topic:
[url] - Tuning TCP for High Bandwidth-Delay networks
[url] - TCP Tuning Cookbook; some interesting information in there as well
[url]...formanceTuning - Performance Tuning TWiki. It has a list of useful tools, flags for existing tools, and ways to monitor network performance at the system level, along with some suggestions of things to correct.
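For anyone following along, this is the flavor of tuning those pages discuss, plus a basic nuttcp run. The buffer sizes below are illustrative only (min/default/max in bytes); size the max for your own bandwidth-delay product rather than copying these numbers:

```shell
# Raise the ceilings for TCP socket buffers so autotuning has room to grow.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# nuttcp basics: start a listener on one host, transmit to it from another.
nuttcp -S                 # on the receiving host
nuttcp -T30 server.ip     # on the sending host: 30-second throughput test
```

Persist the sysctl values in /etc/sysctl.conf once you've settled on numbers; `sysctl -w` changes are lost at reboot.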
I have a Linux server with WHM/cPanel hosting 2000 domains. My problem: MySQL is using 90-100% CPU and 1500-2000 queries are running at a time. Please guide me on how to optimize and tune the MySQL server so the load doesn't spike like this.
I have configured my.cnf as follows:
max_allowed_packet = 4M
set-variable = max_connections=100
safe-show-database
query_cache_limit = 1M
query_cache_size = 128M
query_cache_type = 1
key_buffer_size = 256M
long_query_time = 3
table_cache = 9092
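One side note (my observation, not from the post): the `set-variable =` prefix is legacy my.cnf syntax that newer MySQL versions reject; the bare assignment form works everywhere. A minimal equivalent `[mysqld]` section with the same values might look like:

```ini
# Sketch of the same settings without the deprecated set-variable prefix.
[mysqld]
max_allowed_packet = 4M
max_connections    = 100
query_cache_limit  = 1M
query_cache_size   = 128M
query_cache_type   = 1
key_buffer_size    = 256M
long_query_time    = 3
table_cache        = 9092
```

Whether these particular values are right for a 2000-domain box is a separate question; with 1500-2000 concurrent queries against `max_connections=100`, the slow query log is probably the first place to look.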
What have you found to be the best tuning sites for MySQL?
I'm getting into a bit of trouble. We have a weather site, and with all of the traffic we're getting a little tapped out. When the load hits between 134 and 160, the mail clients start to time out. Apache is still pretty fast, although pages take a little longer once you cross a load of 80 (5-second page loads); between 130 and 160 I'm seeing 15-20 second page loads. DA is impossible above 80, but SSH is still very workable.

Apache is tweaked to the max. I kicked up some of the buffer sizes in MySQL several weeks ago, and that did it then. However, we're now taking on about 22,000 to 25,000 uniques an hour. We can normally handle that no problem, but people are asking for maps a lot more now with the flooding and all. That requires a lot of MySQL lookups and a lot of CPU time generating maps. The maps themselves I already cache for the duration, which is 15 minutes.

The only horse I have left to whip is MySQL. After that it will probably be a move to FreeBSD 7, but I'd like to throw in a few tweaks yet before we do that.
Uptime = 0 days 0 hrs 4 min 15 sec
Avg. qps = 17
Total Questions = 4479
Threads Connected = 1

Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations.

To find out more information on how each of these runtime variables affects performance visit: [url]

SLOW QUERIES
Current long_query_time = 10 sec.
You have 1 out of 4491 that take longer than 10 sec. to complete.
The slow query log is NOT enabled.
Your long_query_time may be too high, I typically set this under 5 sec.

WORKER THREADS
Current thread_cache_size = 128
Current threads_cached = 6
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine.

MAX CONNECTIONS
Current max_connections = 2000
Current threads_connected = 1
Historic max_used_connections = 7
The number of used connections is 0% of the configured maximum.
You are using less than 10% of your configured max_connections.
Lowering max_connections could help to avoid an over-allocation of memory.
See the "MEMORY USAGE" section to make sure you are not over-allocating.

MEMORY USAGE
Max Memory Ever Allocated : 96 M
Configured Max Per-thread Buffers : 10 G
Configured Max Global Buffers : 58 M
Configured Max Memory Limit : 10 G
Total System Memory : 3.95 G
Max memory limit exceeds 85% of total system memory.

KEY BUFFER
Current MyISAM index space = 78 M
Current key_buffer_size = 16 M
Key cache miss rate is 1 : 735
Key buffer fill ratio = 8.00 %
Your key_buffer_size seems to be too high. Perhaps you can use these resources elsewhere.

QUERY CACHE
Query cache is enabled.
Current query_cache_size = 32 M
Current query_cache_used = 4 M
Current query_cache_limit = 1 M
Current query cache fill ratio = 14.83 %
Your query_cache_size seems to be too high. Perhaps you can use these resources elsewhere.
MySQL won't cache query results that are larger than query_cache_limit in size.

SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 256 K
Sort buffer seems to be fine.

JOINS
Current join_buffer_size = 1.00 M
You have had 0 queries where a join could not use an index properly.
Your joins seem to be using indexes properly.

OPEN FILES LIMIT
Current open_files_limit = 10000 files
The open_files_limit should typically be set to at least 2x-3x that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine.

TABLE CACHE
Current table_cache value = 1024 tables
You have a total of 721 tables.
You have 93 open tables.
The table_cache value seems to be fine.

TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 32 M
Of 212 temp tables, 0% were created on disk.
Effective in-memory tmp_table_size is limited to max_heap_table_size.
Created disk tmp tables ratio seems fine.

TABLE SCANS
Current read_buffer_size = 1 M
Current table scan ratio = 17754 : 1
You have a high ratio of sequential access requests to SELECTs.
You may benefit from raising read_buffer_size and/or improving your use of indexes.

TABLE LOCKING
Current Lock Wait ratio = 1 : 76
You may benefit from selective use of InnoDB.
If you have long-running SELECTs against MyISAM tables and perform frequent updates, consider setting 'low_priority_updates=1'.
How do I make the changes highlighted in red? My server works well for a while, but then gets REALLY REALLY slow.
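The items the report flags are all server system variables, and most of them are dynamic, so one way to apply them (a sketch; values are illustrative, and the `mysql` client is assumed to already have credentials, e.g. via ~/.my.cnf) is to set them at runtime and then persist them in my.cnf:

```shell
# Apply the flagged settings at runtime; effective immediately, lost on restart.
mysql -e "SET GLOBAL read_buffer_size = 2*1024*1024;"   # for the table-scan ratio
mysql -e "SET GLOBAL low_priority_updates = 1;"         # for the lock-wait ratio
mysql -e "SET GLOBAL long_query_time = 5;"              # per the slow-query advice

# Persist the same values under [mysqld] in /etc/my.cnf so they survive restart:
#   read_buffer_size     = 2M
#   low_priority_updates = 1
#   long_query_time      = 5
```

Enabling the slow query log (which the report notes is off) is what actually tells you which queries to fix; the buffer sizes only paper over them.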
I have a VPS on the west coast of the US, and I access it from the east coast. Sometimes I get 1Mbyte/sec downloads, and other times it is as bad as 250KB/sec.
I have done some pings and have not seen any packet loss. I've experimented with sysctl and changed some parameters hoping to help, but really haven't seen much of a difference.
Does anyone have a recommendation as to what I could do differently to squeeze a little more speed out of the connection? The catch is that from both sides of the US I see ping times of 80ms-120ms (depending on the ISP on the east coast).
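For a sense of scale: the TCP window needed to keep a pipe full is the bandwidth-delay product. The numbers below (8 Mbit/s target at 100 ms RTT, roughly this situation) are illustrative:

```shell
# Bandwidth-delay product: bytes that must be in flight to fill the pipe.
# 8 Mbit/s at 0.1 s RTT -> a 100 KB window; any smaller window caps throughput.
awk 'BEGIN { bw_bits = 8000000; rtt = 0.1; printf "%d\n", bw_bits * rtt / 8 }'
```

So if `net.ipv4.tcp_rmem`'s max on either end is below ~100 KB, ~1 MB/s at 100 ms is the ceiling. If the windows check out and ping shows no loss, the swings down to 250 KB/s are more likely transient congestion or path changes than a sysctl problem; mtr run during a slow spell would show that better than plain ping.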
We'd like to use cPanel backups server-wide (on servers with hundreds of accounts), but we've hit one problem. The server generates the individual account backups much faster than it can offload them to our off-site NAS (some 6MiB/s; the servers have SSDs in hardware RAID 10, so they are fairly fast). The result is that the server fills up to the point where we no longer have any spare space, which will obviously wreak havoc on a busy cPanel server (and probably on the backup process itself).
We are trying to get a faster network to the NAS, but that is not the actual issue, so a faster network is not the proper solution. Networks can be fast, but they can also have a bad day (especially the uncommitted ones). Even worse, the network can be down for a while. We need to do this for multiple servers, and we need to be able to leave the backup process unattended indefinitely without the fear that we'll end up with 100% utilization on "/".
Is there some option to setup the backup in a serial manner: package the first backup, upload it, package the next backup, upload it and so on?
I believe the legacy backup worked this way but the old backup has a very rudimentary (and quite inept IMO) way of managing the backup retention. The new one seems better in this regard (though not perfect).
So...is there some way to tune aspects of the new cPanel backup to do it in a "serial" fashion?
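I don't know of a built-in cPanel switch for this, but one generic workaround is a serializing offload loop: as each account archive finishes, push it to the NAS and delete the local copy before touching the next one, so "/" never holds more than a couple of finished archives. A sketch; the staging path and rsync destination are hypothetical placeholders, not cPanel defaults:

```shell
#!/bin/sh
# Serialized offload sketch (not a cPanel feature). Assumes finished account
# archives appear as .tar.gz files in $STAGE; both paths are placeholders.
STAGE=/backup/current/accounts
DEST="nas::backups/$(hostname)"

for f in "$STAGE"/*.tar.gz; do
    [ -e "$f" ] || continue   # glob matched nothing: no archives yet
    # --remove-source-files deletes each archive only after a successful copy,
    # so a dead network leaves the file in place for a later retry.
    if rsync --partial --remove-source-files "$f" "$DEST"; then
        echo "offloaded $f"
    else
        echo "upload failed, leaving $f for retry" >&2
        break   # stop rather than pile up further failures
    fi
done
```

The upside over parallel offload is exactly what you describe: disk usage stays bounded even when the NAS link has a bad day, at the cost of the total backup run taking longer.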
I am having some serious speed issues with my 1Gbit server at FDC. After opening a ticket, they simply dismissed it as a server configuration problem. However, I am convinced it isn't, because certain ISPs (usually universities) get good speeds, usually 700kb/sec, but the vast majority of my users get between 20-50 kb/sec, and it's causing a lot of complaints.
Furthermore, I have other servers with FDC which are 100mbit and perform better than my 1Gbit one. There are no server bottlenecks (CPU/RAM/HDD); I've monitored them closely (PRTG) and they aren't even heavily utilised. So the problem is with the network at some point.
Speed Test : [url]
Please mention whereabouts you are downloading from, your ISP, and your net connection. Wgets from servers are also welcome, as are traceroutes.
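For anyone reporting back, these are the typical commands (the URL is a placeholder for the test file linked above):

```shell
# Download to nowhere just to measure speed; wget prints the average at the end.
wget -O /dev/null http://example.com/testfile.bin

# Path and per-hop latency toward the server, to spot where it degrades.
traceroute example.com
```

Running mtr instead of traceroute during a slow transfer also shows per-hop loss, which is usually what distinguishes a congested peer from a server problem.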
I want to ask: how can I find out a server's download speed from Rapidshare? I have a Windows dedicated server with a 1Gbit port, and when I download something from Rapidshare it's just 2MB/s. Is that normal? I am now thinking of buying another server, so how can I test the download speed from the Rapidshare site beforehand? I asked some companies about this, but no one would give me a test or anything like that.
I recently set up WAMP on my dedicated server, and I'm unsure whether the slow download speed is from WAMP or something else I need to remove. It might just be my distance from the server, because it's hitting 88.74Mb/s down and 71.19Mb/s up from a local city.
After reading a lot of good reviews of SoftLayer around here, I was thinking about switching to them. They have some really good deals, and I'm planning on getting one of their dual Opterons, but I have a question about processor speed in relation to database-intensive stuff (like a forum with a good number of users on at once, or a CMS).
I have the option of either going for a Dual Opteron 248 (2.2 GHz) or a 252 (2.6 GHz), which costs $50 more. My question: is the 252 really worth the extra money compared to the 248? All I have on my server is an IPB forum with 100-450+ users on at any given time, and a static site (soon to be converted to a CMS). I'm thinking that RAM is more important than the processor for database apps like those (that, and hard drive speed). So would it be better to go with the 248 and spend the money on more RAM and an SA-SCSI 10k drive or two?
I just bought hosting from an American provider, so I am interested in how fast my site loads at different locations around the world, mainly the US, Europe, and Australia. I am from Europe, but my connection is not so great, so I want your input on how fast it loads on your end, and maybe how high the ping is from your provider.
I hope there are some people who want to help me. I won't post a link here, because then people will complain that I am advertising the site...
So if it is not a problem, just post a message in this topic and I will send you the link through a private message.
We have 9 racks, and each rack has 2 Cisco 2950 switches (one for the internet and one for the private network). If we try to transfer files between two servers we can't get over 1mbps, and the same happens when we pull a file from the web. Every server NIC is set to 100mbps, and the same goes for the switch ports. Why do we get such slow speeds?
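A classic cause of ~1mbps on a hard-set 100mbps link is a speed/duplex mismatch: when one end is forced to 100/full and the other is left to autonegotiate, the autonegotiating end falls back to half duplex and collisions destroy throughput. A sketch of how to check and fix it, assuming Linux servers with `ethtool` (the `eth0` name is an assumption):

```shell
# Show what the NIC actually negotiated; look at the Speed and Duplex lines.
ethtool eth0

# If it shows Half duplex, either set BOTH ends to autonegotiate,
# or hard-set both ends identically, e.g.:
ethtool -s eth0 speed 100 duplex full autoneg off

# The matching Catalyst 2950 interface config would be:
#   speed 100
#   duplex full
```

Also check the switch port counters (`show interface`) for late collisions and runts, which are the signature of a duplex mismatch.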