Does Internap "performance base" their routing in the upstream direction only (e.g. set LocalPref on received routes from their carriers), or also "optimize" routing for downstream traffic (e.g. set communities to force their carriers not to readvertise to certain other networks)?
I'm interested in hiring someone to go through our Windows VPS and do a bit of a tune-up and investigation to check if everything is running as well as it should be.
In particular, 95% of memory seems to be constantly in use, mostly by perl.exe, and I just want to be convinced that this isn't being caused by a problem somewhere before I upgrade.
Just wondering if anyone provides this kind of service, or knows someone good who does? I'd prefer a recommendation rather than just handing over the administration login to anyone...
I am running on a VPS which so far seems to be great, but at times I see some pauses. I have been working to tune MySQL and so far have seen some good improvements. I am confused by some sample my.cnf files which show dedicated myisamchk buffers. Do the normal buffers not count?
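For what it's worth, the dedicated myisamchk buffers in those sample my.cnf files live in their own section and are read only by the offline myisamchk check/repair utility, not by the running server, so they don't count against mysqld's buffers or its memory footprint. A minimal sketch, with placeholder values rather than recommendations:

```ini
[mysqld]
# Buffers used by the running server
key_buffer_size  = 80M
sort_buffer_size = 2M

[myisamchk]
# Read only by the myisamchk command-line tool during offline
# table checks/repairs; allocated only while that tool runs,
# so it does not add to mysqld's memory usage
key_buffer_size  = 128M
sort_buffer_size = 128M
```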
My current VPS has 768MB of dedicated RAM and is on a new 8-way server running Virtuozzo. On UnixBench I score between 190 and 240.
I am running 3 sites, only one of which has notable traffic. It is a phpBB2 site with 30-50 concurrent users. The database is roughly 300MB and 4 million records.
As stated, at times there seem to be pauses, and I am unsure if this is simply related to the nature of the VPS or if I have some tuning left to do. Below are the stats I have gathered; I can provide more information if needed.
I will say that the query cache does not seem to benefit phpBB2, as I was only getting a 1-2:1 hit ratio.
Uptime = 0 days 20 hrs 56 min 1 sec
Avg. qps = 7
Total Questions = 535268
Threads Connected = 5

Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations

To find out more information on how each of these
runtime variables effects performance visit: [url]

SLOW QUERIES
Current long_query_time = 10 sec.
You have 996 out of 535280 that take longer than 10 sec. to complete
The slow query log is NOT enabled.
Your long_query_time may be too high, I typically set this under 5 sec.

WORKER THREADS
Current thread_cache_size = 50
Current threads_cached = 46
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine

MAX CONNECTIONS
Current max_connections = 120
Current threads_connected = 4
Historic max_used_connections = 50
The number of used connections is 41% of the configured maximum.
Your max_connections variable seems to be fine.

MEMORY USAGE
Max Memory Ever Allocated : 292 M
Configured Max Per-thread Buffers : 486 M
Configured Max Global Buffers : 90 M
Configured Max Memory Limit : 576 M
Total System Memory : 768.00 M
Max memory limit seem to be within acceptable norms
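For reference, the script's memory figures come from multiplying the per-thread buffers by max_connections and adding the global buffers. A sketch using the values from this report (the thread_stack default and the ~10M of miscellaneous global caches are my assumptions, not numbers from the report):

```python
# Rough reproduction of the tuning script's memory estimate.
MB = 1024 * 1024

# Per-thread buffers, as shown elsewhere in this report; thread_stack
# is assumed at the MySQL 4.1 default of 192K.
per_thread = (2 * MB          # sort_buffer_size
              + 764 * 1024    # read_rnd_buffer_size
              + 1 * MB        # join_buffer_size
              + 128 * 1024    # read_buffer_size
              + 192 * 1024)   # thread_stack (assumed default)

max_connections = 120

global_buffers = (80 * MB     # key_buffer_size
                  + 10 * MB)  # InnoDB/query-cache defaults (approximate)

# Worst case: every allowed connection fills all its buffers at once.
max_memory = per_thread * max_connections + global_buffers
print(round(max_memory / MB))  # roughly the 576 M "Configured Max Memory Limit"
```

The "Max Memory Ever Allocated" line is the same formula with the historic max_used_connections (50) in place of max_connections, which is why it is much lower.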
KEY BUFFER
Current MyISAM index space = 117 M
Current key_buffer_size = 80 M
Key cache miss rate is 1 : 30980
Key buffer fill ratio = 70.00 %
Your key_buffer_size seems to be too high.
Perhaps you can use these resources elsewhere

QUERY CACHE
Query cache is supported but not enabled
Perhaps you should set the query_cache_size

SORT OPERATIONS
Current sort_buffer_size = 2 M
Current record/read_rnd_buffer_size = 764 K
Sort buffer seems to be fine

JOINS
Current join_buffer_size = 1.00 M
You have had 2779 queries where a join could not use an index properly
You should enable "log-queries-not-using-indexes"
Then look for non indexed joins in the slow query log.
If you are unable to optimize your queries you may want to increase
your join_buffer_size to accommodate larger joins in one pass.

Note! This script will still suggest raising the join_buffer_size
when ANY joins not using indexes are found.

OPEN FILES LIMIT
Current open_files_limit = 1130 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine

TABLE CACHE
Current table_cache value = 500 tables
You have a total of 204 tables
You have 438 open tables.
The table_cache value seems to be fine

TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 32 M
Of 45794 temp tables, 4% were created on disk
Effective in-memory tmp_table_size is limited to max_heap_table_size.
Created disk tmp tables ratio seems fine

TABLE SCANS
Current read_buffer_size = 128 K
Current table scan ratio = 929 : 1
read_buffer_size seems to be fine

TABLE LOCKING
Current Lock Wait ratio = 1 : 47
You may benefit from selective use of InnoDB.
If you have long running SELECT's against MyISAM tables and perform
frequent updates consider setting 'low_priority_updates=1'
mysqlreport:
MySQL 4.1.22-standard  uptime 0 20:57:20  Thu Dec 13 18:07:02 2007

__ Key _________________________________________________________________
Buffer used    50.00M of  80.00M  %Used:  62.50
  Current      59.10M            %Usage:  73.87
Write hit      37.16%
Read hit      100.00%
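As a side note, the "Key cache miss rate is 1 : 30980" line earlier and mysqlreport's "Read hit 100.00%" are the same two status counters (Key_read_requests and Key_reads) expressed two ways. A sketch with invented counter values chosen to match this report:

```python
# Both key-cache health figures derive from the same counters
# (see SHOW STATUS LIKE 'Key_read%'); these values are invented
# to match the ratios shown in this report.
key_read_requests = 30_980_000  # index block reads served from the cache
key_reads = 1_000               # index block reads that had to hit disk

# tuning-primer style: one disk read per N cache requests
miss_ratio = key_read_requests // key_reads
print(f"Key cache miss rate is 1 : {miss_ratio}")  # 1 : 30980

# mysqlreport style: the same data as a hit percentage
read_hit = 100 * (1 - key_reads / key_read_requests)
print(f"Read hit {read_hit:.2f}%")  # rounds up to 100.00%
```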
I see that FDC is offering $12 per Mbit/sec for InterNAP bandwidth. I'm really interested in this offer. Is anyone colocating servers at FDC and using their InterNAP bandwidth? If so, could you please share some reviews and your experience?
My colo charges about $80/Mbit; they use InterNAP. Is that reasonable? This is on the west coast, in California.

That brings me to my other question: how do you know what's a good network? How does Hurricane Electric compare to InterNAP?

fixedorbit.com shows HE on the top 10 list with a lot of peering. I don't see InterNAP on that list at all! Does that mean it's not as good?

Is more peering always better? (I guess we assume the network provider isn't overselling and isn't cramming a lot of customers into a single port, etc...)
I am getting my quote back Tuesday but need a little bargaining power with these guys...
Oakland, Ca datacenter
40 Mbps, 20A, 42U rack.
What price range should I be looking at here? How much per Mbps?
The only info I've seen is from 2003, when people were saying $200/Mbps. Obviously prices have come WAY down. I've seen people on here reselling Internap bandwidth for $12/Mbps, but they might have bought a huge commit.
We are in the process of starting a new project for a client and we are trying to decide which network to place it on.
We have a choice of a Level3/Time Warner mix or pure Internap. Obviously the Internap bandwidth is a bit more expensive, but since this customer's website serves an international community, we think Internap bandwidth would be well worth the cost.
What are the advantages of using Internap? How is the network performance? We've set up a machine on the Internap network and have begun running tests, but I would like to hear from people who have direct experience with Internap bandwidth.
After running 6 or so dedicated servers purchased through several different resellers, my company decided to get a rack at the Chicago InterNAP DC.
The quote we got was $3,400 per month inclusive of cabinet, Usage based 10/100, and Cross connects.
Have a couple questions.
A) Is that price in the ballpark of where it should be?
and
B) Our quote states usage-based 100 Mb Ethernet (10 Mbps minimum), Tier 2 at $150/month and $1,400 for the 10.00 Mbps base. Being new to this, I have no idea exactly how much bandwidth we can use before the "usage" fees kick in.
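Usage-based colo bandwidth is typically billed on the 95th percentile: the provider samples your traffic rate (commonly every 5 minutes), sorts a month of samples, discards the top 5%, and bills the highest remaining sample against your 10 Mbps minimum. Whether your contract works exactly this way is something to confirm with the provider, but the arithmetic is simple:

```python
# 95th-percentile billing sketch: sort the month's 5-minute traffic
# samples, throw away the top 5%, bill on the highest remaining one.
def billable_mbps(samples):
    ordered = sorted(samples)
    cutoff = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[cutoff]

# Hypothetical month of 1000 samples: 95% of the time at 8 Mbps,
# with short bursts to 40 Mbps the other 5% of the time.
samples = [8] * 950 + [40] * 50

print(billable_mbps(samples))  # 8 -> bursts in the top 5% are free;
                               # here you'd just pay the 10 Mbps minimum
```

The practical upshot is that you can burst hard for up to about 36 hours a month (5% of the time) without it affecting the bill; sustained traffic above the committed 10 Mbps is what triggers overage.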
As we finish our migration plans to Cisco OER, I would like to get everyone's thoughts on the low-latency "brand name" Internap bandwidth.

Do you think the high-priced brand name will hold its value now that Cisco has finally released OER on what a large number of datacenters already use as their primary core platform? In my eyes the FCP and the Avaya/RouteScience platform just lost a lot of value. The OER product looks very complete and works excellently in testing; the final verdict will be in what the platform actually does.
If you are wondering Cisco OER information can be found here [url]
Who are the people/companies with a good reputation in the WHT community who provide Internap or Peer1 colo? It seems it's mostly in LA or NY; I know both Peer1 and Internap have a presence in other places, so is anyone reselling out of anywhere else?
Pure Internap or Pure Peer1 hosts only please. The only one I know of right now is H4Y.us
I have been reading quite a bit lately about the Internap FCP. I am wondering how much it actually improves network performance and how it compares to BGP4. We currently use BGP4, but are considering using a data center with the Internap FCP for a project for a client.
I am looking for reviews from others who have experience with the Internap FCP and its performance. How does it compare to a network using plain BGP4? I know the FCP uses more intelligent routing than BGP, but how big an improvement does it make?
I wonder how it affects network performance. Will the network be faster, and by how much? Normal routers can choose the best routes too, correct?
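For intuition: plain BGP4 has no visibility into latency or loss; all else being equal it prefers the route with the shortest AS path, while an optimizer like the FCP probes actual path performance and can override that choice. A toy sketch (the carriers and numbers are invented):

```python
# Toy contrast between BGP's default tie-breaking and a
# measurement-based optimizer (FCP-style). Numbers are invented.
routes = [
    {"carrier": "A", "as_path_len": 2, "measured_rtt_ms": 95},
    {"carrier": "B", "as_path_len": 3, "measured_rtt_ms": 40},
]

# Plain BGP (local-pref etc. equal): shortest AS path wins,
# even though that path is slower right now.
bgp_choice = min(routes, key=lambda r: r["as_path_len"])

# FCP-style: pick the path that actually performs best at the moment.
fcp_choice = min(routes, key=lambda r: r["measured_rtt_ms"])

print(bgp_choice["carrier"], fcp_choice["carrier"])  # A B
```

So yes, ordinary routers do pick "best" routes, but "best" in BGP terms means policy and path length, not measured speed; the improvement from an FCP depends entirely on how often those two disagree for your traffic.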