When I talk with my friends who are into computers, they say it's important that the provider I'm about to host my site with has good ping response, so that my site will load faster.
We're considering moving from our current VPS host, whose datacenter is in California, to a host with a datacenter in Chicago, for support reasons. This would take us from 15 ms ping times to around 75 ms. All of our clients will be running their own WordPress installations, and most of them are on the west coast as well. Is the ping time difference significant or not?
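Whether the extra 60 ms matters depends less on a single ping and more on how many round trips a page load stacks up (DNS, TCP handshake, the HTML, then its assets). A rough sketch, with the round-trip count a pure assumption:

```shell
# Assume ~30 round trips per WordPress page view (handshakes, HTML, CSS, JS, images);
# each one pays the extra 60 ms of the Chicago move
echo "$(( 30 * (75 - 15) )) ms extra per page load"
```

Close to two seconds is noticeable, so for interactive work like a chatty wp-admin session the difference is real; for cached or CDN-fronted public pages it matters much less.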
I have a lot of international users connecting to my server, from all around the world: the Philippines, Germany, etc. I'm thinking those users experience distance lag (latency?). How do I cut down on this distance lag? Would upgrading from 10 Mbps to 100 Mbps do any good?
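Probably not: bandwidth and latency are different things. Going from 10 Mbps to 100 Mbps shrinks transfer time, but the round trip over distance stays exactly the same; only moving content closer to users (a CDN, or an extra server in Asia or Europe) cuts it. A back-of-the-envelope sketch, with the page size an assumption:

```shell
PAGE_KB=500                                  # assumed page weight
echo "transfer at 10 Mbps:  $(( PAGE_KB * 8 / 10 )) ms"    # kbit / Mbps = ms
echo "transfer at 100 Mbps: $(( PAGE_KB * 8 / 100 )) ms"
echo "round trip to the Philippines: ~300 ms either way"
```

So the upgrade saves a few hundred milliseconds of transfer on a heavy page, but the distance-bound round trips, which repeat for every connection and request, are untouched.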
The server is up and running, so the server itself is not the issue. I therefore suspect there must be a DNS problem somewhere.
The common denominator among those having the problem is that they are on high-latency connections (e.g. satellite). Could high latency be the cause? If so, is there anything I can do so my users stop having this problem?
I'd like to know if I could use a remote desktop remotely, as I have about 350 ms latency to the server where I am planning to install it. I am planning to use the RemotelyAnywhere server.
One of my apps is based on querying the YouTube API. My response times are badly slow: something like 2 to 3 seconds. The guys on the YouTube API Developer Forum suggested that the response time should be more like under 0.5 seconds.
Would you guys do me a favor and post your results for this command:
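Assuming the command in question is a timing test, curl's write-out timers are a common way to measure this; the URL below is only a placeholder:

```shell
# Split one request's total time into DNS, connect, and full-transfer phases
curl -o /dev/null -s \
  -w 'dns %{time_namelookup}s  connect %{time_connect}s  total %{time_total}s\n' \
  'https://example.com/'
```

If total is 2-3 s while connect stays small, the time is being spent on the API side; if connect itself is slow, it's the network path from your server.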
1) We have 3 web servers, each with IIS and ColdFusion. When updating the site, which setup is better:
a) upload the changed file to all 3 web servers, keeping them in sync
b) move the source files to our storage server, then change the site root on the web servers to point to a network share on the storage server
Main issue: Will the network latency of fetching the source files be a performance problem?
2) We have a storage server that will serve up some audio/video via HTTP. Which setup is better:
a) expose it to the Internet and serve the files directly to users via its own IIS
b) create a network share and let the 3 web servers serve the files
If you think long and hard about this issue you'll realize there are many pros and cons to each approach. I can't seem to make up my mind.
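For question 1, the latency worry is measurable rather than a matter of taste: serving the webroot from a share adds a per-request network stat/read that local disk doesn't, and IIS/ColdFusion will stat templates constantly. A quick comparison sketch (both paths are placeholders, run from one of the web servers):

```shell
# Rough latency check: time a read from local disk vs the mounted network share
time cat /local/webroot/Application.cfm > /dev/null
time cat /mnt/storage_share/webroot/Application.cfm > /dev/null
```

In practice most shops pick option (a) with a sync or deploy step, precisely because webroot-on-a-share couples all three servers to the storage box's availability and pays that per-hit latency.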
I'm hosting some domains on a WHM setup. One of the domains has outgrown the shared hosting setup, so I'm moving it to its own VPS. I want to limit the downtime, and I understand I should lower the TTL on the domain.
The registrar is Network Solutions and the nameservers point to the shared host (which is on a WHM/cPanel setup). How can I lower the TTL on this domain? Do I have to move the domain to a more advanced DNS service to achieve this, or is this something I can do within WHM?
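You shouldn't need Network Solutions or an outside DNS service for this: since the nameservers are the shared host's, the zone lives in WHM (Edit DNS Zone), and each record there has its own TTL field. An illustrative record only, with placeholder name and IP:

```
; drop the TTL (in seconds) from 86400 to 300 a day before the move,
; so resolvers re-check within five minutes of you flipping the A record
www.example.com.    300    IN    A    203.0.113.10
```

After the move completes and things settle, raise the TTL back up so resolvers stop re-querying constantly.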
Processor #2 Vendor: GenuineIntel
Processor #2 Name: Intel(R) Core(TM)2 CPU 6700 @ 2.66GHz
Processor #2 speed: 2660.000 MHz
Processor #2 cache size: 4096 KB

Why is the Processor #1 speed labeled as 1.6 GHz? Processor #2's speed never goes down no matter how high the load is. Could that be the reason my server can't handle 4 websites with a cumulative total of 20k unique hits per day?
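The 1.6 GHz figure is almost certainly frequency scaling (Intel SpeedStep): an idle core reports its stepped-down clock and ramps back up under load, so by itself it's unlikely to be why the box struggles. You can watch it from a shell (the sysfs paths assume the cpufreq driver is loaded):

```shell
# Per-core current clock; an idle core shows the reduced value
grep MHz /proc/cpuinfo
# With cpufreq loaded, the governor and live frequency are exposed here:
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
```

20k uniques/day across 4 sites is usually well within a Core 2's reach; if the server struggles, the application and MySQL are more likely suspects than the clock readout.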
I'm having a problem with high MySQL CPU usage on my server; one of my sites is getting hit pretty hard right now and MySQL is just killing the box. It's averaging a load of over 20, and CPU usage is around 130%.
Here is my my.cnf file. Is there anything in there that should be changed to help lower the CPU usage?
# The MySQL server
[mysqld]
port = 3306
socket = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 256M
max_allowed_packet = 1M
table_cache = 512
max_connections = 500
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size = 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 2
[mysqldump]
quick
max_allowed_packet = 16M
[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates
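Before shuffling buffer sizes, it helps to see which queries are actually burning the CPU; a minimal addition to the [mysqld] section, using the 5.0/5.1-era directive names (the log path is an assumption), would be:

```ini
# Log statements slower than 2 s, plus ones doing full table scans,
# then index or rewrite the worst offenders
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
```

130% CPU at load 20 is usually a handful of unindexed queries being hammered rather than a my.cnf problem as such, so the slow log tends to point at the fix faster than buffer tuning.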
On a cPanel server, lightly loaded but with some fairly large sites (~3 GB stored), loads get pretty high during cPanel backups (daily/weekly/monthly to a secondary drive, compression on). RAM shows as mostly used during this time (977,555 KB out of 1,026,348 KB), and iowait is around 50, sometimes quite a bit higher at ~80 on the larger accounts; not pegged there, but fairly steady. This box only has 1 GB of RAM, so I'm thinking adding another gig would alleviate the issue.
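Before buying RAM, it's worth confirming which resource the backup actually starves; on Linux, "mostly used" memory is often just reclaimable page cache. While a backup runs, something like:

```shell
free -m       # the buffers/cache figures are reclaimable, not truly in use
vmstat 5 3    # high 'wa' = disk-bound; nonzero 'si'/'so' = genuinely out of RAM
```

With iowait at 50-80 and compression on, the box is more likely disk-bound during the backup than memory-starved; running the backup job under nice/ionice, or staggering the schedules, may help more than a second gigabyte.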
I'm gathering info for getting a new dedicated server, planning on using my own colocated hardware, but still looking at what's available in dedicated servers at the same time.
There are lots of dedicated servers being offered at prices lower than 1U of colocated rackspace. How is that possible? What am I missing?
I host my DNS with DNSMadeEasy.com. I noticed that I get more than 350,000 DNS requests per day for my main domain. The domain gets about 80,000 uniques/day, so it's strange that there can be 350,000 DNS requests/day. It seems I'll go over my quota because of this.
The TTL for all domains is set to 86400.
Is there a way to discover how this is possible? And is there anything I can do to lower the number of DNS requests?
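The arithmetic makes the number look less strange: each unique visitor sits behind a resolver that typically looks up more than one record (the bare domain's A record, www, MX for any mail, NS), and most visitors are behind different resolvers, so the day-long TTL dedupes far less than you'd hope:

```shell
# 350k queries over 80k uniques works out to only ~4 lookups per visitor
echo "$(( 350000 / 80000 )) lookups per unique visitor (integer math)"
```

Since the TTL is already at 86400, there is little left to squeeze; that query volume is roughly what this traffic level should generate.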
I have a small issue that's probably easy to answer. If I upload a zip file to a Linux server, and run this command via SSH:
Code:
unzip -a name_of_zip.zip
Although it does unzip the directories as expected, it makes all file and folder names lowercase. This is a problem when trying to install software that relies on case-sensitive names.
Does anyone know what command tells the server to retain the file names and not alter them?
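For what it's worth, in Info-ZIP's unzip it's the -L switch, not -a, that lowercases names (-a handles text-file conversion), so plain `unzip name_of_zip.zip` should normally preserve case; check `alias unzip` for a stray -L. If your build still case-folds, one workaround that extracts names exactly as stored is Python's zipfile module:

```shell
# Python's zipfile never case-folds; extracts into ./extracted
python3 -m zipfile -e name_of_zip.zip ./extracted
```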
I can't get access to a certain site. I always get the page with:
network timeout - the server at *** takes too long to respond. More people have noticed this, and apparently it only happens to people on certain specific providers, and not all the time: sometimes they DO get access even though they belong to the same ISP. So I guess an ISP isn't blocking access to it; otherwise it would be permanent. The site administrator insists that certain ISPs are blocking his site. He's hosting it on his own server, and the domain is registered at namecheap.com.
If an ISP were blocking this site (if that's even possible?), that would lead to that 'network timeout' page, wouldn't it?
What is the most likely reason for getting a timeout page anyway?
I have a dedicated server. Specs: AMD 3500+ 64-bit CPU, 1 GB RAM, 160 GB SATA drive. For the past month, the CPU load average has been reaching 40-50. This happens about 5-6 times a day. When I stop the httpd service for 30 seconds, everything goes back to normal. I don't think this is a DoS attack, because it comes so systematically; I don't believe anyone but bots would do this regularly.
Maybe it's a system service or a cronjob, but would it stop when I turn off the httpd service? How can I be sure about what's causing this recurring load?
I also set up a script which emails me when the system load average goes crazy and restarts the httpd service. But an immediate restart is not enough to stop the load from climbing.
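Since restarting httpd wipes the evidence, have the alert script grab a snapshot before it restarts anything; mod_status would show the exact URLs being hit, if it's enabled (an assumption). A sketch:

```shell
# Capture the top CPU consumers at the moment of the spike, timestamped
ps -eo pid,pcpu,pmem,etime,args --sort=-pcpu | head -n 15 \
  > "/tmp/load-spike-$(date +%s).txt"
# If mod_status + ExtendedStatus are enabled:
# apachectl fullstatus
```

Five or six regular spikes a day also fits a crawler or cron-driven job hitting a heavy script through Apache, which would explain why stopping httpd clears it even though the trigger is external.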
The server goes down from time to time: every 12 days or so the site hosted there becomes inaccessible. It starts with the site slowing down more and more, until it is no longer reachable. What we do is request a power cycle, and with that we start all over again until the next power cycle, and so on. Here are my server details and more info:
- MySQL: 5.1.41-3ubuntu12.10
- Apache: 2.2.14-5ubuntu8.4
- PHP: 5.3.2-1ubuntu4.9
- Operating system: Ubuntu Server 10.04 LTS
After some time emailing the support guys to get them to actually check what was going on, we received an email with a few points:
1.- We found a few errors that would likely cause issues with Apache. The first error is:

[Mon Feb 04 05:03:10 2013] [error] mod_fcgid: fcgid process manager died, restarting the server

and the next error is:

[Mon Feb 04 14:32:34 2013] [error] server reached MaxClients setting, consider raising the MaxClients setting ...
Both of these errors seem to indicate that you have a process running out of control on your server. We were unable to determine which script on your site caused your connections to be maxed out; however, it does appear that before these errors were generated, a WordPress plugin was referenced in your access logs...
2.- Additionally, during our review we found that your error log for mercadodedinerousa.com is 45 GB, which is excessively large and can cause problems when Apache tries to write to such a large file.
3.- The majority of the errors being logged are:

[Wed Feb 06 12:12:31 2013] [error] [client 200.76.90.5] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/vhosts/mercadodedinerousa.com/httpdocs/index.pl, referer: [URL]
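On point 2, the immediate relief is truncating that 45 GB log in place; on point 3, allowing FollowSymLinks stops it from regrowing. The log path below is a guess based on the Plesk-style vhost layout in the support mail:

```shell
# Truncate in place so Apache's open file handle stays valid (guessed path)
: > /var/www/vhosts/mercadodedinerousa.com/statistics/logs/error_log
```

Then add `Options +FollowSymLinks` to the vhost (or the .htaccess where the RewriteRule lives) to stop the flood, and put the domain's logs under logrotate so the file can't balloon again.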
My server's DC technician changed the cable to another switch in another rack. All of a sudden, my server went 'offline'.
The weird thing is, I can still ping the server IP.
When I plug in my KVM, I can ping google.com from my server.
But when I lynx google.com, it keeps saying 'HTTP request sent; waiting for response.'
When I open PuTTY and SSH in, it just keeps waiting.
When I try to ssh from another server, it keeps waiting:

$ ssh school@xxx.xxx.xxx.xx
[nothing happens]
$ ping xxx.xxx.xxx.xx
PING xxx.xxx.xxx.xx (xxx.xxx.xxx.xx) 56(84) bytes of data.
64 bytes from xxx.xxx.xxx.xx: icmp_seq=1 ttl=63 time=1.10 ms
64 bytes from xxx.xxx.xxx.xx: icmp_seq=2 ttl=63 time=0.864 ms
64 bytes from xxx.xxx.xxx.xx: icmp_seq=3 ttl=63 time=0.854 ms
- I can ping my server from MS-DOS
- I can ping my server from Iptools.com
- I can ping my server from another server

But:
- I cannot ssh into my server
- I cannot ssh into my server from another server

Inside my server:
- I can ping google.com, yahoo.com, and other servers
- But I cannot lynx to and open any websites
When I restart the network, it shows the output captured from my KVM [refer to attachment].
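ICMP working while every TCP connection hangs (ssh in both directions, lynx outbound), right after a switch move, points at the new path rather than the server: the classic suspects are an MTU blackhole or a filter on the new switch port, since small pings pass but full-size TCP segments don't. Worth checking from the KVM (the interface name is an assumption):

```shell
ip addr show     # address and MTU on the NIC still what you expect?
ip route show    # default gateway unchanged?
ip neigh show    # does the gateway's MAC resolve on the new switch?
# Probe for an MTU blackhole: 1472 + 28 header bytes = a full 1500-byte frame
# ping -c 3 -M do -s 1472 <gateway-ip>
# ip link set dev eth0 mtu 1400   # temporary workaround if large packets drop
```

If the full-size ping with DF set fails while small pings succeed, ask the DC to check the new switch port's configuration.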