I'm looking into KnownHost, and they offer twice the bandwidth in their California location for the same money. Jay from KnownHost said I should pick the one with the best ping times. I'd like to post the information here so someone can perhaps tell me which is my best choice.
I'll put the stats here, and if you could tell me which one is better (Texas vs. California), that would be great.
But if you think the difference between the two is only marginal (both really good), then could you help me decide about getting double the bandwidth for the same price?
I know nothing, of course, but the two data centers both look like they're giving great speeds (Texas being better, though). If both speeds are great, can anyone tell me why someone would not take the higher-bandwidth offer?
Thanks, I really appreciate any help with this!
Here is the info...
Texas (ping):
PING 65.99.213.7 (65.99.213.7) 56(84) bytes of data.
64 bytes from 65.99.213.7: icmp_seq=1 ttl=56 time=1.26 ms
64 bytes from 65.99.213.7: icmp_seq=2 ttl=56 time=1.35 ms
64 bytes from 65.99.213.7: icmp_seq=3 ttl=56 time=1.41 ms
64 bytes from 65.99.213.7: icmp_seq=4 ttl=56 time=1.22 ms
64 bytes from 65.99.213.7: icmp_seq=5 ttl=56 time=1.51 ms .............
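To compare the two sets of stats, a quick way to reduce each paste to a single number is to average the time= fields. A small sketch that reads standard ping output on stdin:

```shell
# avg_ping: average the RTTs in a saved ping capture. Reads ordinary ping
# output on stdin and prints "avg <ms> over <n> pings".
avg_ping() {
    awk -F'time=' '/time=/ { sub(/ ms.*/, "", $2); sum += $2; n++ }
                   END { if (n) printf "avg %.2f ms over %d pings\n", sum/n, n }'
}
# e.g.:  ping -c 20 65.99.213.7 | avg_ping
```

Running both captures through this gives one average per datacenter to compare side by side.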
I am still trying to diagnose a problem some members have on my forums: when they load a page, it shows a grey screen (my background color) and stops, and after 15-30+ seconds the page finally loads.
I opened a ticket with my server company and they forwarded it to the NOC. The NOC said it was an Apache config problem; the server company said it was fixed and had been due to the Apache log reaching the 2GB limit, so logrotate was installed.
The same problem still existed, so I opened a ticket with my server management company; they tweaked httpd.conf and disabled logs, but the problem still exists.
I asked a third management company about it; they changed some settings in httpd.conf and said it might be due to ads on the sites, so I took out the ads and a stats script.
The problem still exists. The odd thing is that it affects some users and not others. Speed tests to the server show it is very quick, load is low, there is no I/O wait, and I just installed a second GB of memory, so memory is fine.
This is happening to users on separate forums, one running vBulletin and one running IPB, so it looks server/hardware related: an AMD Barton 3000 with 2GB of RAM, nowhere near the bandwidth limit or the 10Mbps port speed limit.
Any ideas? Traceroutes to the server show a timeout at the hop before the site's IP address, every time, but traceroutes out of the server show no timeouts...
I installed the Net Query tool from this site: http://virtech.org/tools/
They say that for ping and trace to work, you must chmod those executables on the server itself to 755. Is this safe, or a security risk? The odd thing is that ping and trace work fine from dnsstuff.com or any other network-tool site.
I was wondering if anyone out there knew of any programs that can monitor and record ping times to various server IPs and graph the output. There used to be a website I used where you could simply register, enter your servers' IP addresses, and it would display a simple web-based, MRTG-style graph showing latency, loss, etc. for any IP you put into the system. I am having some trouble at one of the datacenters where I have co-located servers, and I would like to show them how much packet loss they are having at a given time, or over, say, a 24-hour period.
I'm looking to set up some custom monitoring that ties into my existing systems, which is why I'm not looking at established monitoring services.
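For custom monitoring, a cron job plus a short script can build the same loss/latency history. A minimal sketch (the probe count, IPs, and log path are placeholders); the CSV it writes can later be graphed with RRDtool or MRTG:

```shell
# parse_ping_summary: reduce ping's two summary lines to "loss%,avg_ms".
# Handles the Linux "rtt" and BSD "round-trip" summary line prefixes.
parse_ping_summary() {
    awk -F'[ /%]+' '/packet loss/       { loss = $6 }
                    /^(rtt|round-trip)/ { avg  = $8 }
                    END                 { print loss "," avg }'
}

# record_ping: probe one host and append "epoch,ip,loss%,avg_ms" to the log.
record_ping() {
    ip=$1; log=$2
    stats=$(ping -c 10 -q "$ip" | parse_ping_summary)
    echo "$(date +%s),$ip,$stats" >> "$log"
}

# A cron entry every 5 minutes might look like:
#   */5 * * * * /usr/local/bin/pinglog.sh
# where pinglog.sh calls: record_ping 72.36.229.84 /var/log/pinglog.csv
```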
I'm looking for a shared environment from which I could ping and trace to my equipment, either from cron or from a background daemon I'd write in Perl, Python, etc.
I'd also want it located somewhere on the West Coast or in Texas.
Does anyone know any hosts that would meet the above?
I have a problem with the traceroute to 72.36.229.84. Some users can reach it, but not all.
I've attached below three routes that aren't working and two that work. Looking at the routes that don't work, they all stop at gblx.net (hops 9-10); the working routes don't go through gblx.net. My question: what should I do, what can I do? Is gblx.net the problem?
Not working route:
emil@egenhost:~$ ping 72.36.229.84
PING 72.36.229.84 (72.36.229.84) 56(84) bytes of data...
During these periods of inaccessibility, ping/traceroutes from multiple physical locations around the world show 50-100% packet loss.
During times when the server is accessible, ping times are anywhere from 100ms-700ms and the server does not remain accessible for very long.
I gave my provider traceroutes and pings for the times when it was inaccessible and accessible, and they stated it was not on their side: it was on a hop in the middle between me (and apparently everyone else, since multiple locations around the world were used) and my server. They say it is not in their control and they cannot do anything about it.
I am reasonably sure this isn't just me or my VPS. I am on the phoenix node of PrimaryVPS.
The latest traceroute I did showed something new: a router advertisement claiming the TTL was exceeded...
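One tool worth trying for this kind of mid-path problem (assuming it is installed, or installable, wherever you test from) is mtr, which combines ping and traceroute and shows per-hop loss over many probes; its report output is easy to hand to a provider:

```shell
# 100 probes per hop, plain-text report. Sustained loss that starts at one
# hop and continues through every later hop usually implicates that hop.
mtr --report --report-cycles 100 72.36.229.84
```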
I run traceroutes to many IPs often. Could anyone whip up a script that runs a traceroute when I enter an IP and hit Enter, with the ability to cancel the process (as it gets stuck sometimes) and enter another IP to trace?
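A sketch of that kind of wrapper in plain /bin/sh: Ctrl-C kills only the current trace and returns to the prompt, and a blank line quits.

```shell
# trace_loop: prompt for an IP, trace it, repeat. Ctrl-C kills the running
# traceroute (it dies on the signal), while the trap keeps the loop itself
# alive, so you land back at the prompt instead of exiting.
trace_loop() {
    trap 'echo; echo "trace cancelled"' INT
    while printf 'IP to trace (blank to quit): ' && read -r ip; do
        [ -z "$ip" ] && break
        traceroute -w 2 "$ip"    # -w 2: give up on a silent hop after 2s
    done
    trap - INT
}
# Run it with:  trace_loop
```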
We're looking to bring in a T3 for our small startup hosting company, and when we do traces from multiple locations, the route always runs through a cox.net IP. This concerns me because I don't want our customers to believe they're being hosted on some kid's cable modem. What do you folks suggest? The IP of their outer router is 64.19.96.5. Should it be a concern that we route everyone through a cox.net IP?
I want to use scp to back up files, but I find most tutorials confusing as to which computer is the remote and which is the local. Is the local machine the one you are logged into via the ssh command, or the computer from which you logged into ssh?
Let's say I am on my Windows computer. I open up PuTTY and log in over SSH to a remote Linux computer. What scp command do I enter into the SSH terminal to copy a file from D:\backup on the Windows computer to /home/backup on the Linux computer?
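"Local" in scp is always the machine where you type the command; whatever is written as host:path is the remote side. Since scp has to connect to an SSH server on the other machine, and a Windows desktop usually isn't running one, the practical answer in this scenario is to run the copy from the Windows side with PuTTY's pscp rather than from inside the SSH session (hostnames and account names below are placeholders):

```shell
# From a Windows command prompt (pscp ships with PuTTY), push D:\backup
# up to the Linux server:
pscp -r D:\backup user@linuxbox:/home/backup/

# Going the other way from the Linux shell would require an SSH server
# running on the Windows machine (e.g. OpenSSH under Cygwin):
# scp -r user@windows-pc:/backup /home/backup/
```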
I am getting ready to install APF. I have read multiple articles, but am still confused about the following parameters and what needs to be included in each:
Having my own dedicated server, I have APF installed, and I wanted to see how it blocked IPs, so I had a friend whose IP I knew help me. I added his IP to deny_hosts.rules, thinking that would block him from my server, but it did not. Mind you, the way I added his IP was simply to use an editor and add it to the bottom of the list. Then I got to thinking: does APF only load the rules every so often? If so, how can I tell when or how often the rules load? Also, do I need to add an IP using apf -d IPNUMBER in order for APF to recognize it? I'd appreciate some info on how APF works and how I can add IPs myself that I want blocked and be sure that they are being blocked.
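On the rules-loading question: APF only reads deny_hosts.rules when its iptables chains are (re)built, so a hand-edited line does nothing until a refresh. A sketch of the two usual approaches (the IP is a placeholder):

```shell
# After hand-editing /etc/apf/deny_hosts.rules, reload the chains:
apf -r

# Or let apf append the entry AND insert the live iptables rule in one step:
apf -d 203.0.113.45

# Confirm the block actually reached the kernel:
iptables -nL | grep 203.0.113.45
```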
I am going to place my first physical server in a server room. I wish to use it also as a nameserver for my domains, but I am missing some basic principle here. I can probably configure BIND etc., but how will the servers higher up in the hierarchy learn that this is the nameserver for certain domains? To start with, I have several empty domains (they are not hosted, and so far they use the nameservers of a big company).
I have a basic understanding of their role and how they work in general (mapping domain names to IP addresses).
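On "how do the servers higher up learn": they don't learn it from BIND at all. You tell your domain registrar the names of your nameservers (and, when a nameserver lives inside the domain it serves, its IP as a glue record), and the registrar publishes matching NS records in the parent zone (.com, .org, ...). Once that's done, the delegation can be checked from the outside (example.com and ns1.example.com are placeholders):

```shell
# Ask a parent (.com) server what it delegates for the domain:
dig +norecurse NS example.com @a.gtld-servers.net

# Then confirm your own BIND answers authoritatively:
dig @ns1.example.com example.com SOA
```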
I'll start with my setup/scenario: Plesk - (dedicated company server - serving parent site via WHMCS) ResellerClub - (domain registrar) Cpanel/WHM - (shared server)
1. What would be the recommendation for a new hosting provider when it comes to name server(s)?
2. Do most who have limited resources use the BIND service on each WHM shared server itself? So if you have umpteen shared servers you would have umpteen name servers, as well?
2a. If so is it preferred/recommended to ultimately use completely separate/dedicated server(s) for DNS services for all shared servers? What is common?
3. If using the WHM shared server itself is the common practice, what are its pros and cons?
For the last week or two my VPS keeps getting added to blacklists.
Yesterday I noticed that a website on the server was forwarding mail from a contact form to the client's AOL account. Obviously, scripts were completing the form on the website and the results were being sent to AOL, who would obviously have blacklisted the IP.
I've stopped that now, but we're still getting blacklisted. I've had my VPS provider get exim to record the path of each e-mail sent, and there are no scripts on the server sending out mail that I should be worried about.
Some questions to help me.
How can you identify an outgoing e-mail? Is it by the '=>' characters? If so, is it normal for there to be e-mails being sent out like these?

2007-08-23 19:04:10 1IOH2K-00038j-Jg => /dev/null <shaun[at]sr8.co.uk> F=<aaron_straubegnvu[at]yahoo.com> R=central_filter T=**bypassed** S=0 QT=6s DT=0s
2007-08-23 19:04:10 1IOH2K-00038j-Jg Completed QT=6s

I'm puzzled as to why the server keeps being blacklisted when I can't really see any problems in the log file.
The CBL website (which blacklisted us) says we were added at around 19:00GMT, so I've checked the logs for that time and can't really find much.
On the server there is one account with an autoresponder set as that person is away on holiday.
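For digging through the exim mainlog: '<=' marks a message arriving (including ones generated locally on the box) and '=>' marks a delivery attempt. A small sketch to rank sender addresses around the time of the listing (the log path shown in the comment is the usual cPanel one; adjust as needed):

```shell
# top_senders: read exim mainlog lines on stdin and count messages per
# sender address (field 5 of a "<=" arrival line).
top_senders() {
    grep ' <= ' | awk '{ print $5 }' | sort | uniq -c | sort -rn | head
}
# e.g.:  grep '2007-08-23 19:0' /var/log/exim_mainlog | top_senders
```

A single address (or a single local script user) dominating that list around 19:00 GMT is the usual sign of the account that triggered the CBL listing.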
I recently went from a dedicated server to a Dotster VPS to cut down on price, and also because I do not have as many clients as I once had.
I chose their cPanel Premium plan [url]
Soon after, I realized via the Virtuozzo panel that I have limits on everything. I was pretty upset that none of these limits were posted anywhere on Dotster's site; it was not a welcome surprise.
I want to post some images so maybe you can tell me whether their limits are reasonable or way off.
[url]
I had them raise my diskinodes from 400/500k to 600/700k and also my quotaugidlimit from 100 to 200.
For some reason I have 162 UGIDs, but I have only restored 23 or so accounts on this server, with nothing special running besides the standard services. Shouldn't I be somewhere under 100 UGIDs?
So my main problems are with the following limits:
diskinodes: I have only 35GB used of the 50 allocated, yet their initial quota of 500,000 seemed low; now they have bumped me to 700,000 and I'm almost there.
quotaugidlimit: for only 23 accounts, it's crazy that I had to have their initial limit raised to 200. I have noticed a lot more users like #2121, #13232, #124312 and so on compared to my dedicated server. My dedicated box had about 5; this VPS shows about 30 or so.
kmemsize: their limit is 18,022,400 bytes, which I always seem to be reaching.
privvmpages: the hard limit is 292,912, and I'm usually exceeding this one.
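For limits like these, /proc/user_beancounters inside the VE is more telling than the panel: each row ends in a failcnt column, and any counter whose failcnt keeps rising is the resource actually being denied. A small sketch that filters for those rows:

```shell
# failing_limits: read /proc/user_beancounters-format input and print each
# resource whose failcnt (the last column) is non-zero.
failing_limits() {
    awk 'NF >= 6 && $NF ~ /^[0-9]+$/ && $NF > 0 { print $(NF-5), "failcnt=" $NF }'
}
# Typical use inside the VE:
#   failing_limits < /proc/user_beancounters
```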
Here is a screenshot of my QoS: [url]
Also, I noticed that once I reach/exceed limits, the first things to be shut down are my webmail and cPanel and so on, but the sites stay up. Is there a way of setting which services are shut down in what order? Having mail up is the biggest request; I would rather have FTP and cPanel go down first.
I am really not happy with what is going on, and some user feedback would be great. I really wish Dotster had published a complete breakdown of the limits before I bought.
I want to understand the mechanics of a DDoS attack. I have been doing a lot of reading about them this weekend.
The way I understand it, a DDoS attack is done at the network level. It may be requesting that pages from a given website, or websites, be served up, but it basically affects the entire network. So if 'page display' requests are made to a website hosted at ABC Hosting (example only), to the tune of 15Gbps, then I have to assume that the network will be terribly degraded. If that is so, wouldn't other servers also get taken out?
I believe the architecture of the internet is something like this (example only):
I use Munin to monitor the health of our servers. I can tell by looking at the graphs that there's nothing to worry about; however, I'm struggling to baseline acceptable performance: what would be classed as 'normal' output for some of the more relevant Munin graphs?
I've been looking at the Apache modules, and this is the output from one of our servers:
average of: 300 accesses per minute, 6 busy servers and 4.10MB a minute volume
max of: 1400 accesses per minute, 81 busy servers and 51MB a minute volume
This is a dedicated box running one site.
We have another box that is running approximately 30 sites
average of: 30 accesses per minute, 1 busy server and a 500K a minute volume
max of: 322 accesses per minute, 11 busy servers and a 4MB a minute volume.
These servers are pretty much the same spec: dual-core 64-bit, 4GB of RAM, two SATA disks in RAID1.
I'd like to seek help on how to read the exim log file. I saw the lines below inside it. I'm wondering because realemail@domain.com does not exist among this user's e-mail addresses when I browse his cPanel. So who is sending it? The only correct piece of info is 'pixelxl', which is the username.
I'm trying to understand the colocation business model (for web hosting).
Am I right in assuming the following business model:
1. Rent a portion of (or a full) rack from a datacenter, e.g. calpop.com
2. Buy servers and have them shipped to the datacenter
3. For unmanaged servers, typically most support will be limited to reboots, reinstalls of the OS/control panel, and server hardware issues - correct?
4. Provide basic support yourself, or sign up with companies like bobcare.com to provide support to clients who order a server
One-time cost (for 1-3 years): cost of the dedicated server hardware
Ongoing monthly costs: rack rental + outsourced support (optional)
...plus marketing costs...
Please let me know if I missed something or overlooked any details.
I use my dedicated server to host my own large site and web forum, and I want to stop hosting my own email server so I don't have to manage it. I want to use Google Apps for Your Domain to handle my email, pointing my MX records to Google. However, I am not clear on how this will affect PHP scripts sending email on my server. My vBulletin installation sends 1,000+ email notifications every day, which far exceeds Google's 500/day sending limit, so I obviously can't use their SMTP servers. If I'm sending mail from my own server via PHP, though, and my MX records point to Google's, how can vBulletin send an email from an address at my domain? I've been reading up on how email works, but I just can't seem to figure this out...
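The short answer to the question above: MX records only control where *inbound* mail for the domain is delivered. When vBulletin calls PHP's mail(), the message goes to the local MTA (exim/sendmail/postfix), which connects straight out to each recipient's mail server; your own MX records are never consulted on the way out, so outbound notifications keep working unchanged. The one thing worth adding is an SPF record authorizing both sending paths. A zone-file sketch (the names, the IP, and the Google include target are illustrative; check Google's current documentation for the exact values):

```
example.com.  IN  MX   1   ASPMX.L.GOOGLE.COM.
example.com.  IN  TXT  "v=spf1 ip4:203.0.113.10 include:_spf.google.com ~all"
```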
I see that there are some connections from my server to a remote MySQL server, and I am curious to know which script is making them. (192.168.30.98:40493 207.45.xxx.xx:3306 5339/httpd)
I tried lsof, but it does not point directly to the website running this connection.
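Since netstat already names the owning process (5339/httpd), the remaining step is mapping that Apache child to a vhost or script at the moment the connection is open. A few approaches (the PID comes from the netstat line above):

```shell
# Show everything that child has open; the cwd and any open .php files
# often give the site away:
lsof -p 5339

# Watch the child's activity live; the HTTP request lines passing through
# read/write show the Host: header and URL being served:
strace -p 5339 -s 256 -e trace=read,write

# Or enable mod_status with "ExtendedStatus On" and check /server-status
# while the MySQL connection is open; it lists the vhost and request each
# child PID is currently handling.
```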