I'm having an issue with my current robots.txt file, which is not properly blocking content from being accessed. What I want is to allow only bots like Google, Yahoo, MSN, Bing, and the Alexa ranking bot, and block all other bots. My current robots.txt file is below.
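For reference, an allow-list robots.txt along those lines might look roughly like this (a sketch, not the poster's actual file: the user-agent tokens shown are the commonly documented ones, with Slurp for Yahoo, msnbot/bingbot for MSN/Bing, and ia_archiver for Alexa; note that robots.txt only restrains crawlers that choose to honour it):

```
User-agent: Googlebot
Disallow:

User-agent: Slurp
Disallow:

User-agent: msnbot
Disallow:

User-agent: bingbot
Disallow:

User-agent: ia_archiver
Disallow:

User-agent: *
Disallow: /
```

An empty `Disallow:` means "allow everything" for that bot, while `Disallow: /` in the final catch-all group blocks everyone else.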
3. For 'Disallow', shouldn't I only include URLs that are linked from public pages, and not the ones I use for testing, which aren't linked to from any public page?
4. If I do include such URLs in 'Disallow', aren't I simply alerting spiders (and anyone else who wants to see which sections of my server I'd rather keep unknown) to content they'd otherwise never discover?
I use Outpost Firewall to view active connections to my server. If I don't restart the httpd service on a regular basis my server will grind to a halt from being flooded by robots.
I currently have the service set up to restart at midnight and noon every day. Sometimes that's enough; lately it's not. For example, I checked an hour ago and had 385 connections to httpd. At least 50% of them were robots: tons of the same IP addresses, just crawling the site.
Almost all of the connections show less than 1 KB received and 0 bytes sent per connection.
I already have a good 20 connections from these robots, and the connection time shows as 11 minutes... I just browsed to a web gallery page on my site, figuring that would be mildly "intensive" connection-wise with all the thumbnails, and my own connections aren't lasting more than a minute.
So, what's with all these connections lasting 10+ minutes? I've even got one connection with an uptime of 30 minutes, 65,811 bytes sent and 180 bytes received. It seems like these robots don't terminate their connections correctly...
What can I do to stop these connections from jamming up my server? It's like a very, very slow DoS...
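If the offenders ignore robots.txt, one common stopgap (assuming Apache 2.2-style syntax; the bot names below are placeholders you'd replace with whatever shows up in your access log) is to deny them by User-Agent and shorten the timeouts that let idle connections linger:

```
# Tag abusive crawlers by User-Agent (placeholder patterns)
SetEnvIfNoCase User-Agent "BadBot|HungryCrawler" bad_bot

<Directory "/var/www/html">
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot
</Directory>

# Keep idle or very slow connections from tying up workers for 10+ minutes
Timeout 60
KeepAliveTimeout 5
```

The `Timeout`/`KeepAliveTimeout` tweaks address the long-lived connections directly, independent of which bot is behind them.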
I run a free hosting service where users can sign up for a free blog.
With wildcard DNS and an A record, users who sign up for a blog get the address theirblog.mydomain.com.
This all works, but it doesn't scale. I have reached the limit of what my server can take.
If possible I would like to run multiple servers with just one domain.
So that when server 1 reaches its limit, new users can still sign up and get theirblog.mydomain.com (same domain), but their blog is physically stored on server 2.
Is this possible if you use wildcard DNS for the users' subdomain sites?
I am quite new at this. I know there are ways to spread one domain across several servers, but is it also possible with this setup, using wildcard DNS and subdomains?
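One property of wildcard DNS that may help here: an explicit record always takes precedence over the wildcard. So the wildcard can keep pointing at server 1 as the default, and any blog placed on server 2 just gets its own A record (IPs below are placeholders):

```
; zone file for mydomain.com (sketch)
*.mydomain.com.         IN  A  192.0.2.1   ; server 1, the default
newblog.mydomain.com.   IN  A  192.0.2.2   ; this blog lives on server 2
```

The alternative design is to keep the wildcard pointing at a single front-end reverse proxy that routes each subdomain to the right back-end server, which avoids touching DNS on every signup.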
I bulk-register .com names for future use; out of all the ones I own, only about 30 have actual sites, whilst about 200 are just not doing anything.
I have seen places like GoDaddy where whatever domain people visit just gets forwarded to a landing page, and I have set things up so the nameservers all point to my DNS servers (I manage my own infrastructure).
I have seen that through IIS it is possible to have a "catch-all" website that any traffic goes to; however, how do I set this up in DNS?
I tried creating a zone called ".com", but that didn't work, and I really don't know what I should do!
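DNS has no cross-domain catch-all: a ".com" zone won't work because your server is only ever asked about zones it is authoritative for. What you can do is declare each parked domain as a zone but point them all at one shared zone file (BIND syntax; domain and file names below are placeholders):

```
// named.conf (sketch): every parked domain reuses the same zone file
zone "parked-example1.com" { type master; file "db.parked"; };
zone "parked-example2.com" { type master; file "db.parked"; };
```

Because the shared file uses @-relative names, the same data answers correctly for any domain that loads it, and generating two lines like these per domain is easy to script.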
Does anyone know: if you have a wildcard SSL cert for *.domain.com, meaning you can use anything.domain.com, can you also use anything.anything.domain.com (i.e. unlimited subdomain levels)?
So the serverB A record is on serverA, which is fine (I am aware it's a risk if serverA goes down).
If I'm setting up a subdomain of serverB, such as site1.serverB.maindomain.com, I have to set up an A record on serverA for that subdomain, and I don't want to do this every time.
Is there any way I can set up a wildcard A record for subdomains, such as *.serverB.maindomain.com?
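In BIND at least, wildcards aren't limited to the top level of a zone, so a couple of lines like these in maindomain.com's zone file on serverA should do it (IP is a placeholder):

```
; maindomain.com zone on serverA (sketch)
serverB     IN  A  203.0.113.10
*.serverB   IN  A  203.0.113.10   ; matches site1.serverB, site2.serverB, ...
```

Any name under serverB.maindomain.com that doesn't have its own record will then resolve to that address automatically.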
Now that I have relieved a lot of frustration, I can talk about my issue. We have a cPanel server and are trying to install a wildcard SSL certificate on it. As you can tell from the above, it's not working out so well.
The cert was ordered and installed as *.domain.com. It works great on https://domain.com and [url].
However, when I try [url], it's an epic fail. It's a separate cPanel account, even with its own dedicated IP address. I can see the :80 entry in httpd.conf; however, I need to add a :443 entry in a way that cPanel/WHM won't overwrite the next time I add an account.
When I try to install the SSL certificate in WHM on jeff.domain.com, it tells me the domain does not match. So how do I get this to work? I have searched the net for quite a while and have not come across anything that works.
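I can't speak for every cPanel version, but the vhost entry needed is basically a standard SSL VirtualHost along these lines (the IP, paths, and filenames are placeholders; cPanel stores cert files in different locations depending on version, and custom entries tend to survive rebuilds best when placed in cPanel's custom include files rather than edited directly into httpd.conf):

```
<VirtualHost 203.0.113.7:443>
    ServerName jeff.domain.com
    DocumentRoot /home/jeff/public_html
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/wildcard.domain.com.crt
    SSLCertificateKeyFile /etc/ssl/private/wildcard.domain.com.key
</VirtualHost>
```

The same *.domain.com cert and key files are reused for every one-level subdomain vhost, since the wildcard matches them all.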
One of my customers asked me if I can enable wildcard DNS for him on Apache. I just want to know: are there any security issues with this? Could enabling it be a security problem for my server?
I am not much of a DNS/server expert, and I really need help setting up wildcard DNS for domain names.
I have one parent domain zone file. Let's call this domain "parentdomain.com". This domain is set up with the private nameservers ns1.parentdomain.com and ns2.parentdomain.com, and everything works fine.
What I want now is the ability to set ns1.parentdomain.com and ns2.parentdomain.com as the nameservers on any other domain on the web and have it forward to parentdomain.com's web root.
How do I do this WITHOUT adding a zone file for each of these individual child domains?
The thing is, I have a project where I don't know in advance which domains will have their nameservers set to ns1/ns2.parentdomain.com, but I want them all to forward to the appropriate directory.
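Each domain does still need to be declared as a zone somewhere, but the zone data itself can be one shared template, because @ and * are interpreted relative to whichever zone loads the file (the IP below is a placeholder for parentdomain.com's web server):

```
; db.parked -- one shared zone file usable by many domains (sketch)
$TTL 3600
@   IN  SOA  ns1.parentdomain.com. hostmaster.parentdomain.com. (
             1          ; serial
             7200       ; refresh
             3600       ; retry
             1209600    ; expire
             3600 )     ; negative-cache TTL
@   IN  NS   ns1.parentdomain.com.
@   IN  NS   ns2.parentdomain.com.
@   IN  A    198.51.100.5
*   IN  A    198.51.100.5
```

On the web server side, a catch-all Apache vhost (`ServerAlias *`) can then route every unrecognized Host header to the parentdomain.com web root.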
I've got the VirtualHost entries for Apache, so each domain points to a folder.
The (Windows) DNS runs on the server, which means I have to create a new DNS reverse-lookup zone and a folder for each domain separately, but this workflow seems pretty clumsy...
Now my question:
Can anyone tell me how to set up something like a "wildcard/catch-all DNS", and also a "catch-all VirtualHost", so that each domain is automatically pointed to the right folder? Are any scripts needed for that?
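On the Apache side, mod_vhost_alias can do the catch-all part without any scripts; a sketch (the C:/webroot path is an assumption, and mod_vhost_alias must be loaded):

```
# Catch-all vhost: the docroot is derived from the requested hostname
UseCanonicalName Off
<VirtualHost *:80>
    ServerAlias *
    # %0 expands to the full Host header, so a request for
    # www.example.com is served from C:/webroot/www.example.com
    VirtualDocumentRoot "C:/webroot/%0"
</VirtualHost>
```

On the DNS side, Windows DNS accepts a host record literally named `*` in a zone, which covers the wildcard half; each domain still needs its own forward-lookup zone, though.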
"Wildcard sub-domains will not work with our standard shared hosting accounts. Although the DNS configuration is available to have any sub-domain not listed direct to an IP address, the actual hosting account will not recognize a wildcard DNS entry unless you have manually added the specific name of the sub-domain."
The only part I understood was the first sentence. Maybe it is my lack of hosting-related knowledge, or maybe it's because English is not my native language, but the second sentence is very confusing to me.
I am setting up wildcard nameservers that are meant to resolve any domain pointed at them. I need this for my domain parking service.
I was able to set up wildcard A records with BIND, and that's good. However, some registrars seem to look for SOA records, and it seems that BIND doesn't allow creating wildcard SOA records. Therefore I need to switch to other DNS software, but I can't find any that supports wildcard SOA records.
I've just got a new server with WHM/cPanel. I was upset to find that, after I successfully installed my *.example.com wildcard SSL cert, it didn't operate how I expected...
I am not a domain squatter (i.e. I don't have thousands and thousands of domains). Though I do have a good number of domains that I am not doing anything with, because I register domains to start new projects (that never get started, sigh)...
Anyway, is it possible to point the nameservers of mydomainexample.com to ns1.mymaindomain.com and ns2.mymaindomain.com and, without actually parking it or creating a zone file / httpd.conf entry, have it redirect to mymaindomain.com?
(i.e. can I create a zone like *.* or a virtual host *.* and have anything that isn't in the config files point to my main domain?)
Hopefully my post makes sense. If it is possible, please also describe how to set it up.
Say, for example, I have a wildcard SSL cert for *.foo.com. This will cover bar.foo.com all right, but will it cover zort.bar.foo.com, or would I have to get a cert for *.bar.foo.com (or even *.*.foo.com)?
I'm trying to create a symlink (ln -s) over SSH, with the goal of making a PHP file reachable from my wildcard subdomains "username.domain.com". The reason is the XMLHttpRequest that lives in the PHP file: when trying to access it from username.domain.com I only get an error, because of the cross-domain issue.
Anyway, I got the suggestion of creating a symlink on the file system, but I can't really get the symlink right... Where should I place it?
This is the path to the script:
Code:
/home/web2753/domains/domain.com/public_html/ajax/status.php
I tried creating the symlink in various places, like in the /domains/ directory:
Code:
ln -s /home/web2753/domains/domain.com/public_html/ajax/status.php
But I don't seem to get it right! If I have understood everything correctly, I'm supposed to create a symlink to status.php so the subdomains can access it as if it were placed directly under them.
This is what my .js file looks like (with the XMLHttpRequest; this might not matter..?):
Code:
var cururl = 'htp domain com'; // this forum didn't like this url?

function createRequestObject() {
    var req;
    if (window.XMLHttpRequest) {
        req = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        req = new ActiveXObject("Microsoft.XMLHTTP");
    } else {
        alert('Problem creating the XMLHttpRequest object');
    }
    return req;
}

// Holder for the div id that the current request belongs to.
function handleDivTag(divtag) {
    this.divtag = divtag;
}

var http = createRequestObject();
var divhandler = new handleDivTag(null);

function sendRequest(ua_id, show, series) {
    http.open('get', cururl + 'ajax/status.php?ua_id=' + ua_id +
        '&show=' + show + '&series=' + series +
        '&dummy=' + new Date().getTime());
    http.onreadystatechange = handleResponseTwo;
    divhandler.divtag = ua_id;
    http.send(null);
}

function handleResponseTwo() {
    if (http.readyState == 4 && http.status == 200) {
        var response = http.responseText;
        if (response) {
            document.getElementById('editinfo' + divhandler.divtag).innerHTML = response;
        }
    }
}
Everything works except the cross-domain issue, which I'm trying to overcome by creating the symlink.
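For what it's worth, once the symlink makes ajax/status.php reachable under each subdomain, the cross-domain error can also be sidestepped in the JavaScript itself by requesting the script with a site-relative URL, so the request always targets the same origin the page was loaded from. A sketch (the helper name `buildStatusUrl` is mine, not part of the original code):

```javascript
// Use a site-relative base instead of an absolute URL to the main
// domain, so the XMLHttpRequest stays on username.domain.com and
// never trips the same-origin restriction.
var cururl = '/';

// Hypothetical helper mirroring how the original sendRequest builds
// its query string, including the 'dummy' cache-buster parameter.
function buildStatusUrl(ua_id, show, series) {
    return cururl + 'ajax/status.php?ua_id=' + ua_id +
           '&show=' + show + '&series=' + series +
           '&dummy=' + new Date().getTime();
}
```

With this, `http.open('get', buildStatusUrl(ua_id, show, series))` would hit each subdomain's own copy of the script via the symlink.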