I'm using Apache 2.4.4 as a part of WAMPSERVER 2.4 under Windows Server 2008 to host a business web application. I recently added a tool that involves users accessing the application from their mobile devices connected through our wifi network.
The issue occurs when a user is connected via our external address (x.x.x.x:8080). This issue does not occur if connected through our local address (192.168.x.x:8080).
First, the user makes a POST request to our server. The server handles the request and redirects the user to a GET version of the same page. Standard stuff.
However, if the user leaves the network while this process is occurring, the entire server will stop sending back responses. Localhost, x.x.x.x:8080, and 192.168.x.x:8080 all fail to receive responses until the Apache service is restarted.
If that isn't bizarre enough, my access.log continues to be filled with requests and my apache_error.log doesn't report any issues and even reports when the server is being restarted.
At first, I thought this might be a routing problem, since it only occurs when connected through our external address and, as far as I know, it isn't typical for ALL Apache child processes to become unresponsive. However, not being able to access the application through even localhost made me rethink that. So I'm thinking it's a Windows or Apache issue, but that's as far as I've gotten.
This makes our application nearly unusable because it's a system for punching in and out of work, so it's likely people will be trying to punch out while leaving the building and network, causing this issue.
I'm getting very slow responses (from 1-2 seconds up to tens of seconds) from http://localhost using the Apache 2.4.3/Win32 service on Windows 8/32-bit (6.2 build 9200). It doesn't depend on the browser used. Responses from all other sites are much faster.
Because it seemed to me to be a DNS issue, I tried uncommenting the "127.0.0.1 localhost" and/or "::1 localhost" lines in the hosts file, putting "http://127.0.0.1" or "http://[::1]" instead of "http://localhost" into the browser, flushing DNS with ipconfig /flushdns, stopping the DNS Client with "net stop dnscache", disabling IPv6, etc., but nothing worked.
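For reference, these are the two lines in question once uncommented in C:\Windows\System32\drivers\etc\hosts:

Code:
127.0.0.1       localhost
::1             localhost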
It seems that the hosts file is being ignored and "localhost" is passed to external DNS. Pinging localhost yields a "Reply from ::1: time<1ms" response, while pinging 127.0.0.1 yields the expected "Reply from 127.0.0.1: bytes=32 time<1ms TTL=128" response.
Finally, "unplugging the network cable" is the only solution I've found: with the cable unplugged, the connection to localhost is much faster.
I have a managed box with Rackspace (call this green). On this I am running a couple of distance learning sites, e.g. howtofish.com (Moodle on Apache), and video streaming (lighttpd). This is all working fine.
I'm now looking at moving our "normal" website (Drupal on Apache) to a managed server with Rackspace (call this blue).
The two parts will be running on two different machines, but I'd like to be able to move sites/applications between them if need be due to load, maintenance work, or failure.
For example, if the normal website server goes down, temporarily move that site onto the distance learning/media server, or vice versa. Or, if after a while we get more traffic on the distance learning server than the website, move the media serving over to the normal website server.
Hope that makes sense - I'm not looking at a proper cluster here, just two servers I can juggle things between if need be.
The way I'm thinking of doing it is having both machines with the same setup (Apache on port 80 and lighttpd on port 81 for streaming), and using vhosts on both machines set up for all the sites, i.e. green.flyfishing.com and blue.flyfishing.com each have vhosts set up for flyfishing.com, howtofish.com and fishingvideo.com.
This way, traffic goes to whichever server the DNS for flyfishing.com points to, and the same for the other services.
Then do a nightly backup of the MySQL databases to the other server and keep the files rsynced.
Then if a server goes down, or I want to move x from green to blue, I just load the backup of the green database for the relevant application on the blue server and point the DNS entry for the app to blue.
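A rough sketch of what that nightly green-to-blue job could look like, assuming cron and SSH access between the boxes; the database name, credentials and paths below are placeholders, not anything from the real setup:

Code:
#!/bin/sh
# Nightly sync from green to blue (placeholders: backupuser, PASSWORD,
# flyfishing_db, /var/backups, /var/www).
DATE=$(date +%Y%m%d)
mysqldump --opt -u backupuser -pPASSWORD flyfishing_db \
    | gzip > /var/backups/flyfishing_db-$DATE.sql.gz
# Ship the dump and keep the web roots mirrored on the standby box
scp /var/backups/flyfishing_db-$DATE.sql.gz blue.flyfishing.com:/var/backups/
rsync -az /var/www/ blue.flyfishing.com:/var/www/

Run it from cron on green with something like: 0 2 * * * /usr/local/bin/nightly-sync.sh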
I am trying to configure Apache 2.2 on Linux Mint 17 (derived from Ubuntu 14 LTS).
I want to create a variety of localhost sites, all for development. One of those is built on Laravel 4. I have followed every tutorial I can find, yet for some reason I do not understand, ALL my sites route back to the Laravel root document when called from the browser. I just don't get it.
Here is my hosts file:
Code:
127.0.0.1   localhost
127.0.1.1   vince-XPS-8300
127.0.0.1   auburntree
127.0.0.1   example
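For comparison, per-site vhosts for Apache 2.2 would normally look something like the sketch below; the DocumentRoot paths are made up, and only the hostnames come from the hosts file above:

Code:
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName auburntree
    DocumentRoot /var/www/auburntree
</VirtualHost>

<VirtualHost *:80>
    ServerName example
    DocumentRoot /var/www/example
</VirtualHost>

If a request's hostname matches none of the ServerName values, Apache falls back to the first vhost defined, which is one common way every local name ends up at a single (e.g. Laravel) DocumentRoot. On Ubuntu-derived systems each vhost usually lives in its own file under /etc/apache2/sites-available/ and is enabled with a2ensite.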
Not long ago somebody hacked one of our customer accounts through a vulnerability in the phpBB Album module and uploaded some scripts. These then started sending Nigerian spam using exim and Apache. The scripts were found and deleted, and the Album module was removed entirely. But when I look at the processes now, I see that exim and httpd still start very often, so system resources are probably being overused by them ......
I have installed Apache 2.2.22 on Windows Server 2008 R2. I want to upload a file using the HTTP PUT command to the "uploadtest" folder on the server.
1. I have configured the "uploadtest" folder to accept files without any authentication (Anonymous_NoUserID On):
<IfModule alias_module>
    Alias uploadtest G:\DataImportSvc\UploadTest
</IfModule>
<Directory "G:\DataImportSvc\UploadTest">
    <FilesMatch "\.(enc|xml|zip)$">
[Code] ....
We are using .enc files, so I allowed that file type (see the config sketch after this list).
2. "uploadtest" folder has right permission to everybody.
3. We are using a Windows CE client to send the file with the HTTP PUT command, using HttpOpenRequest with lpszVerb = "PUT".
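For what it's worth, Apache's default static-file handler does not act on PUT by itself; one common way to accept PUT is mod_dav. Purely as a sketch under that assumption - the module lines, lock-file path and directory layout below are illustrative, not taken from the actual server:

Code:
# Assumes the standard Apache 2.2 Windows build with the DAV modules present
LoadModule dav_module modules/mod_dav.so
LoadModule dav_fs_module modules/mod_dav_fs.so

DavLockDB "G:/Apache/var/DavLock"

Alias /uploadtest "G:/DataImportSvc/UploadTest"
<Directory "G:/DataImportSvc/UploadTest">
    # Dav On lets mod_dav service PUT (and the other DAV verbs)
    Dav On
    Order allow,deny
    Allow from all
</Directory>

With that in place, a PUT to http://server/uploadtest/file.enc should be written into the aliased folder, provided the Apache service account can write there.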
I am using Apache as a reverse proxy. I have several sites over HTTP and everything is working fine. For the first time I have tried to configure HTTPS on port 443 with a certificate; the problem is that Apache does not return to the browser the certificate I have indicated in the VirtualHost, but rather the site's default certificate.
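For reference, a name-based HTTPS vhost normally needs its own ServerName and certificate directives, roughly as in the sketch below (the hostname, certificate paths and backend URL are placeholders). If the requested hostname matches no ServerName, or the client does not send SNI, Apache falls back to the first/default :443 vhost and serves its certificate instead.

Code:
<VirtualHost *:443>
    ServerName app.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.example.com.key

    # Reverse proxy to the internal application
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>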
Many of the VPS providers I've looked at don't back up the servers that house the VPSs and as such don't back up the VPSs either - have you considered what would happen should the VPS itself be erased (such as what happened to vaserv/fsckvps not too long ago)?
I have set up a Xen VPS and installed a package called OSSIM. The Xen VPS is based on CentOS 5. There are no web hosting panels such as cPanel or Plesk installed. I am going to proceed with a few more software installations on it, such as Elastix.
However, before proceeding further, I want to back up the entire VPS, i.e. all the data on it.
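One simple approach from inside the guest is a full-filesystem tarball, excluding the pseudo-filesystems; a rough sketch, with the backup destination as a placeholder:

Code:
# Full-filesystem backup taken from inside the CentOS 5 guest
tar -czpf /backup/full-vps-$(date +%Y%m%d).tar.gz \
    --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/backup /
# Then copy the archive off the VPS, e.g.:
scp /backup/full-vps-*.tar.gz user@backuphost:/srv/vps-backups/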
I've always thought, and it usually is like that, that backups are part of any quality VPS hosting service. But see below; this is part of the TOS listed by quite a popular VPS hosting provider.
"Your use of - - - servers and services is at your sole risk. - - - is not responsible for files and/or data residing on your VPS. While complimentary backups may be provided by - - -, you agree to take full responsibility for files and data transferred to/from and maintained on your VPS and agree that it is your own responsiblity to take backups of data residing on your VPS."
I want to set up a cron job to make daily backups of my database, turning my site off first.
This is how I envisage it working:
1: rename '.htaccess' (in the public_html folder for the site) to .htaccess-open
2: rename '.htaccess-closed' to .htaccess // this closes the site down so no-one can write to/access the DB (they are basically shown a 'site down for maintenance' page - I already have the code for this)
3: mysqldump --opt (DB_NAME) -u (DB_USERNAME) -p(DB_PASSWORD) > /path/to/dbbackup-$(date +%m%d%Y).sql // this backs up the database
4: wait for 3 to finish
5: rename '.htaccess' to .htaccess-closed
6: rename '.htaccess-open' to .htaccess // this opens the site back up
Is this easy enough to do? Anyone got any tips/pointers?
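Under those assumptions, the six steps map fairly directly onto a small shell script; the site path, DB_NAME, DB_USERNAME and DB_PASSWORD below are the same placeholders as in the outline:

Code:
#!/bin/sh
# Daily DB backup with the site closed for maintenance while the dump runs.
SITE=/home/user/public_html        # placeholder path to the site
cd "$SITE" || exit 1

mv .htaccess .htaccess-open        # 1: stash the normal rules
mv .htaccess-closed .htaccess      # 2: show the maintenance page

# 3 & 4: dump the database; the script naturally waits for this to finish
mysqldump --opt -u DB_USERNAME -pDB_PASSWORD DB_NAME \
    > /path/to/dbbackup-$(date +%m%d%Y).sql

mv .htaccess .htaccess-closed      # 5: put the maintenance rules away
mv .htaccess-open .htaccess        # 6: reopen the site

Run it daily from cron with something like: 0 3 * * * /path/to/this-script.sh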
I would like some tips on how to create backups with WHM/cPanel on a Reseller account, if you could.
Also where to store them?
I was thinking of storing them on a VPS or dedicated server that just holds backups, or should I buy an external hard drive for my computer and store them on that? I have a 500GB hard drive.
We have a client transferring to us from HG. They have created a full backup from cPanel and we have tried restoring it from WHM. All the databases, domains and folder structure are restored fine; the problem is that all the files are restored into cgi-bin rather than into their corresponding folders.
How can we fix this, or is there an easier way to transfer an HG account?
I would like to make some kind of script (probably .sh?) that automatically takes a directory, makes a copy of it, then makes a gzipped tar of that copy and shoots it over to an FTP server. I would like this to happen twice a day (i.e. every 12 hours).
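A minimal sketch of that kind of script, assuming the command-line ftp client is available; the source directory, FTP host and credentials are placeholders:

Code:
#!/bin/sh
# Copy a directory, tar+gzip the copy, and push the archive to an FTP server.
SRC=/var/www/mysite                # placeholder: directory to back up
STAMP=$(date +%Y%m%d-%H%M)
WORK=/tmp/backup-$STAMP

cp -a "$SRC" "$WORK"               # take the copy
tar -czf /tmp/backup-$STAMP.tar.gz -C /tmp "backup-$STAMP"

# Non-interactive FTP upload (FTP_HOST, FTP_USER, FTP_PASS are placeholders)
ftp -n FTP_HOST <<EOF
user FTP_USER FTP_PASS
binary
put /tmp/backup-$STAMP.tar.gz backup-$STAMP.tar.gz
bye
EOF

rm -rf "$WORK" /tmp/backup-$STAMP.tar.gz    # clean up local copies

For the twice-a-day schedule, a crontab entry like 0 */12 * * * /usr/local/bin/backup-to-ftp.sh would run it every 12 hours.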
I just recently changed the server my website was being hosted on (my website being a small hosting biz) and my site has been replaced by 1and1's temporary landing page. The thing is, I have no idea or experience of what to do to put my pages back up. I don't even think I have them saved anywhere...
If I get a dedicated server, is there a way or a software program I can use to back up my server to a hard drive at my house? Or is it better to just pay the monthly fee, get a second hard drive installed, and back it up on their server?
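Backing up to a drive at home is usually just a pull over SSH; a sketch, with the hostname, user and paths as placeholders:

Code:
# Run from the home machine: pull the server's data down to a local
# (e.g. external USB) drive over SSH. Only changed files transfer each run.
rsync -az -e ssh root@your-server.example.com:/var/www/ /mnt/backup-drive/server/www/
rsync -az -e ssh root@your-server.example.com:/etc/    /mnt/backup-drive/server/etc/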
I am so mixed up now that it's hard to even figure out what I need to know, so let me tell you a story.....
It started about a week ago, when I first started on the server configuration module of my CIW course. I began to get confused about the topic of permissions and access levels in IIS 5. Because of this confusion I started to backtrack through what I actually knew (or rather, what I thought I knew) about networks. I now find myself realizing that I am not even sure about the basic types of networks. Anyway, thank you for your patience so far, and without further ado, here come the obligatory questions.
I thought that a peer-to-peer network was any network without a dedicated server, so the hosts (computers) on the network communicated with each other directly. I also thought that once you added a server to the network, all the shares that were once stored on the individual computers were moved to the server, and that the individual hosts no longer actually communicated with each other to access files, printers, etc.
However, now I am starting to think that I am wrong about this, and that the hosts may indeed still keep some shares on themselves to be accessed by the other hosts on the network.
It is this method of ACCESS that is confusing me.
QUESTION 1 Do the hosts now have to ask the server to fetch the shares on the other hosts, or can each host still have direct access to any other host?
QUESTION 2 A book I have states "a peer-to-peer network does not regulate user access from a central point". To me, this implies that using a server on the network somehow centralizes access, BUT ACCESS TO WHAT?? Does this mean access to the server that has just been installed, or that the server is responsible for giving permission for host "A" to connect to host "B" to access the shares stored on host "B"?
QUESTION 3 The same book also states (regarding user-level access and some kind of access list) "this access list can be central to a particular server or to an entire network". WHAT THE HELL DOES THAT MEAN??? Does it mean that this list can either be stored on the SERVER (central) or on EACH HOST ("entire network")?
I hope one of you guys can figure out, at least, where I am getting confused, because the more I read, the more I seem to tie myself up.
The issue is that a lot of my emails seem to be bouncing back at the moment with the subject 'Subject: Warning: could not send message for past 1 day'
Could anyone explain why this might be the case? I've had a look online and the only thing I can think of is that my email address has been greylisted. To solve this, it was recommended to use an SPF record.
I did not really want to start messing around with my DNS before I could get some confirmation that this is the case, though if there is another explanation, please let me know.
The emails that I have sent have all had PDFs attached to them.
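For reference, an SPF record is just a TXT record in the sending domain's DNS; a common minimal form looks like the line below, where the domain and IP address are placeholders for your own:

Code:
example.com.   IN TXT   "v=spf1 a mx ip4:203.0.113.10 ~all"

This says mail for the domain may come from its A record, its MX hosts and the listed IP, and that anything else should soft-fail.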