I have recently tweaked my server (AMD 3000+, 1GB RAM, 10Mbps port) by configuring httpd.conf, my.cnf and php.ini.
I am pleased to say that the server is now responding well and the load is always below 1.00.
However, sometimes a user will experience a timeout in their browser. Once they refresh, the server reacts as it should and carries out the request.
I am trying to tweak Apache so that these timeouts do not occur. Here are the changes I have made:
php.ini
Code:
[MySQL]
; Allow or prevent persistent links.
mysql.allow_persistent = Off

httpd.conf
Code:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
MinSpareServers 16
MaxSpareServers 32
StartServers 20
MaxClients 150
MaxRequestsPerChild 5

and finally here is my.cnf file
I have installed the Red5 WAR version in my Tomcat webapps directory,
which I installed using EasyApache 3. The demos/port tester work great, but I am unable to log in to the admin panel at [url]. I get the server address field and the username/password panel, but none of the passwords I try is accepted. Also, with this type of installation I was unable to find register.html to create a login.
I have a web server using Apache 2.4.4 and two application servers with Tomcat 7, on Windows Server 2008 R2 under VMware ESX 5.
I use jvmRoute to load balance between these two application servers, but Apache stops being able to connect to the application servers with no errors in the Apache or Tomcat log files. I have to restart the Apache service, and it crashes again after almost 5 minutes.
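In case my description is unclear, here is a mod_proxy-style sketch of the kind of balancing I mean; the member names, hosts and ports are placeholders rather than my real config, and our actual setup may use a different module:
Code:
# two AJP members; each route= value is meant to match the jvmRoute
# set on the corresponding Tomcat's <Engine> element
<Proxy balancer://appcluster>
    BalancerMember ajp://appserver1:8009 route=tomcat1
    BalancerMember ajp://appserver2:8009 route=tomcat2
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        /app balancer://appcluster/app
ProxyPassReverse /app balancer://appcluster/app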
This is the part of the error log where the crash started:
[Mon Aug 05 00:02:51.980503 2013] [mpm_winnt:notice] [pid 1036:tid 292] AH00422: Parent: Received shutdown signal -- Shutting down the server.
[Mon Aug 05 00:02:54.008507 2013] [mpm_winnt:notice] [pid 3416:tid 200] AH00364: Child: All worker threads have exited.
[Mon Aug 05 00:02:54.024107 2013] [mpm_winnt:notice] [pid 1036:tid 292] AH00430: Parent: Child process 3416 exited successfully.
[Mon Aug 05 00:08:12.932936 2013] [mpm_winnt:notice] [pid 2648:tid 296] AH00455: Apache/2.4.4 (Win64) configured -- resuming normal operations
Issue: I upgraded to Apache 2.4.4 and Tomcat 7.0.33. Accessing the website via HTTPS produces an "Object not found" error. The error logs (server, Tomcat, Apache) show no errors. It was working with Apache 2.2.
Server OS: Windows 2008
Apache: version 2.4.4
Tomcat: version 7.0.33
JRE: version 1.6.0_43
Httpd.conf
I am unable to get past the username and password prompt ("Tomcat Manager Application") presented by http://127.0.0.1:8080. I created a user ID and password in tomcat-users.xml.
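For reference, this is roughly what I put in tomcat-users.xml (the username and password here are placeholders); as far as I understand, the manager-gui role is what the Tomcat 7 HTML manager checks for, and Tomcat needs a restart after the file is edited:
Code:
<tomcat-users>
  <!-- role required by the Tomcat 7 HTML manager application -->
  <role rolename="manager-gui"/>
  <!-- placeholder credentials, not my real ones -->
  <user username="admin" password="changeme" roles="manager-gui"/>
</tomcat-users>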
We want to implement load balancing for our domain: if the traffic is heavy and port 8080 (currently integrated with Apache) cannot serve a request at that moment, Apache should call port 8081 and serve the request without any problem.
We want to access our site www.domain.com (i.e. run on port 80). Please guide us: is it possible or not?
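To make the idea concrete, something like this mod_proxy_balancer setup is what we are picturing for port 80; the balancer name and the 127.0.0.1 addresses are placeholders for our real backends, and we are not sure the hot-standby flag is exactly right for our Apache version:
Code:
# send traffic to 8080 and only fall back to 8081 when 8080 cannot serve
<Proxy balancer://mycluster>
    BalancerMember http://127.0.0.1:8080
    BalancerMember http://127.0.0.1:8081 status=+H
</Proxy>
ProxyPass        / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/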
I have an Apache server (2.4.3) and a Tomcat server (7.0.36) with some Java applications deployed. Everything works fine, but when we start a fairly long Ajax process, I can see in my Java application that the Ajax request is received and starts processing, which is all fine. But while the first request is still processing, I see a second request start after 5 minutes.
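My working guess is that something in front of Tomcat gives up on the long-running request at around the 5-minute mark and it then gets retried, so this is what I am planning to test in httpd.conf; the 600-second value is arbitrary, and the AJP mount point is an assumption about how Apache talks to Tomcat here:
Code:
# raise how long Apache waits on the backend before giving up on a proxied request
ProxyTimeout 600
# or per mount point:
ProxyPass /app ajp://localhost:8009/app timeout=600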
My configuration is Apache 2.2.3 in front of Tomcat over AJP, using mod_proxy_ajp and mod_ssl. We have configured Kerberos, but some users are getting an error: "Size of a request header field exceeds server limit."
Users with headers above 8K get this error; users with headers under 8K can get in fine. How can I increase this header limit in Apache/Tomcat? I have tried multiple suggestions found on Google and other sites.
Here is what I tried:
Adding the following to httpd.conf: LimitRequestFieldSize 65536 and ProxyIOBufferSize 65536
Adding packetSize="65536" to server.xml
Editing a workers.properties file, but we don't have any file with that name on the server.
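For completeness, this is how I understand the settings are supposed to pair up between the two sides; the values and connector port below are just what I have been testing with, not necessarily correct:
httpd.conf
Code:
LimitRequestFieldSize 65536
ProxyIOBufferSize 65536
server.xml (on the AJP connector itself, not the HTTP one)
Code:
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" packetSize="65536" />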
I have a PHP CLI script that I want to run continuously, without ever stopping. However, I noticed that it stops executing after a while on its own.
Is there a way to keep a script running forever without timing out, or daemonize it?
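To show what I mean, here is a stripped-down sketch of the loop (the heartbeat file is just a stand-in for the real work) and how I have been starting it:
Code:
<?php
// forever.php - minimal sketch of a long-running CLI loop
set_time_limit(0);        // the CLI usually has no time limit anyway, but make it explicit
ignore_user_abort(true);  // keep running even if the launching connection goes away

while (true) {
    // stand-in for the real job: write a heartbeat so I can see it is still alive
    file_put_contents('/tmp/forever.heartbeat', date('c') . "\n", FILE_APPEND);
    sleep(10);
}
and started with:
Code:
nohup php forever.php > /dev/null 2>&1 &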
I have had a problem for some time now with my cron jobs. I am trying to download a large amount of data from eBay (through their API, totally legal and aboveboard) using PHP, but my cron job times out.
I have tried raising the timeout setting, but then the download exceeds the maximum file size. So, my question: is there any way to have a script run as a cron job and, when it is complete, call another script?
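In case it clarifies the question, the chaining I have in mind looks something like this at the very end of the first script; the second script's name and path are made up:
Code:
// redirecting output and backgrounding with & stops exec() from blocking
// until the second script finishes, so the cron run itself stays short
exec('php /home/me/scripts/fetch_part2.php > /dev/null 2>&1 &');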
I'm not sure why my brand new dual-processor quad-core Xeon 2.5GHz Harpertown times out when the server load is under 0.5.
It will be running ultra fast and then suddenly I can't get into SSH, WHM, my websites or anything. When I ping it, there is no response. Is it because it restarted itself?
I decided to merge two old dedicated servers into one colocated machine with better specs; the old machines have a combined total of about 280 accounts.
I purchased a Broadberry server and requested a specific setup and OS (CentOS). After some delays they finally got it working and shipped it to the datacentre.
I chose 49pence/RapidSwitch for colocation in the UK
I received an email from 49pence on how they wanted it set up, and Broadberry did this as well, which was good.
Unfortunately, I got the email saying it had been received and installed from RapidSwitch before I received the email with the server admin and password info.
Broadberry set up a very weak password, which was a bit of an oversight,
as within 12 hours of it being installed it was hacked!
Being a UK bank holiday we were unable to do anything till today
And now we are having to retrieve the server to reinstall everything and start again!
I hope the companies involved will be cooperative so we can get this up and running ASAP.
My servers at Coreix end later this month.
A lesson learned for us, and hopefully for anyone reading.
Next time we will have a checklist to make sure nothing gets overlooked!
Luckily there was no data on the server, so no harm done other than cost and time.
I had a strange problem this morning: httpd was running, but nothing was loading. All the other services worked fine, and when I checked the error log I couldn't find anything. I restarted httpd and it's running fine now.
Quote:
[Sat Feb 10 11:48:01 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 11:48:01 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 13:06:02 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 13:06:03 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 13:06:03 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 13:06:03 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 20:42:26 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 20:42:28 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 20:42:28 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 20:42:28 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
Looks just like normal operations... I checked the access log and nothing looked out of the ordinary either.
Anyway the only suspicious thing I saw was the daily scan by spammers to see if I had anything exploitable.
Quote:
[Sat Feb 10 00:16:32 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/a1b2c3d4e5f6g7h8i9/nonexistentfile.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adserver/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpAdsNew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpadsnew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/Ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/drupal/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/community/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogtest/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2evo/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/wordpress/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpgroupware/xmlrpc.php
I don't have anything exploitable like that installed, so I'm thinking that wasn't the cause either.
I checked user_beancounters and there are also 0 fail counts.
I'm working on a script to help users get routed to the nearest, fastest server for the best ping. I'm in two datacenters, one on the east coast and one on the west coast of the US.
I've looked at some of the geo lookup programs based on IP, but they seem either inaccurate or expensive, and just downright difficult to use.
I found out that some geo load balancers use the connection speed to figure out the best route. So I'm trying to think of a way of timing the user's connection from multiple server locations.
Has anyone here done that sort of thing before? Any suggestions on how to best do that?
Two completely different methods I've considered:
1. Putting two images on a web page and using JavaScript to time the loading of them.
2. Pinging the user's IP from each coast and seeing which is fastest. (Is there a lighter way than ping?)
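For option 1, this is roughly the JavaScript I have been sketching; the hostnames and the ping.gif path are placeholders, and the cookie is just one way to remember the result:
Code:
// time how long a tiny image takes to load from each coast and pick the faster one
function timeImage(url, callback) {
    var start = new Date().getTime();
    var img = new Image();
    img.onload  = function () { callback(new Date().getTime() - start); };
    img.onerror = function () { callback(Number.MAX_VALUE); };
    img.src = url + '?nocache=' + Math.random();   // bust any caching
}

timeImage('http://east.example.com/ping.gif', function (eastMs) {
    timeImage('http://west.example.com/ping.gif', function (westMs) {
        document.cookie = 'preferred_dc=' + (eastMs <= westMs ? 'east' : 'west');
    });
});
One thing I'm unsure about is whether a single small image is a stable enough sample, or whether I should average a few requests per coast.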
For over a week now I have had the following network issues:
- browser timing out (for me and visitors to my site)
- ftp connection issues
The server load is low, so it doesn't appear to be server related.
Traceroute TO the server appears fine.
Traceroute FROM the server to users IP's appears to have issues over the SingTel/Optus network.
My webhost says it's an issue for SingTel/Optus.
SingTel/Optus Engineer say: "Our testings point to a problem either within Cogent's network or on a peering link between Cogent and Singtel in LA.
I'd suggest that the owner of the domain (me!) approach his hosting provider and have them escalate to Cogent. We can't escalate to Cogent as we have no peering with them."
So I've been the meat in the sandwich for over a week with no sign of a fix.
My options appear to be to either move the VPS away from the webhost and host it locally (Australia), or to somehow wait for someone to step up, take responsibility and get this resolved.
My heart says wait as it's not *my* responsibility but it's costing me financially and professionally.
Anyone else experiencing similar/same issues from the Asia Pacific region to the US?
I am trying to open EML files on an Apache web server. Currently, when I load an .eml file (an Outlook email message) via a URL, it shows the raw email code; it is not parsing it correctly.
What libraries (Apache or system) do I need installed to parse this kind of file?
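In case it helps, the only idea I have had so far is to tell Apache the proper MIME type so the browser or mail client handles the file instead of showing it as text, something like the line below; I am not sure this alone counts as "parsing" on the server side:
Code:
# httpd.conf or .htaccess: serve .eml files as RFC 822 messages instead of plain text
AddType message/rfc822 .eml
Actually parsing the messages server-side would presumably need something like PHP's mailparse extension, but I have not tried that yet.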
My server runs CentOS. I have a cache directory which has tons of scrap files, and I am unable to delete it: rm -rf dirname gives an error. Is there any way to remove this directory quickly?
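The alternative I was thinking of trying next looks like this; the cache path is a placeholder, and I am assuming the failure is down to the sheer number of entries in the directory:
Code:
cd /path/to/cache            # placeholder path to the cache directory
find . -type f -delete       # remove files one at a time, no giant argument list
cd .. && rm -rf cache        # then remove the remaining (mostly empty) tree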
How can I hide all files and directories in public_html, so that when using apps such as FlashGet's Site Explorer and similar software, they will not show any files or directories in public_html?
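The closest thing I have found so far is turning directory indexes off, though I am not sure it covers everything such tools can do; the directive below would go in a .htaccess in public_html or in the matching <Directory> block:
Code:
Options -Indexes
As far as I understand it, this only stops the automatic listings; files that are actually linked from pages can still be found by a crawler.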
I have a client who is certain someone is trying to hack her web portal. I need to set up something that will alert me to suspicious activity on the server, for example someone fiddling with requests trying SQL/shell injection and similar attacks.
Does any tool exist (for example, a bash script with grep) that would parse the raw Apache logs and report anything suspicious? Apache logs don't show the POST data, so I am talking to the admin about setting up the mod_dumpio Apache module, which enables this.
Or am I going in the wrong direction here, and there is a whole other way to do this? I searched the web and forums for anything like this and didn't find anything.
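In the meantime, the rough shape of the grep-based check I was picturing is below; the log path, the patterns and the email address are all placeholders, so I would treat it as a starting point rather than a proper intrusion detection system:
Code:
#!/bin/bash
# crude scan of the access log for common injection-looking request strings
LOG=/var/log/httpd/access_log
HITS=$(egrep -i "union.+select|etc/passwd|base64_decode|<script|\.\./\.\." "$LOG")
if [ -n "$HITS" ]; then
    echo "$HITS" | mail -s "Suspicious requests on $(hostname)" admin@example.com
fi
Something like mod_security sounds like the more complete answer, but I wanted something simple I could read through first.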
I have moved my sites from old server with Apache 1.3 to new one with Apache 2.2.4. Since that time, the error log is full of these lines:
Code:
[Wed Jul 04 05:36:32 2007] [error] [client 212.47.9.194] File does not exist: /home/domain/public_html/russia
[Wed Jul 04 05:36:39 2007] [error] [client 212.47.9.194] File does not exist: /home/domain/public_html/russia
[Wed Jul 04 05:36:45 2007] [error] [client 213.192.18.2] File does not exist: /home/domain/public_html/italy
[Wed Jul 04 05:36:57 2007] [error] [client 83.8.104.181] File does not exist: /home/domain/public_html/mexico
The access logs show even more accesses, so sometimes the same page is fine and sometimes it is logged here. The strange thing is that these files (pages) exist! They are accessible through the browser without any problem.
Do you have any idea where the problem could be? It would help me a lot; I am unable to find the real problem now that the error log is full of these.
My OS is Ubuntu 6.06. I use mod_rewrite through .htaccess. I can provide a list of Apache modules if it helps.
I noticed that all the sites on the server cannot parse PHP files. I tried restarting httpd and recompiling using the Apache update / EasyApache scripts, but the problem persists.
index.php is in DirectoryIndex, and AddType shows the PHP extension active in httpd.conf.
But when I type "php -v" from the shell, I get this message:
Code: php: /usr/lib/libmysqlclient.so.14: version `libmysqlclient_14' not found (required by php)
I found someone with the same problem and tested the solution posted there, but it doesn't seem to solve this issue.
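In case it helps anyone spot the mismatch, these are the checks I have been running; the binary and library paths are from my box and may differ on yours:
Code:
# which MySQL client library the PHP binary is actually linked against
ldd /usr/bin/php | grep -i mysql
# which versions of the client library are installed
ls -l /usr/lib/libmysqlclient.so*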
I'm moving to a dedicated server for the first time in my life, and I have some questions that I want to share with you all.
I run many scripts like Joomla, vBulletin, etc., and I have found that when I install a module or template, or when I upload images using these scripts, the permissions of the files are wrongly assigned: they belong to apache:apache instead of myuser:mygroup.
The problem is that, as my new server is going to be fully managed, I am not going to have root access to chown these files to the right myuser:mygroup, and as a consequence I am going to be unable to move or delete them using my FTP or SSH user account.
I hope you can help me with this issue. I need to tell my hosting company what I want to do to avoid this way of working, but I have no idea how this can be solved.
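What I am planning to ask the hosting company for is either to run PHP as my own user (suPHP, mod_ruid2 or FastCGI, whatever they support), or failing that a scheduled ownership reset along these lines; the user and group are from my example above, and the path is a guess at my home directory layout:
Code:
# would need to run as root, which is exactly what I can't do on a managed box
chown -R myuser:mygroup /home/myuser/public_html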
I recently upgraded my Apache 2.2.22 installation on Windows 8.1 to 2.4.9, making all the necessary changes (I believe) to the conf files. I am puzzled that two files in the format authdigest_shm.xxxx now appear in my logs directory when the server is restarted. (Edit: there is also no httpd.pid file.) I assume this has to do with running digest authentication, but it is a new phenomenon since the upgrade. What conf file setting(s) have I screwed up?
When a user enters the full URL to a file on the web server, he/she can view this file. I want to prevent this and only allow access to the files from within the application (under Apache). How can I do that? I have already tried:
<Directory /var/www/html/folder/files>
    order deny,allow
    allow from localhost
</Directory>
This works BUT the file also isn't viewable from within the application anymore.
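The only other approach I have come across is to deny direct access entirely and let the application stream the file itself after its own check; download.php, the session check and the query parameter below are placeholders for whatever the application really does:
Code:
<Directory /var/www/html/folder/files>
    order deny,allow
    deny from all
</Directory>
and then something like:
Code:
<?php
// download.php - hypothetical handler inside the application
session_start();
if (empty($_SESSION['user_id'])) {          // placeholder for the app's real auth check
    header('HTTP/1.0 403 Forbidden');
    exit;
}
$file = basename($_GET['f']);               // basename() to avoid ../ path tricks
$path = '/var/www/html/folder/files/' . $file;
if (is_file($path)) {
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . $file . '"');
    readfile($path);                        // PHP reads the file from disk, so the deny doesn't block it
} else {
    header('HTTP/1.0 404 Not Found');
}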