I have had a problem for some time now with my cron jobs. I am trying to download a large amount of data from eBay (through their API, totally legal and aboveboard) using PHP, but my cron job times out.
I have tried raising the timeout setting, but then it exceeds the maximum file size.
So, my question: is there any way to have a script run as a cron job and, when it is complete, call another script?
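A minimal sketch of one way to chain the two from cron; the paths and script names below are placeholders, not the poster's actual files:

Code:
#!/bin/bash
# chain.sh -- run the downloader, then start the next stage only if it succeeded
/usr/bin/php /home/user/fetch_ebay.php && /usr/bin/php /home/user/process_data.php

# crontab entry:
# 0 2 * * * /home/user/chain.sh

The && means the second script only starts after the first exits successfully, so each piece can stay under the timeout and file-size limits.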
I have a PHP CLI shell script that I want to run continuously without ever stopping. However, I noticed that it stops executing on its own after a while.
Is there a way to keep a script running forever without timing out, or to daemonize it?
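A common workaround is a shell wrapper that relaunches the script whenever it dies; a rough sketch, assuming the worker lives at a placeholder path and that the CLI's max_execution_time is already 0:

Code:
#!/bin/bash
# keepalive.sh -- restart the worker whenever it exits for any reason
while true; do
    /usr/bin/php /home/user/worker.php
    echo "$(date): worker exited, restarting" >> /home/user/keepalive.log
    sleep 5
done

Started with nohup /home/user/keepalive.sh & it survives logout; a proper init script or daemontools would be the more robust route.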
I'm not sure why my brand new dual-processor quad-core Xeon 2.5GHz Harpertown gets timeouts when the server load is under 0.5.
It will be running ultra fast and then suddenly I can't get into SSH, WHM, my websites or anything. When I ping it, no response. Is it because it restarted itself?
I have recently tweaked my server (AMD 3000+, 1GB RAM, 10Mbps port) by configuring httpd.conf, my.cnf and php.ini.
I am pleased to say that the server is now responding well and the load is always below 1.00.
However, sometimes a user will experience a time-out in their browser. Once they refresh, the server reacts as it should and carries out the command being asked of it.
I am tweaking Apache so that timeouts do not occur; here are the changes I have made:
php.ini
Code:
[MySQL]
; Allow or prevent persistent links.
mysql.allow_persistent = Off

httpd.conf
Code:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
MinSpareServers 16
MaxSpareServers 32
StartServers 20
MaxClients 150
MaxRequestsPerChild 5

and finally here is my.cnf file
I decided to merge two old dedicated servers into one colocated machine with better specs; the old machines have a combined total of about 280 accounts.
I purchased a Broadberry server and requested a specific setup and OS (CentOS). After some delays they finally got it working and shipped it to the datacentre.
I chose 49pence/RapidSwitch for colocation in the UK.
I received an email from 49pence on how they wanted it set up, and Broadberry did this as well, which was good.
Unfortunately, I got the email from RapidSwitch saying it had been received and installed before I received the email with the server admin and password info.
Broadberry set up a very weak password, a bit of an oversight this,
as within 12 hours of it being installed it was hacked!
Being a UK bank holiday, we were unable to do anything until today,
and now we are having to retrieve the server to reinstall everything and start again!
I hope the companies involved will be cooperative so we can get this up and running ASAP.
My servers at Coreix end later this month.
A lesson to be learned for us, and I hope for anyone reading:
next time we will have a checklist to make sure nothing gets overlooked!
Luckily there was no data on the server, so no harm done other than cost and time.
I had a strange error this morning: httpd was running, but nothing was loading. All the other services worked fine, but I checked the error log and couldn't find anything. I restarted httpd and it's running fine now.
Quote:
[Sat Feb 10 11:48:01 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 11:48:01 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 13:06:02 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 13:06:03 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 13:06:03 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 13:06:03 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 20:42:26 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 20:42:28 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 20:42:28 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 20:42:28 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
Looks just like normal operations... I checked the access log and nothing looked out of the ordinary either.
Anyway the only suspicious thing I saw was the daily scan by spammers to see if I had anything exploitable.
Quote:
[Sat Feb 10 00:16:32 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/a1b2c3d4e5f6g7h8i9/nonexistentfile.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adserver/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpAdsNew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpadsnew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/Ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/drupal/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/community/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogtest/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2evo/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/wordpress/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpgroupware/xmlrpc.php
I have nothing to be exploited so I'm thinking that wasn't the cause either.
I checked user_beancounters and there are also 0 fail counts.
I'm working on a script to help users get routed to the nearest, fastest server for the best ping. I'm in two datacenters, one on the east coast and one on the west coast of the US.
I've looked at some of the geo lookup programs based on IP, but they either seem inaccurate or expensive, and just downright difficult to use.
I found out that some geo load balancers use the connection speed to figure out the best route, so I'm trying to think of a way of timing the user's connection from multiple server locations.
Has anyone here done that sort of thing before? Any suggestions on how to best do that?
Two completely different methods I've considered:
1. Putting two images on a web page and using JavaScript to time the loading of them.
2. Pinging the user's IP from each coast and seeing which is fastest. (Is there a lighter way than ping? A rough sketch of this method follows below.)
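For method 2, a minimal sketch of measuring the average round-trip time from each server; note that ICMP is blocked for some users, so a fallback would be needed (the visitor IP is passed as an argument):

Code:
#!/bin/bash
# rtt_probe.sh -- run on each coast's server; route the visitor to
# whichever server reports the lower average RTT
IP="$1"                     # the visitor's IP address
# 3 probes, 1-second timeout; prints the average RTT in milliseconds
ping -c 3 -W 1 "$IP" | awk -F'/' '/^rtt|^round-trip/ {print $5}'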
For over a week now I have had the following network issues:
- browser timing out (for me and visitors to my site)
- ftp connection issues
The server load is low so it's not server related.
Traceroute TO the server appears fine.
Traceroute FROM the server to users' IPs appears to have issues over the SingTel/Optus network.
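For what it's worth, a report-mode trace is handy evidence when escalating this kind of dispute; a sketch assuming mtr is installed on the server (the target IP is a placeholder for an affected user):

Code:
# 10 probe cycles, no DNS lookups, report format suitable for pasting into a ticket
mtr --report --no-dns -c 10 203.0.113.45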
My webhost says it's an issue for SingTel/Optus.
The SingTel/Optus engineer says: "Our testings point to a problem either within Cogent's network or on a peering link between Cogent and Singtel in LA.
I'd suggest that the owner of the domain (me!) approach his hosting provider and have them escalate to Cogent. We can't escalate to Cogent as we have no peering with them."
So I've been the meat in the sandwich for over a week with no sign of a fix.
My options appear to be to either move the VPS away from the webhost and host it locally (Australia) or to somehow wait for someone to step up, take responsibility and get this resolved.
My heart says wait as it's not *my* responsibility but it's costing me financially and professionally.
Anyone else experiencing similar/same issues from the Asia Pacific region to the US?
I have root access to a server. Is it possible to create a cron job that would restart my FTP and HTTP servers every so often, like once a week or something? If so, how would I do it?
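A sketch of what the root crontab entries might look like on a Red Hat-style box; the service names (httpd, proftpd) are assumptions that vary by setup:

Code:
# edit with: crontab -e (as root)
# restart web and FTP services every Sunday at 4:00 AM
0 4 * * 0 /sbin/service httpd restart
5 4 * * 0 /sbin/service proftpd restart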
Hey everyone, my friend's dad is looking for a web host that will allow his cron jobs to run every second. Most hosts apparently don't allow cron jobs closer than 5 seconds apart.
How often a host can run cron jobs isn't really advertised on their sites, so I'm having a bit of trouble finding a host. I've resorted to just sending emails to sales addresses asking about it.
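Worth noting that standard cron only has one-minute granularity anyway, so the usual trick is a single cron entry that launches a 60-second loop; a rough sketch with a placeholder command:

Code:
#!/bin/bash
# every_second.sh -- started once a minute by cron: * * * * * /home/user/every_second.sh
for i in $(seq 1 60); do
    /home/user/task.sh      # placeholder for the actual job
    sleep 1
done

If the task itself takes longer than a second the loop will drift, so anything time-critical needs a real daemon instead.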
My VPS isn't rebooting by itself when it goes down. Does anyone have a program/script that monitors the heartbeat of the server? When it goes down, the program would automatically reboot the system. I know there's such a script out there, but I forgot what it's called.
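Whatever that script was, it has to run somewhere other than the VPS itself, since a dead VPS can't reboot itself. A minimal sketch of an external watchdog run from another machine's cron, where the actual reboot hook is a placeholder for whatever the provider or host node exposes:

Code:
#!/bin/bash
# watchdog.sh -- run every 5 minutes from cron on a DIFFERENT machine
VPS_IP="203.0.113.10"                       # placeholder address
if ! ping -c 3 -W 2 "$VPS_IP" > /dev/null 2>&1; then
    # hypothetical reboot hook: replace with your provider's API call,
    # or on an OpenVZ host node something like: vzctl restart <ctid>
    /usr/local/bin/reboot_vps.sh
fi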
I have heard mixed reports and can't find any good info. Personally I've run a cron job for up to 6 minutes, but as my best method was sending myself emails through PHP, it's not exactly a highly accurate testing method.
On the same note, what would happen if one cron job is running a PHP script for over 10 minutes, and then another cron job starts on the same script before the first one has finished?
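Cron itself won't prevent the overlap; both instances simply run at once and can trample each other's data. The usual guard is a lock file; a sketch using flock, with placeholder paths:

Code:
#!/bin/bash
# guarded.sh -- skip this run if the previous one is still going
(
    flock -n 9 || { echo "previous run still active, skipping"; exit 1; }
    /usr/bin/php /home/user/longjob.php
) 9>/var/lock/longjob.lock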
I have my own server. I created a PHP file for adding cron jobs. I checked /etc/cron.deny and /etc/cron.allow; both of them are empty, so no problem there. I execute the PHP script but nothing happens: I check with crontab -u user -l and it tells me there are no cron jobs for that user. When I log in as root over SSH and try the same command, it works fine. I don't understand how to fix this.
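One thing to check is which user the PHP script actually runs as -- invoked from the web it is usually the web server user, so the entry may be landing in (or being refused from) the wrong crontab. For reference, a shell sketch of appending a job to a specific user's crontab, which is what the PHP script needs to end up doing with sufficient privileges (the schedule and path are placeholders):

Code:
# run as root: append a job to someuser's crontab without clobbering existing entries
(crontab -u someuser -l 2>/dev/null; echo "*/10 * * * * /usr/bin/php /home/someuser/job.php") | crontab -u someuser -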
I have a bit of a strange problem. I have an rsync command set up in the server's crontab, and the cron log shows it ran the command, but the files don't copy to the backup server. If I take the rsync syntax and run it manually, all the files copy across with no errors, but I can't figure out why the cron job doesn't work properly.
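The usual difference between a manual run and a cron run is the environment: cron has a minimal PATH, no ssh-agent, and a different working directory. A sketch of a crontab entry with absolute paths, an explicit key, and logged output so the failure becomes visible; every path here is a placeholder:

Code:
# capture stdout/stderr so the cron run can be compared with the manual run
0 3 * * * /usr/bin/rsync -avz -e "/usr/bin/ssh -i /root/.ssh/backup_key" /data/ backup@203.0.113.20:/backups/ >> /var/log/rsync_cron.log 2>&1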
I've just noticed that many people may have a free remote cron facility without realising it.
If you have any domains registered with Godaddy, you get free web space that includes a cron facility. It only runs every half hour, but you could set six jobs at 5 min intervals to get an effective 5-minute poll, which is good enough for many purposes. You could use it to check uptime on another site, for example. Has anyone tried this?
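In standard crontab notation, the same staggering trick looks like this: six half-hourly jobs offset by five minutes each (the URL is a placeholder for whatever the job polls; GoDaddy's panel may express it differently):

Code:
0,30 * * * * /usr/bin/curl -s http://example.com/poll.php
5,35 * * * * /usr/bin/curl -s http://example.com/poll.php
10,40 * * * * /usr/bin/curl -s http://example.com/poll.php
15,45 * * * * /usr/bin/curl -s http://example.com/poll.php
20,50 * * * * /usr/bin/curl -s http://example.com/poll.php
25,55 * * * * /usr/bin/curl -s http://example.com/poll.php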
We are running cPanel on one of our servers. Several cron jobs were deleted from the cron panel of one account. I have no idea of the paths to re-enter these jobs. Is there a log file on the server that will show cron job history from previous runs so I can recover the proper paths?
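On most cPanel/CentOS boxes cron logs every command it executes to /var/log/cron, so the old command lines can often be grepped back out (the account name is a placeholder):

Code:
# every command cron ran for that account, including rotated logs
grep "someuser" /var/log/cron /var/log/cron.* 2>/dev/null | less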
I want to set up a cron job to make daily back-ups of my database, but by turning my site off first.
This is how I envisage it to work:
1. Rename '.htaccess' (in the public_html folder for the site) to '.htaccess-open'.
2. Rename '.htaccess-closed' to '.htaccess'. // this closes the site down so no-one can write to/access the DB (they are basically shown a 'site down for maintenance' page - I already have the code for this)
3. mysqldump --opt (DB_NAME) -u (DB_USERNAME) -p(DB_PASSWORD) > /path/to/dbbackup-$(date +%m%d%Y).sql // this backs up the database
4. Wait for 3 to finish.
5. Rename '.htaccess' to '.htaccess-closed'.
6. Rename '.htaccess-open' to '.htaccess'. // this opens the site back up

Is this easy enough to do? Anyone got any tips/pointers?
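A sketch of the whole sequence as a single cron-driven script; the site path and DB credentials are placeholders, and step 4 comes free because the shell waits for mysqldump to finish before running the next line:

Code:
#!/bin/bash
# db_backup.sh -- close the site, dump the database, reopen the site
SITE=/home/user/public_html                       # placeholder path
mv "$SITE/.htaccess" "$SITE/.htaccess-open"       # step 1
mv "$SITE/.htaccess-closed" "$SITE/.htaccess"     # step 2: maintenance page now live
mysqldump --opt DB_NAME -u DB_USERNAME -pDB_PASSWORD \
    > /path/to/dbbackup-$(date +%m%d%Y).sql       # steps 3+4: blocks until done
mv "$SITE/.htaccess" "$SITE/.htaccess-closed"     # step 5
mv "$SITE/.htaccess-open" "$SITE/.htaccess"       # step 6: site open again

A matching crontab entry would be something like 0 2 * * * /home/user/db_backup.sh.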
I've got limited knowledge in scripting so I've come to the interweb for help. Google hasn't answered any of my queries so the trusty WHT is next.
I'm trying to create a cron script that will email my clients once per month with space and bandwidth usage reminders. I'd prefer not to have to set up crons in each individual account, but rather email everyone from one place, with tokens like |name|, |bandwidth| and |space| filled in from each client's usage and package allowance.
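I can't speak to pulling the usage numbers out of the control panel, but the token side might look something like this sketch; the pipe-delimited usage file, its format, and all paths are assumptions for illustration:

Code:
#!/bin/bash
# notify.sh -- fill tokens in a template and mail each client
# usage.txt lines are assumed to look like: email|name|bandwidth|space
while IFS='|' read -r email name bw space; do
    sed -e "s/|name|/$name/g" \
        -e "s/|bandwidth|/$bw/g" \
        -e "s/|space|/$space/g" /home/user/template.txt \
    | mail -s "Monthly usage reminder" "$email"
done < /home/user/usage.txt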
I've been reading through tutorials for setting up cron commands via cPanel, but everything I have tried does not work. What I need to do is simple - I just want to run a PHP file on my server once every 15 minutes.
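For reference, the raw crontab line for a 15-minute schedule looks like this; the PHP binary location and the script path are placeholders that vary by server:

Code:
*/15 * * * * /usr/bin/php /home/username/public_html/script.php > /dev/null 2>&1

In cPanel's cron screen the five schedule fields (*/15, *, *, *, *) go in the minute/hour/day/month/weekday boxes and the rest is the command.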