CRON Job Timing Out
Sep 19, 2007
I have had a problem for some time now regarding my cron jobs. I am trying to download a large amount of data from eBay (through their API, totally legal and aboveboard) using PHP, but my cron job times out.
I have tried changing the timeout setting, but then the script exceeds the maximum file size.
So, my question: is there any way to have a script run as a cron job and, when it is complete, call another script?
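One common pattern is to have cron start a small shell wrapper rather than the PHP script directly; the wrapper runs the download and only launches the second script once the first has exited cleanly. A minimal sketch, with placeholder script names and paths (fetch_ebay.php, process_ebay.php, /usr/bin/php), so adjust to your setup:
Code:
#!/bin/sh
# ebay_chain.sh - run the download, then the follow-up script only if the download succeeded.
# Crontab entry to launch the wrapper nightly at 02:00:
#   0 2 * * * /home/user/bin/ebay_chain.sh
/usr/bin/php -d max_execution_time=0 /home/user/scripts/fetch_ebay.php \
    && /usr/bin/php /home/user/scripts/process_ebay.php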
View 4 Replies
Mar 24, 2009
I have a PHP CLI script that I want to run continuously, without ever stopping. However, I noticed that it stops executing on its own after a while.
Is there a way to keep a script running forever without timing out, or to daemonize it?
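A simple way to approximate a daemon without extra tooling is a shell wrapper that restarts the script whenever it exits, launched once with nohup so it keeps running after you log out. A rough sketch with placeholder paths; note that the PHP CLI has no execution-time limit by default, so the stops are more likely crashes or fatal errors than timeouts:
Code:
#!/bin/sh
# worker_loop.sh - restart the PHP worker whenever it exits, so it is effectively always running.
while true; do
    /usr/bin/php /home/user/worker.php >> /home/user/worker.log 2>&1
    sleep 5   # short pause so a script that dies instantly cannot spin the CPU
done

# Start it once, detached from the terminal, so it survives logout:
#   nohup /home/user/worker_loop.sh >/dev/null 2>&1 &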
View 8 Replies
Mar 10, 2008
I'm not sure why my brand new dual-processor quad-core Xeon 2.5GHz Harpertown server times out when the load is under 0.5.
It will be running ultra fast and then suddenly I can't get into SSH, WHM, my websites or anything. When I ping it, there is no response. Is it because it restarted itself?
View 4 Replies
Feb 10, 2007
I have recently tweaked my server (AMD 3000+, 1GB RAM, 10Mbps port) by configuring httpd.conf, my.cnf and php.ini.
I am pleased to say that the server is now responding well and the load is always below 1.00.
However, sometimes a user will experience a time-out in their browser. Once they refresh, the server reacts as it should and carries out the command asked of it.
I could use some help tweaking Apache so that timeouts do not occur. Here are the changes I have made:
php.ini
Code:
[MySQL]
; Allow or prevent persistent links.
mysql.allow_persistent = Off
httpd.conf
Code:
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
MinSpareServers 16
MaxSpareServers 32
StartServers 20
MaxClients 150
MaxRequestsPerChild 5
and finally here is my.cnf file
Code:
[mysqld]
skip-locking
skip-innodb
max_connections = 400
key_buffer = 200M
myisam_sort_buffer_size = 64M
join_buffer=2M
read_buffer_size = 5M
sort_buffer_size = 5M
read_rnd_buffer_size = 5M
table_cache = 1536
thread_cache_size = 128
interactive_timeout=100
wait_timeout=10
connect_timeout=10
tmp_table_size = 48M
max_allowed_packet = 16M
max_connect_errors = 10
query_cache_limit = 3M
query_cache_size = 64M
query_cache_type = 1
thread_concurrency=4
log-slow-queries = /var/log/mysql/mysql-slow.log
old-passwords = 1
[mysqld_safe]
open_files_limit = 8192
[mysqldump]
quick
max_allowed_packet = 16M
[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M
View 6 Replies
Aug 28, 2007
I decided to merge two old dedicated servers into one colocated machine with better specs; the old machines have a combined total of about 280 accounts.
I purchased a Broadberry server and requested a specific setup and OS (CentOS). After some delays they finally got it working and shipped it to the datacentre.
I chose 49pence/RapidSwitch for colocation in the UK.
I received an email from 49pence on how they wanted it set up, and Broadberry did this as well, which was good.
Unfortunately, I got the email from RapidSwitch saying the server had been received and installed before I received the email with the server admin and password info.
Broadberry had set up a very weak password, which was a bit of an oversight,
as within 12 hours of it being installed it was hacked!
Being a UK bank holiday, we were unable to do anything until today,
and now we are having to retrieve the server to reinstall everything and start again!
I hope the companies involved will be cooperative so we can get this up and running ASAP.
My servers at Coreix end later this month.
A lesson to be learned for us, and I hope for anyone reading:
next time we will have a checklist to make sure nothing gets overlooked!
Luckily there was no data on the server, so no harm done other than cost and time.
View 2 Replies
Feb 10, 2007
I had a strange error this morning: httpd was running but nothing was loading. All the other services worked fine. I checked the error log and couldn't find anything. I restarted httpd and it's running fine now.
Quote:
[Sat Feb 10 11:48:01 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 11:48:01 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 13:06:02 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 13:06:03 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 13:06:03 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 13:06:03 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Sat Feb 10 20:42:26 2007] [notice] caught SIGTERM, shutting down
[Sat Feb 10 20:42:28 2007] [notice] Apache configured -- resuming normal operations
[Sat Feb 10 20:42:28 2007] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Sat Feb 10 20:42:28 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
That looks just like normal operations... I checked the access log and nothing looked out of the ordinary either.
Anyway, the only suspicious thing I saw was the daily scan by spammers to see whether I had anything exploitable.
Quote:
[Sat Feb 10 00:16:32 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/a1b2c3d4e5f6g7h8i9/nonexistentfile.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/adserver/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpAdsNew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpadsnew/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/Ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/ads/adxmlrpc.php
[Sat Feb 10 00:16:33 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlrpc/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/drupal/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/community/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogs/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blog/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:34 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/blogtest/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/b2evo/xmlsrv/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/wordpress/xmlrpc.php
[Sat Feb 10 00:16:35 2007] [error] [client 69.13.76.82] File does not exist: /var/www/html/phpgroupware/xmlrpc.php
I have nothing exploitable, so I'm thinking that wasn't the cause either.
I checked user_beancounters and there are also zero fail counts.
Quote:
Version: 2.5
uid resource held maxheld barrier limit failcnt
509: kmemsize 4679572 9150217 50000000 50000000 0
lockedpages 0 0 256 256 0
privvmpages 51822 84271 262140 524280 0
shmpages 10913 10929 21504 21504 0
dummy 0 0 0 0 0
numproc 52 91 400 400 0
physpages 14273 24587 0 2147483647 0
vmguarpages 0 0 131070 2147483647 0
oomguarpages 14912 33040 26112 2147483647 0
numtcpsock 16 146 360 360 0
numflock 1 6 188 206 0
numpty 2 2 16 16 0
numsiginfo 0 25 256 256 0
tcpsndbuf 154284 371176 1720320 2703360 0
tcprcvbuf 197308 539932 1720320 2703360 0
othersockbuf 13416 239892 1126080 2097152 0
dgramrcvbuf 0 2800 262144 262144 0
numothersock 13 27 360 360 0
dcachesize 0 0 2273280 2416640 0
numfile 1199 2022 5820 5820 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
dummy 0 0 0 0 0
numiptent 89 89 128 128 0
View 2 Replies
May 6, 2008
I'm working on a script to help users get routed to the nearest, fastest server for the best ping. I'm in two datacenters, one on the east coast and one on the west coast of the US.
I've looked at some of the geo lookup programs based on IP, but they either seem inaccurate or expensive, and just downright difficult to use.
I found out that some geo load balancers use the connection speed to figure out the best route. So I'm trying to think of a way of timing the user's connection from multiple server locations.
Has anyone here done that sort of thing before? Any suggestions on how best to do it?
Two completely different methods I've considered:
1. putting two images on a web page and using JavaScript to time how long they take to load.
2. pinging the user's IP from each coast and seeing which is fastest (is there a lighter way than ping?) - a rough sketch of this is below.
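For option 2, a crude starting point is to run the same probe from each coast and compare round-trip times; whatever does the routing can then consume the numbers. A minimal shell sketch, assuming the user's IP is passed as the first argument (the ping flags shown are the common Linux iputils ones):
Code:
#!/bin/sh
# probe_rtt.sh <user_ip> - print the average round-trip time to a user, in milliseconds.
# Run this on both the east-coast and the west-coast box and compare the two numbers.
IP="$1"
# 3 echo requests, wait at most 2 seconds for each reply; the 5th '/'-separated field
# of the summary line is the average RTT.
ping -c 3 -W 2 "$IP" | awk -F'/' '/^rtt|^round-trip/ { print $5 }'
If ICMP is filtered for some users, timing a plain TCP connect (for example with curl's %{time_connect} write-out variable) is a lighter alternative to loading a full page.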
View 5 Replies
Jul 20, 2009
There is a behaviour with my server's FTP when uploading a whole directory with many files in many sub-directories.
Very often the server disconnects itself while files are actively being uploaded, and the log simply says 'timeout'.
It is as if a file gets 'stuck' half way, the FTP server considers the connection idle, and it disconnects you with a 'timeout' before you reconnect.
But I have no problem uploading a single 200MB file to the server via FTP, so I assume keep-alive itself is not the issue.
So what is this behaviour and how do I solve it?
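If the server happens to run vsftpd (an assumption; other FTP daemons have equivalent options under different names), the usual suspects are the idle and data-connection timeouts, which can be raised in the daemon's config. A sketch, with example values rather than recommendations:
Code:
# /etc/vsftpd/vsftpd.conf - timeout-related settings
idle_session_timeout=600
# seconds a control connection may sit idle before being dropped

data_connection_timeout=300
# seconds a stalled data transfer may hang before being dropped

# Restart the daemon afterwards, e.g.:
#   /sbin/service vsftpd restart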
View 10 Replies
Dec 15, 2013
This is my first time configuring Apache (2.2 with mod_jk). When running locally (192.168.1.x) there is no problem.
When connecting through a DynDNS hostname via my U-verse router, I get several JS and CSS 'not found' errors and, of course, a badly rendered site.
View 5 Replies
Aug 11, 2008
I have a VPS located in LA, USA.
For over a week now I have had the following network issues:
- browser timing out (for me and visitors to my site)
- ftp connection issues
The server load is low so it's not server related.
Traceroute TO the server appears fine.
Traceroute FROM the server to users IP's appears to have issues over the SingTel/Optus network.
My webhost says it's an issue for SingTel/Optus.
The SingTel/Optus engineer says:
"Our testings point to a problem either within Cogent's network or on a peering link between Cogent and Singtel in LA.
I'd suggest that the owner of the domain (me!) approach his hosting provider and have them escalate to Cogent. We can't escalate to Cogent as we have no peering with them."
So I've been the meat in the sandwich for over a week with no sign of a fix.
My options appear to be either to move the VPS away from the web host and host it locally (in Australia), or to somehow wait for someone to step up, take responsibility and get this resolved.
My heart says wait, as it's not *my* responsibility, but it's costing me financially and professionally.
Is anyone else experiencing similar or the same issues from the Asia Pacific region to the US?
View 11 Replies
Feb 28, 2007
I want to run the command "./adfsas.sh" every 4 hours. Can someone tell me what command I can use via SSH to set up this cron job?
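A minimal sketch, assuming the script lives at /root/adfsas.sh (a placeholder, use the real absolute path): open the crontab with `crontab -e` and add an entry that fires at the top of every fourth hour.
Code:
# Open the current user's crontab for editing:
#   crontab -e
# Then add (logging output so failures are visible):
0 */4 * * * /root/adfsas.sh >> /var/log/adfsas.log 2>&1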
View 8 Replies
Oct 4, 2008
PHP Code:
* */1 * * *
But I don't understand how it differs from
PHP Code:
* * * * *
as a time specification. Why is there a need for the /1?
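For reference, the five crontab fields are minute, hour, day of month, month and day of week, and `*/n` means "every n" of that field. `*/1` in the hour field means "every 1 hour", which is the same as `*`, so both lines above run every minute; the step form only does something useful with values greater than 1. The script path below is a placeholder:
Code:
# Fields: minute  hour  day-of-month  month  day-of-week  command
# Every minute:
* * * * * /path/to/job.sh
# Also every minute - */1 in the hour field is identical to *:
* */1 * * * /path/to/job.sh
# Minute 0 of every 4th hour - here the step value actually matters:
0 */4 * * * /path/to/job.sh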
View 7 Replies
Sep 17, 2007
I have root access to a server. Is it possible to create a cron job that restarts my FTP and HTTP servers every so often, like once a week or something? If so, how would I do it?
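A rough sketch, assuming a CentOS/cPanel-style box where services are controlled with the `service` command; the service names (httpd, pure-ftpd) are assumptions, so check what actually lives under /etc/init.d before using them. Add the entries to root's crontab with `crontab -e`:
Code:
# Restart the web and FTP services every Sunday morning (root's crontab)
30 4 * * 0 /sbin/service httpd restart >> /var/log/weekly-restart.log 2>&1
35 4 * * 0 /sbin/service pure-ftpd restart >> /var/log/weekly-restart.log 2>&1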
View 6 Replies
Feb 24, 2007
In order to back up the database automatically I want to use a cron job, so I have set the cron job to run at 00:00.
Suppose my details are:
db name : db
db user : zode
db pass : 123
The command I use is the following:
PHP Code:
mysqldump -u zode -p123 of -K -c -f --compatible=mysql40 --default-character-set=utf8 db > backup/db_`date +%d%m%y`.sql
After the scheduled time I look in the backup directory to see whether the dump is there,
but there is nothing in it.
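Two things commonly break this kind of entry when it is placed directly in a crontab: `%` is special in crontab lines (everything after an unescaped `%` is treated as input to the command), and a relative path like `backup/` resolves against the cron job's working directory, not your shell's. The stray `of` before the option list also looks like a typo. A corrected sketch, assuming the backups should land in /home/zode/backup (a placeholder path):
Code:
# Crontab entry - run at 00:00 every day.
# % is escaped as \% and all paths are absolute.
0 0 * * * /usr/bin/mysqldump -u zode -p123 -K -c -f --compatible=mysql40 --default-character-set=utf8 db > /home/zode/backup/db_`date +\%d\%m\%y`.sql 2>> /home/zode/backup/dump-errors.log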
View 3 Replies
Jun 18, 2007
I run a CentOS 4.x server on the latest kernel and I've got a problem with cron job reporting.
The cron jobs themselves are working fine, but I keep getting this message on the hour, every hour:
Quote:
Originally Posted by Email from the Cron Daemon
Not a directory: /etc/cron.hourly
The folder /etc/cron.hourly DOES exist, and I cannot work out what could be causing this.
Does anyone have any ideas what could be causing it?
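A few quick checks that usually narrow this down: confirm that /etc/cron.hourly really is a directory rather than a plain file or a broken symlink, and look at the run-parts line in /etc/crontab in case the path there is mangled. A sketch of the commands (run as root):
Code:
# Is it actually a directory? A plain file or dangling symlink would explain "Not a directory".
ls -ld /etc/cron.hourly
file /etc/cron.hourly

# How does the system crontab try to run it? Look for typos or stray characters in the run-parts entry.
grep cron.hourly /etc/crontab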
View 2 Replies
Apr 5, 2009
Hey everyone, my friend's dad is looking for a web host that will allow his cron jobs to run every second. Most hosts apparently don't allow cron jobs closer than 5 seconds apart.
How often a host can run cron jobs isn't really advertised on their sites, so I'm having a bit of trouble finding a host. I've resorted to just sending emails to sales addresses asking about it.
Does anyone know how I can find a host like this?
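Worth noting that standard cron only has one-minute resolution, so "every second" in practice usually means a once-a-minute cron job whose script loops internally, or a long-running daemon. A common workaround sketch (the task path is a placeholder):
Code:
#!/bin/sh
# every_second.sh - started once a minute by cron, runs the task roughly once per second for that minute.
# Crontab entry:
#   * * * * * /home/user/every_second.sh
for i in $(seq 1 60); do
    /home/user/task.sh
    sleep 1
done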
View 7 Replies
Aug 20, 2007
My VPS isn't rebooting by itself when it goes down. Does anyone have a program or script that monitors the server's heartbeat, so that when it goes down the system is automatically rebooted? I know there's such a script out there, but I forgot what it's called.
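If the whole VPS goes down, only something outside it (the host node or an external monitor) can bring it back; a script inside the VPS can only catch the softer case where a service hangs. A minimal in-VPS watchdog sketch for that softer case, run from cron every few minutes (URL, paths and service name are placeholders):
Code:
#!/bin/sh
# httpd_watchdog.sh - restart Apache if the local site stops answering.
# Crontab entry:
#   */5 * * * * /root/httpd_watchdog.sh
if ! curl -s -o /dev/null --max-time 10 http://127.0.0.1/; then
    /sbin/service httpd restart
    echo "$(date): httpd restarted by watchdog" >> /var/log/httpd-watchdog.log
fi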
View 2 Replies
Jul 12, 2007
I want to execute the following command on the 15th of every month at 1AM:
echo > /usr/local/apache/logs/error_log
How do I accomplish this?
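A sketch of the crontab entry; add it with `crontab -e` as root, since that log is usually root-owned. The redirection works as-is because cron hands the line to the shell:
Code:
# minute hour day-of-month month day-of-week command
0 1 15 * * echo > /usr/local/apache/logs/error_log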
View 2 Replies
Feb 7, 2007
My server runs cPanel. I'd like to run the file http://domain.com/file.php at 00:00 every day, so I have set this cron job in cPanel:
Code:
0 0 * * * /usr/bin/ehpwget http://domain.com/file.php
but the cron job is not working; it gives:
Code:
/bin/sh: /usr/bin/ehpwget: No such file or directory
Can anyone please let me know how to run a PHP file with cron
(as a user or as root)?
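The error just means /usr/bin/ehpwget does not exist; it looks like a typo for wget. Two common approaches, depending on whether the script needs to run through the web server or can run from the PHP CLI; the binary and file paths below are assumptions, so check them with `which wget` and `which php` first:
Code:
# Option 1: fetch the page over HTTP and discard the output
0 0 * * * /usr/bin/wget -q -O /dev/null http://domain.com/file.php

# Option 2: run the script directly with the PHP CLI (path is an assumption)
0 0 * * * /usr/bin/php /home/username/public_html/file.php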
View 2 Replies
Jun 13, 2005
Does anyone know how I'd run a cron job at the beginning (first day) of every month?
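A sketch (the script path is a placeholder): set the day-of-month field to 1 and leave month and day-of-week as wildcards.
Code:
# At 00:00 on the 1st of every month
0 0 1 * * /path/to/monthly-job.sh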
View 2 Replies
May 23, 2007
Simply wondering: does cron time out?
I have heard mixed reports and can't find any good info. Personally I've run a cron job for up to 6 minutes, but as my best method was sending myself emails through PHP, it's not exactly a highly accurate testing method.
On the same note, what would happen if one cron job has been running a PHP script for over 10 minutes and another cron job starts on the same script before the first one has finished?
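Cron itself imposes no timeout: it starts the command and forgets about it, so any limit you hit comes from the script or from PHP settings, not from cron. Overlapping runs simply mean two copies of the script executing at once unless you prevent it; a common guard is a lock, for example with flock from util-linux (paths below are placeholders):
Code:
# Run every 10 minutes, but skip this run if the previous one still holds the lock
*/10 * * * * /usr/bin/flock -n /tmp/myscript.lock /usr/bin/php /home/user/myscript.php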
View 2 Replies
Sep 27, 2006
I have my own server. I created a PHP file for adding cron jobs. I checked /etc/cron.deny and /etc/cron.allow; both of them are empty, so that shouldn't be a problem. I execute the PHP script but nothing happens: when I check with `crontab -u user -l` it tells me there are no cron jobs for that user. When I log in as root over SSH and try the same command by hand, it works fine. I don't understand how to fix this.
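One common gotcha is that when PHP runs under the web server, the crontab calls execute as the web-server user (often nobody or apache), so either the entry lands in that user's crontab or the call fails silently. A hedged sketch of installing an entry non-interactively from a script, capturing any error output so the failure becomes visible (the entry itself is a placeholder):
Code:
#!/bin/sh
# add_cron.sh - append an entry to the invoking user's crontab and log any error.
# Note: when called from PHP via exec(), "the invoking user" is the web-server user, not your shell user.
NEW_ENTRY='*/15 * * * * /usr/bin/php /home/user/task.php'
( crontab -l 2>/dev/null; echo "$NEW_ENTRY" ) | crontab - 2>> /tmp/add_cron_error.log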
View 0 Replies
May 5, 2009
I have a bit of a strange problem. I have an rsync command set up in the server's crontab, and the cron log shows it ran the command, but the files don't copy to the backup server. If I take the rsync syntax and run it manually, all the files copy across with no errors, and I can't figure out why the cron job doesn't work properly.
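When a command works by hand but not from cron, the usual culprits are cron's minimal environment (short PATH, no ssh-agent, different HOME) and relative paths. A debugging sketch: use absolute paths everywhere, point rsync explicitly at the SSH key, and capture output so the real error shows up (host, key and directory names are placeholders):
Code:
# Crontab entry - absolute paths, explicit key, and full logging
0 3 * * * /usr/bin/rsync -az -e "/usr/bin/ssh -i /root/.ssh/backup_key" /var/www/ backupuser@backup.example.com:/backups/www/ >> /var/log/rsync-backup.log 2>&1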
View 8 Replies
Mar 30, 2009
I've just noticed that many people may have a free remote cron facility without realising it.
If you have any domains registered with GoDaddy, you get free web space that includes a cron facility. It only runs every half hour, but you could set up six jobs at 5-minute offsets to get an effective 5-minute poll (a sketch of the staggering is below), which is good enough for many purposes. You could use it to check uptime on another site, for example. Has anyone tried this?
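In ordinary crontab terms the staggering trick looks like this: six half-hourly jobs offset by five minutes each, which together touch the target every five minutes. GoDaddy's panel may present this as a form rather than a raw crontab, but the schedule is the same (the script is a placeholder for whatever check you run):
Code:
# Six half-hourly jobs, offset by 5 minutes, give an effective 5-minute poll
0,30 * * * * /path/to/check.sh
5,35 * * * * /path/to/check.sh
10,40 * * * * /path/to/check.sh
15,45 * * * * /path/to/check.sh
20,50 * * * * /path/to/check.sh
25,55 * * * * /path/to/check.sh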
View 11 Replies
Jul 4, 2009
We are running cPanel on one of our servers. Several cron jobs were deleted from the cron panel of one account, and I have no idea of the paths needed to re-enter these jobs. Is there a log file on the server that shows cron job history from previous runs, so I can recover the proper paths?
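On a CentOS/cPanel box the cron daemon normally logs every command it launches to /var/log/cron, so the old command lines can usually be grepped back out by username (older entries live in the rotated copies next to it; exact filenames depend on the logrotate setup):
Code:
# Show every command cron has run for the account (replace acctname with the cPanel username)
grep acctname /var/log/cron | grep CMD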
View 4 Replies
Jul 10, 2009
I want to set up a cron job to make daily back-ups of my database, but by turning my site off first.
This is how I envisage it to work:
1: rename '.htaccess' (in the public_html folder for the site) to .htaccess-open
2: rename '.htaccess-closed' to .htaccess
// this closes the site down so no-one can write/access the db (they are basically shown a 'site down for maintenance' page - I already have the code for this)
3: mysqldump --opt (DB_NAME) -u (DB_USERNAME) -p(DB_PASSWORD) > /path/to/dbbackup-$(date +%m%d%Y).sql
// this backs up the database
4: wait for 3 to finish
5: rename '.htaccess' to .htaccess-closed
6: rename '.htaccess-open' to .htaccess
// this opens the site back up
Is this easy enough to do? Anyone got any tips or pointers?
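It is straightforward to put the whole sequence in a single shell script that cron runs once a day; a sketch under the assumptions that the site lives in /home/user/public_html and the capitalised placeholders are filled in with your own values:
Code:
#!/bin/sh
# nightly_backup.sh - close the site, dump the database, reopen the site.
# Crontab entry:
#   30 3 * * * /home/user/bin/nightly_backup.sh
SITE=/home/user/public_html

# Steps 1-2: swap in the maintenance .htaccess
mv "$SITE/.htaccess" "$SITE/.htaccess-open"
mv "$SITE/.htaccess-closed" "$SITE/.htaccess"

# Steps 3-4: dump the database (the script naturally waits here until mysqldump finishes)
mysqldump --opt DB_NAME -u DB_USERNAME -pDB_PASSWORD > /home/user/backups/dbbackup-$(date +%m%d%Y).sql

# Steps 5-6: swap the live .htaccess back in
mv "$SITE/.htaccess" "$SITE/.htaccess-closed"
mv "$SITE/.htaccess-open" "$SITE/.htaccess"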
View 4 Replies
Aug 5, 2009
I've got limited knowledge of scripting, so I've come to the interweb for help. Google hasn't answered any of my queries, so the trusty WHT is next.
I'm trying to create a cron script that will email my clients once per month with space and bandwidth usage reminders. I'd prefer not to have to set up crons in each individual account, but rather to email everyone using tokens such as |name|, |bandwidth| and |space| used out of the space and bandwidth allowed by each client's package.
View 2 Replies
Jan 28, 2008
I've been reading through tutorials for setting up cron commands via cPanel, but nothing I have tried works. What I need to do is simple - I just want to run a PHP file on my server once every 15 minutes.
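For reference, the two usual forms of that entry: call the PHP CLI on the file directly, or fetch it over HTTP if the script relies on being run through the web server. cPanel's cron page takes the same five time fields plus the command; the binary and file paths below are assumptions, so adjust them to your account:
Code:
# Run the script with the PHP CLI every 15 minutes
*/15 * * * * /usr/bin/php /home/username/public_html/script.php

# Or, if it must run through the web server:
*/15 * * * * /usr/bin/wget -q -O /dev/null http://yourdomain.com/script.php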
View 1 Replies
Oct 28, 2008
Netstat & APF cron job ...
View 7 Replies