What I want to do: have a "node" somewhere serve static media files from a central server, but cache the files the first time they are hit, so subsequent requests to the "node" don't require fetching the file from the central server.
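A minimal sketch of the idea, assuming the node runs nginx as a caching reverse proxy (hostnames, paths and cache sizes are placeholders, not anything I've settled on):

Code:
# cache definition: where cached copies live, how big the cache may grow,
# and how long an unused file stays before being evicted
proxy_cache_path /var/cache/nginx/media levels=1:2 keys_zone=media:10m
                 max_size=10g inactive=30d;

server {
    listen      80;
    server_name node.example.com;

    location /media/ {
        proxy_pass http://central.example.com;    # central server holding the originals
        proxy_cache media;                        # first hit fetches and stores, later hits are local
        proxy_cache_valid 200 30d;
        add_header X-Cache-Status $upstream_cache_status;   # handy for checking HIT/MISS
    }
}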
I just installed MySQL server on a brand new dedicated server and am getting "Timeout error occurred trying to start MySQL Daemon". I have uninstalled and reinstalled it a few times and am still getting the same error.
I am getting the error (as listed in the title) on my server. This is a BRAND NEW FRESH install on a brand new server...
OS: Unix (Fedora)
Kernel Version: 2.6.18-1.2798.fc6 on an i686
Hardware Information: Brand new Dell SC1435 server
Software Version: MySQL 5
Control Panel: Nothing yet
Apache starts OK, but when I go to start MySQL I get the following error...
[root@server112]# /etc/rc.d/init.d/mysqld start
Timeout error occurred trying to start MySQL Daemon
Starting MySQL:  [FAILED]
Does anyone have any ideas? I did some pretty extensive searching and found a Web Hosting Talk article that basically describes the same error. I followed everything listed in that suggestion and it didn't change anything.
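A few diagnostics that seem worth running first (paths assume the stock Fedora RPM layout, so they may differ on another box):

Code:
tail -n 50 /var/log/mysqld.log        # the error log usually says why the daemon gave up
ls -ld /var/lib/mysql                 # the data directory should be owned by mysql:mysql
grep "$(hostname)" /etc/hosts         # a hostname that doesn't resolve can hang startup
/usr/bin/mysqld_safe &                # starting by hand shows errors on the console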
I got this while trying to force the urchin log processing:
Code:
WARNING: (7024-323-2398) Could not delete backup file - check permissions.
DETAIL: /home/virtual/mysite.com/var/log/httpd/access_log : Permission denied

All permissions are OK. I don't know what is happening.
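Since deleting a file actually needs write permission on the containing directory rather than on the file itself, these are the checks that seem relevant (the path is taken from the error above):

Code:
ls -ld /home/virtual/mysite.com/var/log/httpd       # who may write (and so delete) in this directory
ls -l  /home/virtual/mysite.com/var/log/httpd/      # ownership of access_log and any backup files
ps aux | grep -i urchin                              # which user the urchin run actually executes as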
One of my users is receiving way too many Mailer Daemon messages and his mailbox is full. I've had this problem from time to time, and I am trying to figure out how to block mailer-daemon messages for a specific domain so that they do not even get onto the mail queue... much like when you set a default address to ":fail:". So I came up with this:
refuse_md1:
  deny message     = The original message did not come from this site.
       condition   = ${if eq{$sender_address}{}{yes}{no}}
       condition   = ${if eq{$local_part}{userdomain.com}{yes}{no}}
       log_message = Refused a bounce message for userdomain.com
However, this doesn't help. The emails still end up on the mail queue, and when I look at the Exim log I see the usual error message saying that the email was blocked because the account has run out of space.
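The only thing I can see that might be off is the second condition: $local_part is the part of the address before the @, so comparing it against a domain never matches. A sketch with $domain instead, everything else kept as above (still assuming the ACL is called at RCPT time; userdomain.com stands in for the real domain):

Code:
refuse_md1:
  deny message     = The original message did not come from this site.
       condition   = ${if eq{$sender_address}{}{yes}{no}}
       condition   = ${if eq{$domain}{userdomain.com}{yes}{no}}
       log_message = Refused a bounce message for userdomain.com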
I have root access to a server. Is it possible to create a cron job that would restart my FTP and HTTP servers every so often, like once a week or something? If so, how would I do it?
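Something like this is what I have in mind, written as root's crontab (service names and times are just examples for a RHEL/Fedora-style box):

Code:
# edit with: crontab -e  (as root)
# restart the web and FTP servers every Sunday at 04:30 / 04:35
30 4 * * 0 /sbin/service httpd restart  >> /var/log/weekly-restart.log 2>&1
35 4 * * 0 /sbin/service vsftpd restart >> /var/log/weekly-restart.log 2>&1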
I have had a problem for some time now regarding my cron jobs. I am trying to download a large amount of data from eBay (through their API, totally legal and aboveboard) using PHP, but my cron job times out.
I have tried resetting the timeout variable, but then it exceeds the maximum file size. So, my question: is there any way to have a script run as a cron job and, when it is complete, call another script?
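What I'm picturing is a small wrapper so the second script only starts once the first has finished cleanly, and cron only ever calls the wrapper (script names and paths are placeholders for the real PHP jobs):

Code:
#!/bin/bash
# wrapper.sh - run the two halves of the download back to back;
# the && means part2 only starts if part1 exited without error
/usr/bin/php /home/user/ebay_fetch_part1.php && \
    /usr/bin/php /home/user/ebay_fetch_part2.php

# crontab entry that calls it, e.g. nightly at 02:00:
# 0 2 * * * /home/user/wrapper.sh >> /home/user/ebay_fetch.log 2>&1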
Hey everyone, my friend's dad is looking for a web host that will allow his cron jobs to run every second. Most hosts apparently don't allow cron jobs faster than 5 seconds apart.
How often a host can run cron jobs isn't really advertised on their sites, so I'm having a bit of trouble finding a host. I've resorted to just sending emails to sales addresses asking about it.
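The only workaround I've come across so far is to accept the one-minute granularity that standard cron offers and loop inside the script, roughly like this sketch (paths are placeholders):

Code:
#!/bin/bash
# started once a minute from cron:  * * * * * /path/to/every_second.sh
# then fires the real task roughly once per second for the rest of the minute
for i in $(seq 1 60); do
    /path/to/real_task.sh &    # placeholder for whatever actually has to run
    sleep 1
done
wait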
My VPS isn't rebooting by itself when it goes down. Does anyone have a program/script that monitors the heartbeat of the server? Like, when it goes down, the program would automatically reboot the system. I know there's such a script out there, but I forgot what it's called.
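Roughly the kind of thing I mean, except it would have to run from a second machine, since a script on the VPS can't do anything once the VPS itself is down (the reboot command is a placeholder for whatever the provider's panel or API offers):

Code:
#!/bin/bash
# heartbeat check - run every few minutes from cron on a different machine
HOST=vps.example.com                           # placeholder hostname
if ! ping -c 3 -W 5 "$HOST" > /dev/null 2>&1; then
    echo "$(date): $HOST not responding" >> /var/log/vps-watchdog.log
    /usr/local/bin/provider-reboot-vps         # placeholder: provider API call / panel CLI
fi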
I have heard mixed reports and can't find any good info. Personally, I've run a cron job for up to 6 minutes, but as my best method was sending myself emails through PHP, it's not exactly a highly accurate testing method.
On the same note, what would happen if one cron job has been running a PHP script for over 10 minutes and another cron job starts on the same script before the first one has finished?
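The overlap case is exactly what a lock guards against; a sketch using flock(1), assuming it's installed on the box (the script path and schedule are placeholders):

Code:
# flock -n refuses to start a second copy while the first still holds the lock
*/10 * * * * /usr/bin/flock -n /tmp/long_job.lock /usr/bin/php /home/user/long_job.php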
I have my own server. I created a PHP file for adding cron jobs. I checked /etc/cron.deny and /etc/cron.allow; both of them are empty, so no problem there. I execute the PHP script but nothing happens: when I check with crontab -u user -l, it tells me there are no cron jobs for that user. When I log in as root over SSH and try the same command, it works fine. I don't understand how to fix this.
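For reference, the shell equivalent of what the PHP script is trying to do is roughly this (user name and paths are placeholders); as far as I know only root may use crontab -u, so if the script runs as the web server user that may be the whole problem:

Code:
echo "*/5 * * * * /usr/bin/php /home/user/job.php" > /tmp/user.cron
crontab -u user /tmp/user.cron     # install the jobs for "user" (root only)
crontab -u user -l                 # verify they landed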
I have a bit of a strange problem. I have an rsync command set up in the server's crontab, and the cron log shows it ran the command, but the files don't copy to the backup server. If I take the rsync syntax and run it manually, all the files copy across with no errors, but I can't figure out why the cron job doesn't work properly.
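In case the difference is just cron's stripped-down environment, this is the sort of entry I mean, with absolute paths everywhere and the output captured so rsync's own errors become visible (hosts, paths and the key file are placeholders):

Code:
0 3 * * * /usr/bin/rsync -az -e "/usr/bin/ssh -i /root/.ssh/backup_key" /var/www/ backup.example.com:/backups/www/ >> /var/log/rsync-backup.log 2>&1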
I've just noticed that many people may have a free remote cron facility without realising it.
If you have any domains registered with GoDaddy, you get free web space that includes a cron facility. Each job can only run every half hour, but you could set six jobs at 5-minute offsets to get an effective 5-minute poll, which is good enough for many purposes. You could use it to check uptime on another site, for example. Has anyone tried this?
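Written out as plain crontab lines just to show the offsets (the panel itself may want them entered differently, and the URL is a placeholder), six half-hourly jobs staggered 5 minutes apart give one poll every 5 minutes:

Code:
0,30  * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping
5,35  * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping
10,40 * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping
15,45 * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping
20,50 * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping
25,55 * * * * /usr/bin/wget -q -O /dev/null http://othersite.example.com/ping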
We are running cPanel on one of our servers. Several cron jobs were deleted from the cron panel of one account. I have no idea of the paths to re-enter these jobs. Is there a log file on the server that will show cron job history from previous runs so I can recover the proper paths?
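If it helps, on a RHEL-type box crond logs every run to /var/log/cron, so something like this should show the old command lines (the account name is a placeholder; rotated copies may be /var/log/cron.1, cron.2 and so on):

Code:
grep CMD /var/log/cron* | grep acctname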