I want to check the error logs in HyperVM on my friend's VPS. I haven't done this before, so I'd like to know where I can view the error logs.
I suspect something went wrong during the Kloxo installation, because I can't find an option to add an IP address in Kloxo; there may be some other error that is preventing me from adding an IP.
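A hedged sketch for finding those logs — the directories below are the usual lxlabs locations, but your install may differ, so the loop checks before reading:

```shell
# Look for HyperVM/Kloxo log directories (typical lxlabs layout; adjust if yours differs)
for d in /usr/local/lxlabs/hypervm/log /usr/local/lxlabs/kloxo/log; do
    if [ -d "$d" ]; then
        echo "== $d =="
        ls -lt "$d" | head          # newest log files first
    else
        echo "$d not found on this VPS"
    fi
done
# OS-level errors generally land in syslog too:
if [ -f /var/log/messages ]; then tail -n 50 /var/log/messages; fi
```

Once you know which file is current, `tail -f` it while you retry the failing "add IP" action so the error appears live.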
I have remote uptime tracking on my server, and I am regularly getting text messages and emails about my server going down. This in itself isn't odd; however, every time, the problem is cPanel.
I log into my PC and find an email from my server containing the following:
Quote:
cpsrvd failed @ Thu Aug 7 19:11:17 2008. A restart was attempted automagically.
Failure Reason: Unable to connect to port 2086
What's causing this? It's on a VPS, and the main server hasn't been down once, but this VPS constantly goes offline, at least once a day...
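To see why cpsrvd keeps dying rather than just that it died, a hedged sketch to run on the VPS right after the next failure email (the cPanel log path is the usual default; a common culprit on small VPSes is the kernel's OOM killer):

```shell
# Is anything actually listening on the WHM port (2086)?
if command -v netstat >/dev/null; then
    netstat -tlnp 2>/dev/null | grep ':2086' || echo "nothing listening on 2086"
fi
# cPanel's own service log often records why cpsrvd exited:
if [ -f /usr/local/cpanel/logs/error_log ]; then
    tail -n 50 /usr/local/cpanel/logs/error_log
fi
# Out-of-memory kills show up in syslog:
if [ -f /var/log/messages ]; then
    grep -i 'out of memory' /var/log/messages | tail
fi
```

If "Out of Memory" lines appear around the failure timestamps, it's the VPS memory/burst limit killing cpsrvd, not cPanel itself.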
I have seen this on a few different cPanel servers. It mainly seems to be an issue with some kernels, but I have a server at SoftLayer and it seems to do it once the server has been up a few days.
At first I thought this was just an issue with grsecurity kernels, but it seems to be an issue with any kernel once it's been up a few days. On other servers, though, I have been able to change kernel versions and fix it. I suppose it may be related to disk space used as well.
The problem with my SoftLayer server is that the tech didn't follow my partitioning directions and installed everything in one slab (/), so I guess when you edit or add a quota it has to scan the entire disk.
I had recently rebooted that server into a 2.6.29-grsecurity kernel and it fixed the issue, but tonight I was messing around and it started all over. Not as bad as before, but still around 305 minutes of totally lagging the server.
My mount options are LABEL=/ / ext3 defaults,noatime,usrquota 1 1
Seems to do the same with or without noatime.
Has anyone run into this issue, or does anyone know a solution?
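A quick way to test the "quota scan on one big / filesystem" theory, sketched below: if that's the cause, a quota scan process should be visible and the disk (not the CPU) should be saturated while the box lags. Tool names assume a standard cPanel/RHEL layout; iostat comes from the sysstat package:

```shell
# Is a quota scan actually running during the lag?
ps aux | grep -E '[q]uotacheck|[f]ixquotas' || echo "no quota scan running right now"
# iostat (yum install sysstat) shows whether the disks are pegged:
if command -v iostat >/dev/null; then iostat -x 5 2; fi
```

High %util with low CPU in the iostat output while a quotacheck/fixquotas process is alive would confirm the single-partition layout is what hurts.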
I tried to upgrade from Apache 2.0.51 to 2.0.63, but it crashes as soon as the new version is started and httpd is reloaded.
Here is part of the log:
[Fri Apr 25 10:57:46 2008] [notice] Apache/2.0.63 (Unix) configured -- resuming normal operations
Segmentation Fault in 3642, waiting for debugger
Segmentation Fault in 3697, waiting for debugger
Segmentation Fault in 3696, waiting for debugger
[Fri Apr 25 10:58:11 2008] [notice] caught SIGTERM, shutting down
[Fri Apr 25 10:58:22 2008] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Apr 25 10:58:22 2008] [notice] mod_security/1.9.4 configured
[Fri Apr 25 10:58:22 2008] [notice] Digest: generating secret for digest authentication ...
[Fri Apr 25 10:58:22 2008] [notice] Digest: done
[Fri Apr 25 10:58:22 2008] [notice] LDAP: Built with OpenLDAP LDAP SDK
[Fri Apr 25 10:58:22 2008] [notice] LDAP: SSL support unavailable
[Fri Apr 25 10:58:23 2008] [notice] Apache/2.0.63 (Unix) configured -- resuming normal operations
Segmentation Fault in 5422, waiting for debugger
Segmentation Fault in 5421, waiting for debugger
Segmentation Fault in 5452, waiting for debugger
Segmentation Fault in 5461, waiting for debugger
Segmentation Fault in 5451, waiting for debugger
Segmentation Fault in 5466, waiting for debugger
Segmentation Fault in 5465, waiting for debugger
Segmentation Fault in 7363, waiting for debugger
Segmentation Fault in 5435, waiting for debugger
Segmentation Fault in 5906, waiting for debugger
Segmentation Fault in 7251, waiting for debugger
Segmentation Fault in 6041, waiting for debugger
Segmentation Fault in 7723, waiting for debugger
Segmentation Fault in 7986, waiting for debugger
Segmentation Fault in 9659, waiting for debugger
Segmentation Fault in 9643, waiting for debugger
Segmentation Fault in 9361, waiting for debugger
Segmentation Fault in 9744, waiting for debugger
Segmentation Fault in 9543, waiting for debugger
Segmentation Fault in 9879, waiting for debugger
Segmentation Fault in 9794, waiting for debugger
Segmentation Fault in 9758, waiting for debugger
[Fri Apr 25 11:01:10 2008] [notice] caught SIGTERM, shutting down
[Fri Apr 25 11:01:12 2008] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Apr 25 11:01:12 2008] [notice] mod_security/1.9.4 configured
[Fri Apr 25 11:01:12 2008] [notice] Digest: generating secret for digest authentication ...
[Fri Apr 25 11:01:12 2008] [notice] Digest: done
[Fri Apr 25 11:01:12 2008] [notice] LDAP: Built with OpenLDAP LDAP SDK
[Fri Apr 25 11:01:12 2008] [notice] LDAP: SSL support unavailable
[Fri Apr 25 11:01:13 2008] [notice] Apache/2.0.63 (Unix) configured -- resuming normal operations
Segmentation Fault in 11634, waiting for debugger
Segmentation Fault in 11635, waiting for debugger
Segmentation Fault in 11661, waiting for debugger
Segmentation Fault in 11695, waiting for debugger
Segmentation Fault in 11760, waiting for debugger
Segmentation Fault in 11723, waiting for debugger
Segmentation Fault in 11694, waiting for debugger
Segmentation Fault in 11837, waiting for debugger
Segmentation Fault in 11812, waiting for debugger
Segmentation Fault in 12022, waiting for debugger
Segmentation Fault in 11848, waiting for debugger
Segmentation Fault in 11879, waiting for debugger
Segmentation Fault in 13342, waiting for debugger
Segmentation Fault in 12062, waiting for debugger
Segmentation Fault in 13428, waiting for debugger
Segmentation Fault in 13569, waiting for debugger
Segmentation Fault in 11849, waiting for debugger
Segmentation Fault in 13881, waiting for debugger
Segmentation Fault in 13380, waiting for debugger
[Fri Apr 25 11:02:27 2008] [notice] caught SIGTERM, shutting down
[Fri Apr 25 11:02:29 2008] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Apr 25 11:02:29 2008] [notice] mod_security/1.9.4 configured
[Fri Apr 25 11:02:29 2008] [notice] Digest: generating secret for digest authentication ...
[Fri Apr 25 11:02:29 2008] [notice] Digest: done
[Fri Apr 25 11:02:29 2008] [notice] LDAP: Built with OpenLDAP LDAP SDK
[Fri Apr 25 11:02:29 2008] [notice] LDAP: SSL support unavailable
[Fri Apr 25 11:02:30 2008] [notice] Apache/2.0.63 (Unix) configured -- resuming normal operations
Segmentation Fault in 14123, waiting for debugger
Segmentation Fault in 14154, waiting for debugger
Segmentation Fault in 14156, waiting for debugger
Segmentation Fault in 14273, waiting for debugger
Segmentation Fault in 14114, waiting for debugger
Segmentation Fault in 14308, waiting for debugger
Segmentation Fault in 14316, waiting for debugger
Segmentation Fault in 14274, waiting for debugger
Segmentation Fault in 14315, waiting for debugger
Segmentation Fault in 15562, waiting for debugger
Segmentation Fault in 14113, waiting for debugger
Segmentation Fault in 15583, waiting for debugger
Segmentation Fault in 15615, waiting for debugger
Segmentation Fault in 15616, waiting for debugger
Segmentation Fault in 15584, waiting for debugger
Segmentation Fault in 15637, waiting for debugger
Segmentation Fault in 15631, waiting for debugger
Segmentation Fault in 15614, waiting for debugger
Segmentation Fault in 16332, waiting for debugger
Segmentation Fault in 14262, waiting for debugger
Segmentation Fault in 17504, waiting for debugger
Segmentation Fault in 15638, waiting for debugger
Segmentation Fault in 17515, waiting for debugger
Segmentation Fault in 18105, waiting for debugger
Segmentation Fault in 17516, waiting for debugger
Segmentation Fault in 18163, waiting for debugger
Segmentation Fault in 18175, waiting for debugger
Segmentation Fault in 18177, waiting for debugger
Segmentation Fault in 18178, waiting for debugger
Segmentation Fault in 18149, waiting for debugger
Segmentation Fault in 19931, waiting for debugger
Segmentation Fault in 20098, waiting for debugger
Segmentation Fault in 18176, waiting for debugger
I am on a VPS and am quite a noob at this myself. The upgrade is performed by the support people.
If you guys can help, please explain in layman's terms. Let me know if you need some specific info about the system.
It was suggested to me that some modules might not have been upgraded yet and are causing the crash, so to fix it I should remove modules one by one from the httpd config until I get to the problematic one.
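The module-bisection suggestion can be sketched like this. The config path is an assumption (a typical non-cPanel RPM layout would be /etc/httpd/conf/httpd.conf instead), and the line number in the sed example is purely illustrative:

```shell
# Hedged sketch: find the module that segfaults by disabling them one at a time.
CONF=${CONF:-/usr/local/apache/conf/httpd.conf}
if [ -f "$CONF" ]; then
    cp "$CONF" "$CONF.bak"            # always keep a backup before editing
    grep -n '^LoadModule' "$CONF"     # list loaded modules with line numbers
    # Comment out ONE module (line 42 is just an example), then test:
    #   sed -i '42s/^LoadModule/#LoadModule/' "$CONF"
    #   apachectl configtest && apachectl restart
    # If the "Segmentation Fault" lines stop, the module you just disabled
    # (or a third-party module compiled against the old 2.0.51) is the culprit.
else
    echo "config $CONF not found - ask support for the real path"
fi
```

Third-party modules (mod_security, PHP, etc.) generally need to be rebuilt against the new Apache version, so those are the first candidates to disable.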
I have a problem every few days: the server keeps hanging with an "Out of Memory" message, and SSH just hangs and doesn't connect. Every time, I have to call out a tech to manually reboot it.
Is there a setting I can change to make SSH connect even when the server is out of memory, or anything that can prevent this from happening?
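Two knobs that can help, sketched below with the usual caveats: you can't make SSH immune to a truly exhausted box, but you can tell the OOM killer to spare sshd so it kills the memory hog instead, and stricter overcommit makes allocations fail early rather than hanging the machine. oom_score_adj exists on newer kernels, oom_adj on older ones:

```shell
# Make the OOM killer prefer other processes over sshd:
for pid in $(pgrep -x sshd 2>/dev/null); do
    if [ -w /proc/$pid/oom_score_adj ]; then
        echo -1000 > /proc/$pid/oom_score_adj    # newer kernels
    elif [ -w /proc/$pid/oom_adj ]; then
        echo -17 > /proc/$pid/oom_adj            # older kernels
    fi
done
# Fail allocations instead of overcommitting until the box hangs
# (persist in /etc/sysctl.conf if it helps):
sysctl -w vm.overcommit_memory=2 2>/dev/null || echo "run as root to change sysctls"
```

The real fix is still finding what eats the RAM; these settings just keep you able to log in and look.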
I am running cPanel and WHM, and every morning I need to restart Apache manually for it to work. Then it works for the whole day. I guess it crashes, but I don't know why. Which logs do I need to check?
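A sketch of the usual places to look on a cPanel box (the paths are cPanel's defaults; adjust if your build differs). Since it dies at roughly the same time each morning, also compare against overnight cron activity:

```shell
# The usual suspects for an Apache death on a cPanel server:
for f in /usr/local/apache/logs/error_log \
         /usr/local/cpanel/logs/error_log \
         /var/log/messages; do
    if [ -f "$f" ]; then
        echo "== $f =="
        tail -n 30 "$f"
    fi
done
# Anything cron ran in the early morning (log rotation, backups, upcp)?
if [ -f /var/log/cron ]; then grep ' 0[0-6]:' /var/log/cron | tail -20; fi
```

Log rotation restarting Apache with a broken config, or a nightly backup exhausting RAM, are classic causes of a same-time-every-morning death.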
Problem: the machine crashes exactly every 10 minutes. The crash occurs with no entry in the logs and with a 0.00 load. It is as if someone cuts the power every 10 minutes.
Here are the specs:
- 2x Intel Xeon 2.0 GHz CPUs
- 8 GB ECC RAM
- 2 x 250 GB HDs
This machine needs plenty of current. I wonder if I am going over the rack power quota. Maybe there is a system that allows overages for 10 minutes and then cuts the current back to the rack quota.
I have to reboot this server each day. It seems to crash (or freeze, if you like) around 11:55 pm server time (Central US). How do I trace the cause of this? I installed CSF/LFD to replace APF/BFD.
This is a centos 3.9 / cpanel 11 box.
crontab -e shows nothing out of the default ordinary
/var/log/messages shows this
Nov 23 19:14:00 server sshd(pam_unix)[29523]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:02 server sshd(pam_unix)[29533]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=gopher
Nov 23 19:14:02 server sshd(pam_unix)[29541]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=gopher
Nov 23 19:14:02 server sshd(pam_unix)[29553]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=gopher
Nov 23 19:14:02 server sshd(pam_unix)[29570]: check pass; user unknown
Nov 23 19:14:02 server sshd(pam_unix)[29570]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:05 server sshd(pam_unix)[29580]: check pass; user unknown
Nov 23 19:14:05 server sshd(pam_unix)[29580]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:08 server sshd(pam_unix)[29584]: check pass; user unknown
Nov 23 19:14:08 server sshd(pam_unix)[29584]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:11 server sshd(pam_unix)[29588]: check pass; user unknown
Nov 23 19:14:11 server sshd(pam_unix)[29588]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:13 server sshd(pam_unix)[29592]: check pass; user unknown
Nov 23 19:14:13 server sshd(pam_unix)[29592]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:16 server sshd(pam_unix)[29596]: check pass; user unknown
Nov 23 19:14:16 server sshd(pam_unix)[29596]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:19 server sshd(pam_unix)[29600]: check pass; user unknown
Nov 23 19:14:19 server sshd(pam_unix)[29600]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:22 server sshd(pam_unix)[29605]: check pass; user unknown
Nov 23 19:14:22 server sshd(pam_unix)[29605]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:25 server sshd(pam_unix)[29614]: check pass; user unknown
Nov 23 19:14:25 server sshd(pam_unix)[29614]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:27 server sshd(pam_unix)[29618]: check pass; user unknown
Nov 23 19:14:27 server sshd(pam_unix)[29618]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com
Nov 23 19:14:30 server sshd(pam_unix)[29622]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=mailnull
Nov 23 19:14:33 server sshd(pam_unix)[29631]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=nfsnobody
Nov 23 19:14:36 server sshd(pam_unix)[29635]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=rpcuser
Nov 23 19:14:38 server sshd(pam_unix)[29639]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=rpc
Nov 23 19:14:41 server sshd(pam_unix)[29643]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=a2.f5.5646.static.theplanet.com user=gopher
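Those sshd lines are an SSH brute-force attempt, which CSF/LFD should already be blocking; they're probably unrelated to a crash at a fixed time of day. For a freeze that hits around 11:55 pm, a hedged sketch of what to check — scheduled jobs near that time, and what the kernel logged right before the last freeze (cPanel's own nightly maintenance also runs late at night):

```shell
# Anything scheduled in the 23:5x window? Check system crontabs, not just root's:
grep -r '5[0-9] 23' /etc/crontab /etc/cron.d 2>/dev/null || echo "nothing scheduled near 23:5x"
# Kernel/system messages from just before midnight:
if [ -f /var/log/messages ]; then grep ' 23:5' /var/log/messages | tail -20; fi
# Root's own crontab, for completeness:
crontab -l 2>/dev/null | grep ' 23 ' || true
```

If nothing is scheduled, leave a console session running `vmstat 5` (or sar) into a file before 11:55 pm so you capture the last seconds before the freeze.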
My MySQL server keeps crashing. I wonder if it is because it runs out of memory. Last week I upgraded the memory from 2 GB to 6 GB, and when I execute the command "free" to check memory usage, it shows 5 GB used and leaves under 10,000k unused. Do you think this might be the cause of the weekly MySQL crashes?
I only use the server for mysql, so it is a dedicated mysql server.
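One thing worth knowing before concluding it's out of memory: Linux deliberately uses "free" RAM for disk cache, so a tiny "free" figure in the first row of `free` is normal and healthy. A sketch of what to look at instead (the cache-adjusted figure, and swap churn):

```shell
# The first "free" column counts cache as used; look at the
# "-/+ buffers/cache" row (or "available" on newer procps) instead:
if command -v free >/dev/null; then free -m; fi
# Sustained si/so (swap-in/swap-out) columns here are the real
# out-of-memory warning sign:
if command -v vmstat >/dev/null; then vmstat 1 3; fi
```

If the cache-adjusted free figure is healthy and swap is quiet, the weekly crashes are more likely something else, e.g. mysqld's own buffer settings or table corruption; the MySQL error log (often hostname.err in the datadir) should say why it died.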
I've been trying to see what's going on with this server for a few days now but am unable to resolve the issue.
This is a 3-server setup: one is a mail server, another a MySQL server, and then there's the host server. Apache on the host has plenty of RAM (8 GB) and is currently under 5 GB. The site is mainly a social networking site.
Now, Apache would crash at times like 1:16, 2:17, 3:16, 4:18; basically at one-hour intervals. It doesn't always crash every hour, though; sometimes it might go 5 hours and crash at, say, 8:50.
Well, it turns out that the mail server's new mass-mailing method has some sort of effect on the host server.
This is what happens.
1.) Host has a ton of Apache processes up and running (site is fine and dandy).
2.) Mail server is about to launch its email barrage (load about 0.5).
3.) Then all of the Apache processes stop appearing in my top (nobody is able to browse the site at this point).
4.) After a few minutes the Apache processes reappear and the site loads again.
5.) Mail server is now mass mailing and its load reaches up to 4.
6.) WHM emails me that Apache was down and that it restarted the service.
Here is the interesting thing about this. First of all, I found out that the Apache processes were actually not gone; they were somehow suppressed. For example, during this freeze moment, if I run "ps aux" I see that all the Apache processes are running, yet they are not using any CPU or RAM (which is why I don't see them in top). And if I do a "service httpd restart" during this moment, it restarts but stays frozen until its set time.
When the site is accessible again, that is when the mail starts leaving from the mail server.
The owner thinks this is related, and I tend to agree.
The only problem is that I have no clue as to what is going on. Apache does not leave any error logs, period. Nothing in messages or in the Exim logs. For a while I thought it might be lfd killing Apache, but I was wrong. Nothing informative in the lfd logs either.
Could it be perhaps a bind issue? Somehow the mail server takes over the host system before its launch?
I'm posting this in the hope that some of the techies here can give me some hints and directions on where I can find an answer. Perhaps someone reading this thread has seen this kind of issue before.
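Since there are no error logs, the process state at the moment of the freeze is the most useful evidence. The symptoms described (processes present in `ps` but consuming no CPU) often mean the workers are blocked waiting on something remote — DNS, a remote MySQL socket, or NFS — which would fit the mail server somehow monopolizing a shared resource before its send. A hedged sketch to run during the next freeze:

```shell
# Capture worker states BEFORE restarting anything.
# STAT "D" = uninterruptible sleep (blocked on I/O or a remote resource);
# wchan hints at what the kernel is waiting on.
ps -eo pid,stat,wchan:30,args | grep '[h]ttpd' | head -40 || echo "no httpd processes right now"
# If mod_status is enabled, the scoreboard shows what each worker was doing:
#   lynx -dump http://localhost/server-status
```

If most workers show D with a network/socket-related wchan while the mail server "prepares" its barrage, that points at a shared dependency (DNS lookups, a common MySQL host) rather than Apache itself.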
I have a VPS with Linux and 128 MB RAM and the Control Panel is an Interworx one. Backups are made with SiteWorx (a panel within NodeWorx, and only visible one for Shared Host customers).
- The VPS is working properly the whole day;
- The content of my VPS (besides the necessary software) is a PHPBB 3.0.0 forum that is heavily visited. Its subject is World of Warcraft, a popular MMORPG;
- Making a backup succeeds, and the system notifies me of this by e-mail, which does reach me;
- Shortly after that my VPS crashes and stays offline until I restart it or my webhost notices that it is down. NodeWorx and SSH are inaccessible. As soon as I can get into SSH, I can restart the MySQL server and everything works properly again.
I suspect that 128 MB RAM is too little for my VPS backups.
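One way to confirm the RAM theory before the next backup window: log memory continuously into a file, then read it back after the crash. A minimal sketch using only /proc (the log path and interval are arbitrary; run it in screen or from cron, and lengthen the loop for real use):

```shell
# Append a memory snapshot periodically; survives the crash on disk.
LOGFILE=${LOGFILE:-/tmp/mem.log}
for i in 1 2 3; do     # in practice: while true; do ... sleep 10; done
    echo "$(date '+%H:%M:%S') $(awk '/^MemFree|^SwapFree/ {printf "%s %s kB  ", $1, $2}' /proc/meminfo)" >> "$LOGFILE"
    sleep 1
done
tail "$LOGFILE"
```

If MemFree and SwapFree both collapse right at backup time, 128 MB is indeed too little; mysqld being the first thing the OOM killer takes would also explain why restarting MySQL fixes everything.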
I have a dedicated server with burst.net and it's been pretty good; well, it was, but recently it seems to be crashing at least once every day. On the server I have created 2 VPSes, both with cPanel installed, but every day I receive emails from my server telling me that cPanel has crashed and the server had to be rebooted, which seems odd.
I've contacted cPanel support numerous times now, and every time they tell me that they're looking into it; they update cPanel but the problem persists.
has anyone got any experience with this?
Server information: 2048 MB RAM, 2.8 GHz processor, CentOS 5, HyperVM
Each VPS has 1024 MB of RAM assigned to it, and RAM doesn't seem to be the problem, because the VPSes never use more than 200 MB each. Each VPS was created in HyperVM with the "OpenVZ" type selected. Could there be a problem with my server, as in hardware, or is this a software problem? Has anyone been through this before? I'm mighty confused about what's going on.
I'm using mod_perl 2.0.7 on Windows with Apache 2.2.23. I got Apache from Apachelounge, and compiled mod_perl and perl 5.16.2 myself using Visual Studio 2008. I'm using a 32-bit Windows Vista.
Pretty frequently my app (which works just fine on Linux) makes Apache crash. If I perform 500 requests with Apache Bench, I see this:
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
apr_socket_recv: An existing connection was forcibly closed by the remote host. (730054)
Total of 338 requests completed
In the apache error log I see apache is restarting, but this results in some HTTP 500 errors that make the apache-bench results fail. When using a web browser, I also get these http 500 errors.
If I run the application using native CGI (i.e. I turn off mod_perl) I do not see crashes but of course it is *very* slow.
How can I find out what makes apache/mod_perl crash?
I asked the same on the modperl mailing lists, there they said I needed the symbol files for apache (*.pdb files). Where are these?
Today my server started to go offline, and each time I restarted it, it went down again within a couple of minutes. So I looked into it and found that I had over 7,000 emails in the Exim mail queue. All of them are spam. I deleted them through WHM and now everything seems good again, but...
My question is: how can I check what caused this problem? Has one of my domains been compromised and is it sending out spam? What can I do?
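Exim's main log usually answers this: for locally injected mail it records the working directory (`cwd=`) of the sending process, which typically points straight at a compromised script or an abused form. A hedged sketch (the log path is cPanel's default for Exim):

```shell
# Rank the directories that injected mail - a flood from one
# public_html directory means that account's script/form is the source:
LOG=${LOG:-/var/log/exim_mainlog}
if [ -f "$LOG" ]; then
    grep -o 'cwd=[^ ]*' "$LOG" | sort | uniq -c | sort -rn | head
else
    echo "$LOG not found"
fi
```

Also check the authenticated-sender fields in the same log; if the spam was sent with a mailbox password rather than via a script, the fix is changing that account's password instead of cleaning a script.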
I encountered a strange problem when installing Apache 2.2 or 2.4 as a service and setting the service to start automatically. On restart of Windows, the Apache service crashes with an access violation.
I've got a problem at a local customer with rotatelogs.exe and the current release of Apache HTTPD 2.4.12.
I've downloaded the 64-bit zip-file (VC11) and installed the VC11 vcredist in both 32- and 64-bit version.
The project is to upgrade apache 2.2 to apache 2.4. I've adjusted the configuration and added rotatelogs for log rotation for error_log and access_log.
The configuration is 100% correct, I can copy the line to cmd.exe and it runs correctly.
Variables are set in the global environment; APACHE_HOME is set with "/" instead of "\" to make sure rotatelogs.exe is found.
Our Windows 2003 application server crashed on a RAID 5 array. We tried to take the NTFS filesystem from the hard drive and mount it on Knoppix booted from a CD-ROM drive. Knoppix could read the drive but was unable to mount it, I guess for compatibility reasons.
Is there any way we can get a backup of that NTFS volume and restore our data?
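A hedged sketch of the usual recovery route from a Linux live CD. The device name /dev/sda1 is an assumption (check `fdisk -l` for the real partition), and everything should stay read-only until the data is copied off. Older Knoppix releases only shipped the read-only in-kernel "ntfs" driver; ntfs-3g (or a newer live CD that includes it) handles NTFS much better:

```shell
DEV=${DEV:-/dev/sda1}
MNT=${MNT:-/mnt/ntfs}
mkdir -p "$MNT" 2>/dev/null || MNT=$(mktemp -d)
# Try ntfs-3g first, fall back to the old read-only driver:
mount -t ntfs-3g -o ro "$DEV" "$MNT" 2>/dev/null \
    || mount -t ntfs -o ro "$DEV" "$MNT" 2>/dev/null \
    || echo "mount failed - wrong device name, ntfs-3g missing, or dirty volume"
# Once mounted, copy the data off BEFORE attempting any repair, e.g.:
#   rsync -a "$MNT"/ /some/backup/target/
```

If the mount fails because the crash left the volume marked dirty, the safest fix is attaching the disk to another Windows machine and letting chkdsk run; `ntfsfix` from ntfsprogs can clear some dirty flags, but only use it after imaging the disk.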
I would like to know how to check the load via SSH and find the files/processes causing it.
I want the SSH commands for two different control panel setups: one with cPanel+WHM and the other with Kloxo+HyperVM.
I would also like to know how to find what is causing the load; some processes could have been interrupted while running and might be generating load at times, so I want to stop such processes if any are running on my friend's accounts on the VPS.
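The good news is that these are plain Linux commands, so they work the same under cPanel/WHM and Kloxo/HyperVM — the panel doesn't matter. A sketch:

```shell
uptime                              # the 1/5/15-minute load averages
ps aux --sort=-%cpu | head -15      # top CPU consumers
ps aux --sort=-%mem | head -15      # top memory consumers
# Stuck or interrupted processes usually sit in state D (blocked on I/O)
# or Z (zombie); list them (header row skipped):
ps -eo pid,user,stat,args | awk 'NR>1 && $3 ~ /D|Z/'
# To stop a runaway process by its PID, try a polite TERM before a KILL:
#   kill 12345
#   kill -9 12345
```

Note that D-state processes can't be killed until their I/O completes, and zombies disappear only when their parent reaps them (or is killed), so not everything in that list responds to `kill`.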
I just got an email from my VPS saying that a BFD attack was stopped and the IP was banned after 40 failed attempts to log into ftpdpro. I logged in and started looking around, and I noticed this in my APF log file:
Code:
Jan 15 00:54:07 s1 apf(22290): {glob} firewall initalized
Jan 15 00:54:07 s1 apf(22290): {glob} fast load snapshot saved
Jan 15 00:58:06 s1 apf(32425): {glob} uptime less than 5 minutes, going full load
Jan 15 00:58:06 s1 apf(32425): {glob} activating firewall
Jan 15 00:58:06 s1 apf(32500): {glob} unable to load iptables module (ip_tables), aborting.
Jan 15 00:58:06 s1 apf(32425): {glob} firewall initalized
Jan 15 00:58:06 s1 apf(32425): {glob} fast load snapshot saved
Jan 15 01:00:04 s1 apf(3950): {glob} uptime less than 5 minutes, going full load
My concern is that it says "unable to load iptables module (ip_tables), aborting."
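On an OpenVZ/Virtuozzo VPS this message is common and has a specific meaning: the container shares the host's kernel and cannot load kernel modules itself, so ip_tables (and the ipt_* match modules APF needs) must be enabled for your container by the host administrator. A quick check from inside the VPS, sketched below:

```shell
# Does iptables work at all inside this container?
if iptables -L -n >/dev/null 2>&1; then
    echo "iptables works in this container"
else
    echo "iptables unavailable - ask the host to enable ip_tables (plus the ipt_* modules APF needs, e.g. ipt_state) for this VE"
fi
```

If the check fails, no firewall config on your side will help until the provider enables the modules; once they do, restart APF and the "aborting" line should disappear.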
Is there anything that logs server load and which processes caused any spikes?
One of my servers keeps going down under high load; it seems to lock up and the NOC has to reboot it. Of course the techs can't diagnose a problem afterwards, as it runs fine by then, and when I send them a ticket while it's down, the server can't be reached at all, so they can't diagnose it then either.
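For exactly this "it only breaks when nobody can look" situation, historical accounting is the answer. A sketch using sysstat's sar, which samples load/CPU/memory throughout the day so you can read back what led up to the lockup after the reboot (install via your package manager if missing):

```shell
if command -v sar >/dev/null; then
    sar -q     # load averages and run queue through the day
    sar -r     # memory usage over the same period
else
    echo "sysstat not installed"
fi
# A cruder fallback: snapshot the top processes every 5 minutes from cron,
# so the last snapshot before a lockup names the likely culprit:
#   */5 * * * * ps aux --sort=-%cpu | head -20 >> /var/log/topsnap.log
```

atop is another option worth a look; it logs per-process history, not just system totals.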
I moved a domain of mine from one of my CentOS servers on my SoHo LAN, to one of my CentOS cPanel/WHM servers. Since the SoHo machine had been handling this domain's mail for almost 2 years (300+ mb of mail), I decided to continue running it from home.
The Apache daemon was stopped on said SoHo box following DNS propagation to the cPanel machine, but Apache was automatically started again after I had to reboot the SoHo server. Before I got a chance to kill Apache, I got some weird entries showing up in the access_logs.
I ask simply because I don't recall seeing a "CONNECT" entry in my logs before, and I've been at this for a while. That, or I've just not paid any attention. And what's with the SSL port?
I guess I'm just a little confused as to what was trying to be accomplished here...it hasn't returned since.
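For what it's worth, CONNECT entries in an access_log are almost always open-proxy probes: someone testing whether your Apache will tunnel their traffic to another host (the SSL port shows up because CONNECT host:443 is the standard way proxies tunnel HTTPS; port 25 for spam relaying is the other favorite). A hedged sketch for counting them, with an assumed example log path:

```shell
# Tally CONNECT probes by source IP and requested target.
# In combined log format the request method is field 6 and the target field 7:
LOG=${LOG:-/var/log/httpd/access_log}
if [ -f "$LOG" ]; then
    awk '$6 ~ /CONNECT/ {print $1, $7}' "$LOG" | sort | uniq -c | sort -rn | head
else
    echo "$LOG not found"
fi
```

As long as mod_proxy forwarding isn't enabled (ProxyRequests Off, which is the default), Apache answers these with an error and nothing is actually relayed, so a handful of such entries is noise rather than a compromise.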