My /etc/php.ini settings seem reasonable and the session save path looks correct: session.save_path = "/var/lib/php/session"
But the session files never get deleted automatically. The session folder fills up quickly and I have been deleting session files manually while trying to resolve the problem. How can I verify that the default Plesk cron jobs are set up and running properly?
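Plesk normally cleans stale session files from its own scheduled maintenance task (look under /etc/cron.* for a Plesk/PHP session-cleanup entry; the exact path varies by Plesk version). While tracking down why that job isn't firing, here is a hedged manual equivalent of the garbage collection it should be doing. It runs against a scratch directory so it can be tried safely; on the real server you would point SESSION_DIR at /var/lib/php/session.

```shell
SESSION_DIR="$(mktemp -d)"          # stand-in; real path is /var/lib/php/session
MAXLIFETIME=1440                    # seconds; the php.ini default for session.gc_maxlifetime

# simulate one stale and one fresh session file
touch -m -d '2 hours ago' "$SESSION_DIR/sess_old"
touch "$SESSION_DIR/sess_new"

# delete files not modified within the lifetime (find counts minutes)
find "$SESSION_DIR" -type f -mmin "+$((MAXLIFETIME / 60))" -delete

ls "$SESSION_DIR"                   # only sess_new should remain
```

Note that on save_path-based setups PHP's own session.gc_probability mechanism can also do this cleanup, but many distro packages set gc_probability to 0 and rely on a cron job instead, which is why a missing cron entry makes the directory fill up.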
When you delete a site backup from the Backup Manager panel, it is removed and no longer displayed in the Panel. However, I cannot tell whether this actually does anything to the real backup files in /var/lib/psa/dumps. Does it merely remove the entry from PSA's database without touching the files? If so, how are backup files supposed to be managed, if deleting them in the Panel doesn't actually delete them?
I've had some recurring problems with my host, VPSLink. For some reason startup scripts and init files keep being deleted and I can't reach my site. I am admittedly a bit of an amateur at maintaining a site, but I haven't touched or deleted these files. What could be going on? An attack from someone on the web, or some kind of VPSLink-related problem?
This is part of the latest reply I got from VPSLink: "We have managed to get your VPS back online. It appears some Ubuntu package changed the way networking is started/shut down, which removes the /var/run/network directory completely. This directory contains the 'ifstate' file, which OpenVZ uses to set up the network interfaces."
The site is indeed up and running again so I'm not desperate, but I would very much like to understand what's going on here.
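Given VPSLink's explanation, a common stop-gap (an assumption on my part, not something VPSLink prescribed) is to recreate the directory and the empty ifstate file before networking starts, e.g. from /etc/rc.local or an early init script. The sketch below runs against a scratch root so it can be tried safely; drop the FAKE_ROOT prefix on the real container.

```shell
FAKE_ROOT="$(mktemp -d)"            # remove this prefix on the real container

# recreate what the package upgrade keeps wiping out
mkdir -p "$FAKE_ROOT/var/run/network"
[ -f "$FAKE_ROOT/var/run/network/ifstate" ] || touch "$FAKE_ROOT/var/run/network/ifstate"

ls "$FAKE_ROOT/var/run/network"     # ifstate
```

On newer systems /var/run is a tmpfs that is emptied on every boot by design, which is exactly why anything depending on a file surviving there needs to recreate it at boot time.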
I'm asking this on behalf of a friend who is having a problem with his dedicated server (he doesn't speak English very well). He has an unmanaged dedicated server; he changed his SSH port and forgot what it is. He can still access WHM right now, meaning he knows the correct root password, but he has forgotten the SSH port.
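Since he still has root via WHM, he can read the port straight out of the SSH daemon's config file. The sketch below demonstrates on a sample file (the "2222" is an invented example); on the real server he would run the same extraction against /etc/ssh/sshd_config from a WHM terminal session.

```shell
CONF="$(mktemp)"                    # stands in for /etc/ssh/sshd_config
printf 'PermitRootLogin yes\nPort 2222\n' > "$CONF"

# sshd falls back to 22 when no Port directive is present
PORT=$(awk '/^[[:space:]]*Port[[:space:]]/ {print $2}' "$CONF")
echo "sshd listens on port ${PORT:-22}"
```

If reading the config is awkward, `ss -tlnp` (or `netstat -tlnp` on older boxes) run on the server shows which port sshd has actually bound, and as a last resort a full port scan from outside (`nmap -p- <server-ip>`) will also find it.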
I had an issue with Limestone Networks over the last 24 hours. My two servers went down for 12 hours, but when they came back up the separate MySQL server would not load. Somehow the hard drive on that server failed during the upgrade, and they cannot retrieve the databases on it: a database containing over 60,000 members. Through my own undoing I didn't have a recent backup; a cPanel backup job had been backing up the same old file instead of the updated one.
I would have been happy if the site came back and at least the data was still there. I'm just upset that this "network upgrade" has fried my database. The lesson I take from this is "you get what you pay for". I'm keeping my fingers crossed that they will be kind enough to make a serious attempt to recover the data, or at least let me send it to someone who can.
Does anyone have any suggestions on how to retrieve data from a Linux server hard drive?
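The usual first step on a failing drive is to image it once, read-only, and then do all recovery work on the image. GNU ddrescue is the preferred tool if it can be installed (something like `ddrescue -d -r3 /dev/sdX sdX.img sdX.map`: it retries bad sectors and keeps a map of what it could not read). As an always-available fallback, plain dd with `conv=noerror,sync` skips unreadable blocks instead of aborting. The sketch below demonstrates that fallback against a scratch file standing in for the failing disk:

```shell
SRC="$(mktemp)"; IMG="$(mktemp)"
dd if=/dev/urandom of="$SRC" bs=1024 count=16 2>/dev/null   # stand-in for the failing disk

# image it, skipping unreadable blocks instead of aborting on the first error
dd if="$SRC" of="$IMG" bs=4096 conv=noerror,sync 2>/dev/null

cmp -s "$SRC" "$IMG" && echo "image matches source"
```

On the real drive that would be `dd if=/dev/sdX of=/path/on/healthy/disk/sdX.img ...`, after which you can loop-mount the image and pull files (or the MySQL datadir) off it without touching the dying hardware again.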
Here are the steps that have been taken so far:
When we got to the server's console, it was sitting at a DISK BOOT FAILURE prompt. We first did the basics: disconnected the server and made sure all connections to the hard disk and motherboard were secure. Upon reboot we checked whether the BIOS detected the drive, which it did, but the server still reported disk boot failure. We then went back into the BIOS to make sure the hard drive was not being skipped in the boot order. It was not.
The "recovery tools" we ran were just a basic Linux live CD, to see if we could get the hard drive detected inside an operating system. It would not even detect the disk in the machine. We did not do anything to the drive that would cause it to fail. I spoke directly to my manager, who was supervising the move, and I was told that all servers were shut down cleanly and moved safely and securely.
The hard disk is still detected by the BIOS, but it certainly has some internal issues, as you can hear constant buzzing and clicking during POST and afterward.
I bought a domain name and web hosting from a company. The problem is that the company I booked the domain and hosting from no longer exists.
Initially the hosting company sent me only FTP and mail server details. Now I need the cPanel details to configure a database. So I kindly ask all of you to suggest some ways I could get the cPanel details.
I am posting this thread to get other people's advice and to warn them about my bad experience with rsync. Luckily, I was able to get my data back from the old drive.
Three times a day, I take a mysqldump and then rsync that dump to a drive located in a different state.
Everything was working fine; rsync was transferring data daily and updating the backup on the other server. A few days ago there was a hard drive failure on my server, so I checked the MySQL dump on my backup drive. It was 764 bytes instead of 5 GB.
Then I went to the other server I rsync to, and to my surprise that copy was also 764 bytes instead of 5 GB, because rsync had synced the broken dump over the good one.
My backup strategy failed, and I would have been in tears if I couldn't grab the data from the failed drive.
I would like to hear everyone's views on this and learn from it.
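The core failure here is that mirroring a single, overwritten dump file mirrors its corruption too. Keeping dated dumps means a bad dump can never silently replace every good copy. A minimal sketch, run against a scratch directory (all paths and the retention count are assumptions, not anything from the original setup):

```shell
BACKUP_DIR="$(mktemp -d)"           # e.g. /backup/mysql on a real host
STAMP=$(date +%Y%m%d-%H%M%S)
KEEP=21                             # 3 dumps/day x 7 days retained

# on the real server: mysqldump --all-databases | gzip > "$BACKUP_DIR/all-$STAMP.sql.gz"
printf 'stand-in dump\n' | gzip > "$BACKUP_DIR/all-$STAMP.sql.gz"

# sanity check: a 764-byte "dump" of a 5 GB database should raise an alarm
SIZE=$(stat -c %s "$BACKUP_DIR/all-$STAMP.sql.gz")
if [ "$SIZE" -lt 1000 ]; then
    echo "WARNING: dump is only $SIZE bytes" >&2
fi

# prune everything but the newest $KEEP dumps
ls -1t "$BACKUP_DIR"/all-*.sql.gz | tail -n +"$((KEEP + 1))" | xargs -r rm --
```

Then rsync the whole directory rather than one file: the remote side keeps every dated copy, so even if the newest dump is broken, last week's copies survive.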
The host is going to mount this hard drive on the same machine after adding a new hard drive and doing a fresh install. Does anyone have recommendations for how I can go about recovering the data, specifically the MySQL databases?
My server's hard disk crashed badly. The rescue system on the server can't help, so I've tried using recovery software to get my data back.
I've tried Easy Recovery Professional. It sorts all the files by file type into different folders. I found a folder named .DB, and there are also some .ado and .ldb folders. I guess one of them is my database. The problem now is that I don't know how to read the files.
Do you have any idea how to read them? I've tried many recovery tools, e.g. DiskInternals Linux Recovery and Disk Doctor Linux Recovery.
As I have never used cPanel and have no experience with control panels whatsoever, while ordering a couple of new servers I am wondering what the average recovery time is after a server failure that involves data (i.e. a disk failure).
I am interested in understanding this because I need to choose between hardware RAID and a backup disk.
For example, at LiquidWeb (just one of the many managed providers I am evaluating) you can pay $264 for a server with 2x250 GB drives in hardware RAID 1, or $219 with two plain 250 GB drives, where the second one is supposed to be a backup drive.
I have yet to ask, but at these prices the servers most likely do not have hot-swappable drives that would let them rebuild the array "live". So even if I choose hardware RAID, in case of a disk failure I face downtime: bring the server down, replace the disk, rebuild the array, bring the server back up, which I guess is no less than 30-60 minutes of downtime.
The advantage of this solution is that a disk failure keeps the system running, so you can schedule a maintenance window to replace the disk and rebuild the array, I guess.
On the other hand, relying on a backup disk (plus, of course, rsyncing data to an offsite server; I would do that anyway), you save $45 each month, and after all, disks do not fail every month.
If the main disk fails, they need to replace it and reinstall the server from what I suppose is a standard image (so figure a couple of hours, as they have a 30-minute hardware replacement SLA); then you need to restore backups (about 20 GB in my case). So I guess it would take something like 4 hours of downtime on average.
Am I correct?
What would you recommend as a solution with cPanel, taking into account the big price difference: backup drive or hardware RAID?
This would be for a single big website, not a hosting company with resellers and customers, so I would value the monthly saving more than high availability; but of course I am interested to know the average time to recover a cPanel server after a drive failure.
Besides, I have another question, as I have never had a colocated server with hardware RAID, so I do not know anything about it.
When a drive fails in a hardware RAID 1 environment, how does the hosting provider get notified? I suppose it's not just an LED blinking on the server, as no one would see it.
With software RAID (like on my computer at home) you get an email from mdadm saying "warning, array degraded", so you find out quickly, and you can check the status anytime with ' cat /proc/mdstat '. What about hardware RAID?
My fear is that nobody notices my RAID 1 drive failure, the server keeps running on a single drive, and then maybe the other drive fails too, which would be unpleasant; whereas with a single drive, a failure would obviously be very easy for me to spot.
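For the software-RAID case described above, the mdadm email notification is driven by a one-line config entry plus a running monitor, roughly like this (the address is a placeholder):

```shell
# /etc/mdadm/mdadm.conf (path is /etc/mdadm.conf on some distros)
MAILADDR you@example.com

# most distro packages start the monitor for you; by hand it would be:
mdadm --monitor --scan --daemonise

# manual checks at any time:
cat /proc/mdstat
mdadm --detail /dev/md0
```

For hardware RAID there is no single answer: each controller vendor ships its own CLI tool or monitoring agent, and whether anyone is watching depends entirely on the provider installing and configuring it. That is worth asking the provider explicitly before paying the RAID premium.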
Our Windows 2003 application crashed on a RAID 5 server. We tried to take the NTFS volume from the hard drive and mount it under Knoppix booted from a CD-ROM drive. Knoppix could see the partition but was unable to mount it, I guess for compatibility reasons.
Is there any way we can get a backup of that NTFS volume and restore our data?
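A likely cause, assuming the controller presents the RAID 5 set as one volume to Knoppix: after a crash, NTFS is left flagged "dirty", and Linux NTFS drivers will often refuse to mount it read-write in that state. Mounting read-only usually still works, and the ntfs-3g userspace driver handles more cases than the old in-kernel driver. A hedged sketch (the device name is an example; check `fdisk -l` for the real one):

```shell
mkdir -p /mnt/ntfs
ntfs-3g -o ro /dev/sda1 /mnt/ntfs          # userspace driver, read-only
# or, with the in-kernel driver:
mount -t ntfs -o ro /dev/sda1 /mnt/ntfs

# ntfsfix can clear the dirty flag if needed; note it does NOT
# repair the filesystem the way Windows chkdsk does:
ntfsfix /dev/sda1
```

Once mounted read-only, copy everything off to another disk before attempting any repair on the original volume.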
We are running cPanel on one of our servers. Several cron jobs were deleted from the cron panel of one account. I have no idea of the paths needed to re-enter these jobs. Is there a log file on the server that shows cron job history from previous runs, so I can recover the proper paths?
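Yes: cron logs each run with the user and the exact command line. On a CentOS/cPanel box the log is /var/log/cron (with rotated copies alongside); on Debian-family systems the entries land in /var/log/syslog. The sketch below greps a fake log line so it can be run anywhere; the username and path in it are invented for illustration.

```shell
LOG="$(mktemp)"                     # stands in for /var/log/cron
cat > "$LOG" <<'EOF'
Jun  1 04:00:01 host CROND[1234]: (bob) CMD (/usr/bin/php /home/bob/cron/run.php)
EOF

# each run is logged with the account name and the full command line
grep -o 'CMD (.*)' "$LOG"
```

On the real server, `grep 'CMD' /var/log/cron*` filtered by the account name should recover every command the deleted jobs used to run, as long as the logs have not rotated out of retention.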