I have a dedicated server and have Backup Configuration set up to run daily, weekly and monthly backups. What I can't seem to find in Backup Configuration (using WHM) is where to set the TIME you want the backups to begin. I know there must be some way to designate what time the backups start, and it is probably right under my nose, but being a relative newcomer to the world of dedicated servers, I can't find it. Can someone point me to the right place to set the backup time?
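For what it's worth, on most cPanel/WHM versions the start time isn't on the Backup Configuration screen at all; the backup run is launched by a cron entry under root, so changing the minute/hour fields of that entry changes when the backups begin. A rough sketch (the exact script path depends on whether you are on the legacy or the current backup system):

    crontab -l | grep -i backup      # as root: find the entry that launches the backups
    # typical entries look like (minute hour * * * command):
    #   0 2 * * * /usr/local/cpanel/bin/backup    # current backup system
    #   0 1 * * * /scripts/cpbackup               # legacy backup system
    crontab -e                       # change "0 2" to e.g. "30 3" for a 03:30 start

The daily/weekly/monthly schedule you set in WHM still applies; the cron entry only controls what time of day the run kicks off.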
I can't get access to a certain site. I always get the page with:
network timeout - the server at *** is taking too long to respond. More people have noticed this, and apparently it only happens to people on certain specific providers, and not all the time: sometimes they DO get access even though they belong to the same ISP. So I guess an ISP isn't blocking access to it, otherwise it would be permanent. The site administrator insists that certain ISPs are blocking his site. He's hosting it on his own server. The domain is registered at namecheap.com.
If an ISP is blocking this site (if that's possible?), that would lead to that 'network timeout' page, wouldn't it?
What is the most likely reason for getting a timeout page anyway?
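Not an answer to which ISP is at fault, but the quickest way to narrow a timeout down is to test from one of the affected connections and see where things stall (assuming a Linux/Mac machine; replace example.com with the actual domain):

    dig +short example.com                                     # does DNS resolve from this connection?
    curl -v --max-time 30 http://example.com/ -o /dev/null     # does an HTTP request get through, and how slowly?
    traceroute example.com                                     # where along the path do replies stop? (or: mtr -rwc 20 example.com)

If the traceroute dies inside the affected ISP's network but completes fine from other providers, that points at a routing or filtering problem between that ISP and the server rather than at the server itself, which would produce exactly that timeout page.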
I have a dedicated server with these specs: AMD 3500+ 64-bit CPU, 1 GB RAM, 160 GB SATA drive. For the past month, the CPU load average has been reaching values of 40-50. This happens about 5-6 times a day. When I stop the httpd service for 30 seconds, everything goes back to normal. I don't think this is a DoS attack because it happens so systematically; I can't believe anyone would do this regularly except bots.
Maybe it's a system service or a cron job, but then why does it stop when I turn off the httpd service? How can I find out what is causing this recurring load?
I also set up a script which mails me when the system load average goes crazy and restarts the httpd service, but an immediate restart doesn't stop the load from increasing.
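One thing that helped me in a similar spot was having the watchdog grab a snapshot of what the box is doing before it touches httpd, so afterwards you can see whether it was one IP, one site, or everything at once. A rough sketch of that idea (the threshold, log paths and mail address are placeholders, and it assumes a mail command is installed; adjust the access_log path to your layout):

    #!/bin/bash
    # snapshot the system when the 1-minute load average crosses a threshold
    THRESHOLD=20
    SNAP=$(mktemp)
    LOAD=$(awk '{print int($1)}' /proc/loadavg)
    if [ "$LOAD" -ge "$THRESHOLD" ]; then
        {
            date; uptime
            ps aux --sort=-%cpu | head -15            # top CPU consumers
            netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head -15   # connections per remote IP
            tail -n 50 /usr/local/apache/logs/access_log   # most recent requests
        } > "$SNAP" 2>&1
        mail -s "high load snapshot" you@example.com < "$SNAP"
        cat "$SNAP" >> /var/log/load-snapshots.log
    fi
    rm -f "$SNAP"

Run it from cron every minute; after the next spike the snapshot should tell you whether it's a handful of bot IPs hammering one vhost or a script on the server itself going wild.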
The server goes down from time to time: every 12 days or so the site hosted there is no longer accessible. It starts with the site slowing down more and more, and then it is no longer reachable. What we do is request a power cycle, and with that we start all over again until the next power cycle, and so on. Here are my server details and more info:
- MySQL 5.1.41-3ubuntu12.10
- Apache 2.2.14-5ubuntu8.4
- PHP 5.3.2-1ubuntu4.9
- Operating system: Ubuntu Server 10.04 LTS
After some time emailing the support guys to get them to actually check what's going on, we received an email with a few findings:
1.- We found a few errors that would likely cause issues with Apache. The first error is: [Mon Feb 04 05:03:10 2013] [error] mod_fcgid: fcgid process manager died, restarting the server, and the next error is: [Mon Feb 04 14:32:34 2013] [error] server reached MaxClients setting, consider raising the MaxClients setting ...
Both these errors seem to indicate that you have a process that is running out of control on your server. We were unable to determine which script on your site caused your connections to be maxed out; however, it does appear that before these errors were generated a WordPress plugin was referenced in your access logs...
2.- Additionally, during our review we did find that your error log for mercadodedinerousa.com is 45 GB, which is excessively large and can cause problems when Apache is trying to write to such a large file.
3.- The majority of the errors being logged are: [Wed Feb 06 12:12:31 2013] [error] [client 200.76.90.5] Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden: /var/www/vhosts/mercadodedinerousa.com/httpdocs/index.pl, referer: [URL]
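On point 3: that error is Apache refusing to apply mod_rewrite rules because the directory they live in doesn't allow symlink following, and since it is logged on every matching request it is also a big part of why that error log is growing. The usual fix is to enable one of those options where the rewrite rules are defined; a sketch, using the docroot from the error message:

    # in /var/www/vhosts/mercadodedinerousa.com/httpdocs/.htaccess (or the vhost's <Directory> block)
    Options +FollowSymLinks        # or: Options +SymLinksIfOwnerMatch
    RewriteEngine On
    # ... existing RewriteRule lines ...

If Apache then complains that Options isn't allowed in .htaccess, the vhost's <Directory> block needs AllowOverride Options, or the Options line has to go into the vhost config itself.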
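On point 2: a 45 GB error log means nothing is rotating it. Plesk normally rotates vhost logs on its own, but if that isn't happening for this domain, a plain logrotate drop-in will cap it; a sketch, where the drop-in file name and the log path are assumptions to adjust to wherever the 45 GB file actually lives:

    # /etc/logrotate.d/mercadodedinerousa (hypothetical drop-in)
    /var/www/vhosts/mercadodedinerousa.com/statistics/logs/error_log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        postrotate
            /etc/init.d/apache2 reload > /dev/null 2>&1 || true
        endscript
    }

Truncate the existing 45 GB file once by hand (as root: > error_log) so Apache isn't still appending to a huge file, and fixing point 3 above should remove most of what is filling it.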
Recently left a big corporate job and started my own consulting firm in the area of human resources/employee benefits. Right now, my technology consists of a laptop with online backup through Carbonite. I am ILLITERATE when it comes to technology, so bear with me please....
Background: The business plan calls for growth by adding a small number of employees, starting with an assistant, along with working with independent contractors. These will each need to have access to the files that now reside on my hard drive. I don't anticipate more than 6 people (employees and/or contractors, combined) in the first year.
In addition to the shared access described above, I would need to be sure that the environment where files are stored is highly secure, and that I can grant access to files to some people and not others.
In addition to the above, I need to ensure that all the data are backed up routinely.
Employees/contractors will likely not be in the same office location where I am located, and some (most) may work from a home office.
My assistant, when hired, may or may not be in my office. Regardless, (s)he will need to have access to (and make changes to) my contacts and calendar in Outlook. In addition, (s)he will need to be able to read and send emails on my behalf.
AT&T, as part of my advertising with them in the Yellow Pages, is developing (and will host) my website. Included in their service will be email boxes (up to 20) with my domain name.
Business Need: Based on these points, I figured I needed to buy a server, so I've been talking with Dell. Of course, they'll sell me a server, and they have a relationship with All Covered, who will install it and make sure it's operational.
But after talking with Dell, I learned that there was something called managed hosting, colocated hosting, dedicated hosting, and shared hosting. I called Rackspace; they said they would be overkill for what I needed and referred me to Mosso. Mosso said the same thing and sent me to this site.
I installed APF and configured the conf file, but I have a problem starting APF:
root@server [~]# /usr/local/sbin/apf -r
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
eth0: error fetching interface information: Device not found
Development mode enabled!; firewall will flush every 5 minutes.
Unable to load iptables module (ip_tables), aborting
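Two separate things are failing in that output, so here is how I would check both (the option names below are from a stock /etc/apf/conf.apf; adjust paths if yours differ):

    # 1) APF is set to eth0 but the system has no eth0 - find the real interface name
    #    (on a VPS it is often venet0 or similar):
    ip addr show
    #    then set in /etc/apf/conf.apf:
    #        IFACE_IN="venet0"
    #        IFACE_OUT="venet0"

    # 2) can the kernel load the iptables module at all?
    modprobe ip_tables && echo ok
    #    if this fails on a VPS, ask the host to enable the iptables modules for your container;
    #    if iptables is built into the kernel rather than as a module, set SET_MONOKERN="1"
    #    in conf.apf so APF stops trying to modprobe it

    /usr/local/sbin/apf -r            # restart APF; once it works, set DEVEL_MODE="0"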
While I have loads of experience running servers remotely, I know that starting my own ISP is a bit more costly and involves things like rack servers, temperature-controlled rooms and BANDWIDTH.
I have a handle on the first two (and yes, things like insurance and a storefront are taken care of).
What I need is a list of T3 providers that I can contact about purchasing a dedicated T3 line for our store. The location isn't yet set in stone, and I may in fact approach an existing computer store owner who may already be open to the idea.
The company will host local websites as well as other sites.
AND it will allow for reselling on a smaller level, including at least 6 others who host game servers.
With these 6, the T3 line will probably be almost half used, so the ISP must be able to scale for growth.
In a nutshell, what I need is pricing on a T3 line of my own in or close to downtown Oshawa, Ontario, Canada.
I have multiple backups stored in the server repository (Subscriptions --> <domainname> --> Websites & Domains --> Backup Manager).
The physical files are located at: /var/lib/psa/dumps/clients/904279/domains/<domainname>/
When I click the green arrow to download these files to a local computer (see attached image), I get a new page titled "Download the backup file". On this page I have the option to set a password on the downloaded file, but no matter what I do (password or no password), the file is not downloaded to my local PC. I don't get a pop-up box with the option to save the file; nothing happens at all ...
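Not a fix for the panel itself, but since you already know where the dumps live, you can pull them down over SSH while you sort out the download page; a sketch with a placeholder hostname (and <domainname> as in your path above):

    # run on the local PC
    scp -r "root@your.server.example:/var/lib/psa/dumps/clients/904279/domains/<domainname>/" ./plesk-dumps/

It is also worth trying the download from a different browser or with pop-up/download blockers disabled; "nothing happens" on that page is sometimes just the browser swallowing the save prompt.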
Firstly, is there any way to limit the number of cores the Plesk backup compression tool uses? This pigz takes up all my CPU. Is there any way I can reduce the number of cores it uses, because all my websites are down for around 3 minutes every time a backup takes place?
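I'm not aware of a supported Plesk 12 setting for the pigz thread count, so treat this as a workaround sketch rather than the official way: a tiny script run from cron during the backup window that pushes any running pigz processes to idle CPU/IO priority, which keeps the sites responsive even if the backup takes a little longer.

    #!/bin/bash
    # tame-pigz.sh - lower the priority of any running pigz processes (workaround, not a Plesk option)
    for pid in $(pgrep -x pigz); do
        renice -n 19 -p "$pid" > /dev/null    # lowest CPU priority
        ionice -c3 -p "$pid"                  # idle IO class
    done

Cron it every minute for the hours the scheduled backup runs, e.g. * 2-3 * * * /root/tame-pigz.sh.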
Secondly I get the following in my syslog:
1 baby plesk sendmail[20189]: Error during 'check-quota' handler
I don't know what is wrong. I think it started with the upgrade to Plesk 12; I now have 12.0.18 Update #13.
I have a 6 GB backup file created with another Plesk Backup Manager. Now I am trying to upload this backup file to my Plesk Backup Manager, but after uploading 3% I get a "413 Request Entity Too Large" error. I tried disabling NGINX but I still get this error.
How can I resolve this error, or is there any other way to upload my file to Backup Manager?
I also see that Backup Manager has a file size restriction of 2 GB; how can I increase this?
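Two hedged suggestions, since I ran into something similar: the 413 usually comes from the web server that serves the Plesk panel itself (sw-cp-server, which is nginx-based in Plesk 12), not from the domains' nginx, which is why disabling nginx for the sites didn't help; and for a 6 GB file it is often easier to skip the browser upload entirely. A sketch, with the drop-in file name and hostname being my own choices:

    # 1) raise the upload limit for the panel's own web server
    echo 'client_max_body_size 10g;' > /etc/sw-cp-server/conf.d/upload-size.conf
    service sw-cp-server restart

    # 2) or bypass the browser: copy the dump to the server over SSH
    scp yourbackup.tar root@your.server.example:/var/lib/psa/dumps/
    # if Backup Manager doesn't list it afterwards, Plesk's pmm-ras utility can import foreign
    # dumps - check "plesk sbin pmm-ras --help" for the exact options rather than trusting my memory

The 2 GB limit you're seeing looks like the panel-side upload restriction, so going in over SSH sidesteps it entirely.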