I am interested in setting up user activity limits to avoid peaks in server load. I have read a lot about PAM and limits.conf but still have no idea how to set these limits. Most of the examples are similar to this page http://www.seifried.org/lasg/users/ but they are still confusing to me.
I would like to set up rules like this:
Customers may not use more than 2% CPU daily or 3% memory daily, run more than 10 simultaneous processes per user, let any process run for longer than 30 CPU seconds, run any process that consumes more than 20% of available CPU at any time, or run any process that consumes more than 16 MB of memory.
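For what it's worth, only some of those rules map onto limits.conf; the rest (daily budgets, CPU-share percentages) need other tooling. A partial sketch, with "customer1" as a placeholder username:

```text
# /etc/security/limits.conf - partial sketch; "customer1" is a placeholder.
# limits.conf has no concept of daily CPU/memory budgets or CPU-share
# percentages; it only caps per-process and per-login resources.
customer1  hard  nproc  10       # max 10 simultaneous processes
customer1  hard  cpu    1        # CPU time per process, in MINUTES (no 30-second granularity)
customer1  hard  as     16384    # address space per process, in KB (~16 MB)
```

Note the `cpu` item counts whole minutes, so a 30-second cap cannot be expressed here.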
I may have this wrong, but I think it's possible. I have a friend who wants to run a process on one of my servers. I don't particularly care about this server; it's just used for a couple of unimportant things, so I'm okay with him running it. But I don't want the hassle of sorting things out if he decides to delete everything, so I'm hoping it's possible to limit him to his directory.
For example, I have the folder "people" in the topmost directory, and inside that I have "arthur". I want to limit the SSH user "arthur" to the folder "arthur". I don't want him to be able to cd ../../ and delete stuff. Is this possible?
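If SFTP access is enough for him, one common approach is OpenSSH's built-in chroot (needs OpenSSH 4.9 or later; a sketch, not tested on your box):

```text
# /etc/ssh/sshd_config - chroot members of group "sftponly" to /people/<user>.
# The chroot directory itself must be root-owned and not group-writable.
Match Group sftponly
    ChrootDirectory /people/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

This gives him SFTP only, no interactive shell; if he needs a real shell, you are into chroot-shell territory, which is much more work.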
I've been researching this but can't seem to find a decent solution.
Basically I'm trying to limit the CPU usage of a single domain, or limit accesses per hour if that's possible.
I tried adding an entry in /etc/security/limits.conf restricting nproc to 2 and then down to 1, but it doesn't seem to make any impact on the load. The server's load drops from about 1.5-2 down to 0.02 when this single site is suspended, and I can't have it running wild.
Ideally I'd like a message along the lines of "server is too busy, trying again in 5 seconds" or something similar.
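One stopgap, assuming you can identify the offending processes, is the cpulimit utility (a separate package; the PID and process name below are placeholders):

```shell
# throttle an already-running process (placeholder PID) to 20% of one core
cpulimit -p 1234 -l 20
# or match by executable name instead of PID
cpulimit -e php-cgi -l 20
```

This throttles rather than kills, so the site slows down instead of taking the whole box with it; the "too busy" error page would still have to come from the web server itself.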
I thought I knew enough about .htaccess to do this, but I can't seem to work it out. What I want to do: if a user visits domain.com/folder, check whether the folder exists. If so, show it as normal (i.e. domain.com/support).
If a user visits domain.com/dynamicusername (where dynamicusername is not a physical folder), redirect to dynamicusername.domain.com.
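Assuming Apache with mod_rewrite enabled, a sketch of the .htaccess (untested, adjust to taste):

```text
RewriteEngine On
# real directory (or file): serve as normal, e.g. domain.com/support
RewriteCond %{REQUEST_FILENAME} -d [OR]
RewriteCond %{REQUEST_FILENAME} -f
RewriteRule ^ - [L]
# anything else: treat the first path segment as a username and redirect
RewriteRule ^([^/]+)/?$ http://$1.domain.com/ [R=302,L]
```

The first rule short-circuits for anything that physically exists, so only non-existent paths reach the redirect.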
I have a VPS at VPSVille, which I use for tunneling out onto the internet from behind a restrictive firewall. Unfortunately, my VPS keeps getting disabled because I exceed the "20 GB of traffic per month" allowed to any VPS for tunneling, even though I have 300 GB of bandwidth.
I'm now looking for another cheap VPS host ($20/mo or less) that will let me use all the bandwidth I paid for.
I am just playing around with a server of mine, and I wanted to know: is there any way to limit the number of inodes (files and folders) a user can have? I am using cPanel/WHM. Or is there a way I can receive an alert once a user has reached a specified limit?
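Even without control-panel support, a cron job can handle the alerting side; a rough sketch (the path and threshold are placeholders):

```shell
# count inodes (files + directories) under a directory; every line find
# prints is one inode, including the directory itself
count_inodes() {
  find "$1" 2>/dev/null | wc -l
}

# placeholder threshold: warn when a user passes 50000 inodes
USER_HOME=/home/someuser
LIMIT=50000
if [ "$(count_inodes "$USER_HOME")" -gt "$LIMIT" ]; then
  echo "inode limit exceeded for $USER_HOME"   # swap in a mail command here
fi
```

Run it from cron over each account's home directory and mail yourself the output.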
I've recently hired a new server with the aim of offering FTP backups.
I have the Plesk CP, but I don't want to use it to create the accounts for these users; I want to keep the Plesk license for webhosting customers.
My box is running CentOS with ProFTPd.
The problem I'm having is locking down SSH access for the backup customers. If I add an account using "useradd", it is enabled in ProFTPd as I want, and I have set ProFTPd to only allow access to their home dir. However, the users can still log in via SSH and have full access to the box. Ideally, I want to be able to remove SSH access for these users, or if that's not easily possible, not allow them to "cd" above their home dir.
I know it's possible to add their usernames to the sshd config file under DenyUsers, but is there a more elegant solution (bearing in mind I'm planning to have quite a few of these users)?
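The usual trick is to give those accounts a shell that cannot log in; a sketch (usernames and paths are placeholders):

```shell
# create a backup-only account with no usable login shell
useradd -m -d /home/backups/client1 -s /sbin/nologin client1
# or retrofit an existing account
usermod -s /sbin/nologin client1
```

One caveat: by default ProFTPd rejects users whose shell is not listed in /etc/shells, so either add /sbin/nologin to /etc/shells or set "RequireValidShell off" in proftpd.conf.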
I'm after bandwidth limiting on my home network for some specific PCs, as they are taking up too much download bandwidth. I'm leaning towards Windows, but I am still unsure. Would anyone be able to guide me or give me a few tips on what type of program will do this, which operating system, and maybe some tutorials if anyone knows any good ones?
On a cPanel + RHEL 5.3 box, at WHM -> Tweak Settings I activated "The maximum each domain can send out per hour (0 is unlimited)" and set the value to "300".
But it seems that this limit only applies if the user is sending through webmail or an email client. Right now a Joomla website is sending far more than 300 mails per hour, but it's using PHP to send them.
My question: how can I limit emails per hour for each domain when the mail is sent from PHP?
We have Linux and Windows shared hosting on our dedicated servers using Plesk, and we are interested in putting connection limits and bandwidth limiting in place, but we're not sure what to use. We don't want to make them too restrictive, but we don't want it to be free rein either.
I was wondering if anyone else is doing this, and if so, what their levels are. Additionally, does anyone use anything for IIS 6 on Server 2003 that supports bandwidth throttling of dynamic content? I know there are options for IIS 7 on 2008, but that doesn't help much.
Is it possible to set up FTP accounts where users can view files on the server and upload their own files, but not download anything from the server? If so, how would this be done? If not, then as a second option, do most hosting companies provide the option to disallow downloads on an FTP account while still allowing uploads?
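In ProFTPd at least, this is doable per directory with a `<Limit>` block; a sketch for proftpd.conf (the path is a placeholder):

```text
<Directory /home/uploads>
  # READ covers the download commands (RETR); directory listing (DIRS)
  # and uploads (STOR) remain allowed by default
  <Limit READ>
    DenyAll
  </Limit>
</Directory>
```

Other servers (Pure-FTPd, vsftpd) have their own equivalents, so it is worth asking the host which daemon they run.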
I've been searching a while for a box that I can put in front of my servers that will basically look at inbound connections, and if there are more than X connections from a given IP, block it for X minutes.
I thought a standard load balancer would do it, but according to Barracuda, their load balancers don't. They do DDoS prevention, but that's based on packet inspection. What I'm looking for is something that will block legitimate traffic when there's just been too much of it in a short amount of time. Basically I've got people using autosurfing programs and refreshers on my site, and I'm getting tired of analyzing log files to find them.
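Before buying an appliance, it may be worth trying iptables on the servers themselves; a sketch with placeholder numbers (needs root and the connlimit/recent modules):

```shell
# drop new HTTP connections from any IP that already holds more than 20
iptables -A INPUT -p tcp --dport 80 -m connlimit --connlimit-above 20 -j DROP
# track new connections per source IP, and drop sources that open
# more than 30 new connections within 60 seconds
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 30 -j DROP
```

The `recent` match automatically forgets a source once it stays under the rate, which is roughly the "block for X minutes" behaviour you describe.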
I have got two low-budget dedicated servers from PoundHost. I would like to find a way to limit their total monthly bandwidth to 5 TB so that I don't go past this limit, even if a server has to sit with traffic blocked for a day or so. One server is CentOS 5 and the other is Windows Server.
How can we limit the maximum number of emails that can be sent by a domain in Plesk? We are facing issues where our server IP is getting blocked by some email providers for bulk mailing.
I've been unable to limit the size of a process to keep it from hosing a system. I've tried the following methods:
- RLimitMEM for just Apache (although I'd like this to apply to any process on the system)
- ulimit
- PAM limits.conf (/etc/security/limits.conf)
In theory, either ulimit or limits.conf should do the trick, but when I start Apache and run a test script that allocates memory, it doesn't get killed off. Is there any way to do what I want? I'll even take kernel modification as an option. We're running CentOS 4.4, but I don't have a problem with swapping in another kernel.
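For the record, ulimit does work when applied in the right place: limits are inherited, not global, so it has to be set in the shell that actually starts httpd (e.g. in the init script), not in your login shell. A sketch:

```shell
# run a command under a virtual-memory cap (in KB); the limit is inherited
# by every child process, so allocations beyond it fail instead of
# hosing the whole box
run_capped() {
  kb=$1; shift
  ( ulimit -v "$kb"; "$@" )
}

# e.g. cap at ~200 MB; in an init script you would put "ulimit -v 204800"
# just before the line that execs httpd
run_capped 204800 sh -c 'ulimit -v'   # prints 204800
```

RLimitMEM, by contrast, only applies to processes Apache forks for CGI, which is why it never caught your test script.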
I have a VPS with 256 MB of RAM. I want to limit MySQL's memory usage; say, MySQL should not use more than 128 MB of RAM. Is this possible? If yes, how?
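MySQL has no single "use at most 128 MB" switch; instead you cap the individual buffers and the connection count. Illustrative values only, to be tuned to your workload:

```text
# /etc/my.cnf - placeholder values for a small VPS
[mysqld]
key_buffer_size         = 16M
innodb_buffer_pool_size = 64M
max_connections         = 30
sort_buffer_size        = 1M
read_buffer_size        = 1M
tmp_table_size          = 16M
max_heap_table_size     = 16M
```

Rough budget: total is approximately the global buffers plus max_connections times the per-thread buffers, so the per-thread values matter more than they look.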
I've seen a lot of requests for a simple howto dealing with bandwidth limiting/"capping" on Linux. I put together a howto yesterday on this which I hope you'll find useful.
I've recently optimized the scripts used for bandwidth management in one of our UK facilities and I thought I'd post a quick howto on it.
The full script can be found directly here: http://www.adamsinfo.com/bandwith-li...oute2/#more-15
My setup here is a live feed entering eth0 on this Linux router and leaving eth1 into a switch connected to a collection of other servers. This is set up as an unrestricted public router, routing between a /30 on eth0 and a /24 on eth1. Note: we can't in any way restrict the amount of traffic that eth0 receives from the outside, so instead we restrict how fast eth0 sends data out; the same applies the other way round. So, if we want to limit the amount of data that the local servers can send, we shape the router's external interface (eth0). If we want to limit the amount of data that the local servers can receive, we shape the router's internal interface (eth1).
With Debian Etch on a 2.6.x kernel, run: apt-get install iproute bc (tc ships as part of the iproute package).
Then script as follows:

#!/bin/bash
# Set some variables
EXT_IFACE="eth0"
INT_IFACE="eth1"
TC="tc"
UNITS="kbit"
LINE="10000"   # maximum ext link speed
LIMIT="5000"   # maximum that we'll allow

# Set some variables for individual "classes" that we'll use to shape internal upload speed, i.e. shaping eth0
CLS1_RATE="200"    # High Priority traffic class has 200kbit
CLS2_RATE="300"    # Medium Priority class has 300kbit
CLS3_RATE="4500"   # Bulk class has 4500kbit
# (We'll set which ones can borrow from which later)

# Set some variables for individual "classes" that we'll use to shape internal download speed, i.e. shaping eth1
INT_CLS1_RATE="1000"   # Priority
INT_CLS2_RATE="4000"   # Bulk
[...] A few hundred lines [...]
I have tried not to get bogged down with too many irrelevant details here and would be happy to answer any questions or take any corrections. It's pretty simple and it works well. Install bmon and you can confirm this yourself. The purpose of this is that I can take a 10mbit connection and limit the traffic to 5mbit, ensuring that I don't break the 95th percentile that I want to maintain at the datacenter. I can increase and decrease this at any time as traffic requires or permits respectively.
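For anyone who does not want to read the full script, a minimal sketch of the same idea (this is not the production script; DRY_RUN prints the tc commands instead of executing them, since tc needs root):

```shell
#!/bin/bash
# minimal HTB shaping sketch: one 5000kbit ceiling on eth0, with a
# priority class and a bulk class that can both borrow up to the ceiling
TC=tc
EXT_IFACE=eth0
LIMIT=5000               # kbit, the ceiling we'll allow
DRY_RUN=${DRY_RUN:-1}    # 1 = print commands only

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run $TC qdisc add dev $EXT_IFACE root handle 1: htb default 20
run $TC class add dev $EXT_IFACE parent 1: classid 1:1 htb rate ${LIMIT}kbit ceil ${LIMIT}kbit
run $TC class add dev $EXT_IFACE parent 1:1 classid 1:10 htb rate 1000kbit ceil ${LIMIT}kbit prio 0
run $TC class add dev $EXT_IFACE parent 1:1 classid 1:20 htb rate 4000kbit ceil ${LIMIT}kbit prio 1
```

Set DRY_RUN=0 and run as root to apply; filters to steer traffic into the classes would follow the same pattern.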
What limiting factors should clients look into most when they shop for a type of hosting, starting with shared? When these factors are important, a client would at least go for a VPS.
I listed some of the ones I know (for a shared/reseller hosting type):
- The fact that you can only host websites (no backup or specialty hosting)
- No control over what's installed on the server
- Performance can range from slow to fast, depending on which sites you're hosted with
- Your account can be suspended in case of a traffic surge
I currently use Debian and have multiple (5+) IP addresses, each assigned for a different purpose. My issue is that I need the ability to accurately monitor the traffic being generated (and potentially graph it via MRTG). Additionally, I want the ability to enforce priorities and restrictions per IP address.
While on the subject, I also have an OpenVPN setup, and I want the ability to rate-limit each user.
I've spent too much time trying to figure this out, so now I'm asking for some help! More than happy to consider offers ($) for someone to do it for me...
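For the monitoring half, plain iptables rules with no target act as per-IP counters that an MRTG script can poll; the addresses below are placeholders:

```shell
# a rule with no -j target matches and counts but makes no decision
iptables -A INPUT  -d 192.0.2.10
iptables -A OUTPUT -s 192.0.2.10
# read the per-rule packet/byte counters (e.g. from an MRTG script)
iptables -L -v -n -x
```

For the priority/restriction half you would pair this with tc (HTB classes selected per source IP with u32 filters), which also covers rate-limiting individual OpenVPN clients by their tunnel addresses.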
I have yet another new server, but there is a client who is now constantly bursting to a ~20 Mbit/s monthly average. Is there a way to limit his ability to burst like this? The bandwidth overages are coming in, and both of us would rather skip the overage charges and just keep him in range.
The server is equipped with cPanel, but I doubt this can be controlled using that utility.
There are several big domains that frequently defer accepting mail from us, causing long delays or rejections; Google, AOL, and Yahoo are examples. I'm considering trying the suggestions found in an online posting about rate-limiting the sending of messages to those domains. In the URL below, please see the section titled "Different policies for different domains"...URL....
Would these changes be safe to make on a CentOS 6.4 server running Plesk 11.0.9 with Postfix 2.8.4? Would any special modifications for Plesk be necessary?
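The mechanics of that approach boil down to a dedicated Postfix transport per slow destination. The parameter names below are standard Postfix ones (available in 2.8.x), but the values are placeholders, and Plesk may rewrite main.cf/master.cf when it reconfigures mail, so keep backups:

```text
# master.cf: a dedicated smtp transport for throttled destinations
slow  unix  -  -  n  -  2  smtp

# main.cf: route the big providers through it and pace deliveries
transport_maps = hash:/etc/postfix/transport
slow_destination_concurrency_limit = 2
slow_destination_rate_delay = 1s

# /etc/postfix/transport (run "postmap /etc/postfix/transport" after editing)
gmail.com  slow:
yahoo.com  slow:
aol.com    slow:
```

Note that any nonzero rate_delay effectively forces one message per delivery, so expect the queue for those domains to drain slowly.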
How can I limit the amount of data transferred for a single client in a shared hosting scenario using Windows 2003/2008, without using an off-the-shelf control panel?
Within IIS you can limit the number of connections and throttle the transfer rate, but I don't see how to limit the amount of data transferred.
Are the control panels monitoring the log files and totaling the amount of data transferred, or is there another way to implement data-transfer limits?