Limit User Download Daily
Jun 22, 2008
Is there any Apache module that can limit user downloads daily? E.g. userA can download XX GB per day.
I am using mod_cband but it seems it can't do something like that.
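For what it's worth, mod_cband does appear to support a transfer quota per virtual host with a rolling period, so if each user has their own vhost a daily limit may be expressible after all. A sketch with placeholder names and values (verify the directives against your mod_cband version's docs):

```apache
<VirtualHost *:80>
    ServerName userA.example.com   # placeholder vhost for "userA"
    # Allow at most 10 GB of transfer per 1-day period.
    CBandLimit 10G
    CBandPeriod 1D
    # Optionally throttle (kbps, requests/s, max connections) instead of
    # blocking outright once the limit is exceeded.
    CBandExceededSpeed 1024 10 30
</VirtualHost>
```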
How can I limit outgoing mail hourly or daily (per mail account) on Plesk 11 for Windows? I know I can do that with the MailEnable Enterprise version, but I want to know if there is any other way to do it.
I am using VMware on a CentOS dedicated server.
How can I limit the download speed for each IP?
OR
How can I limit the download speed for each VM?
OR
How can I limit the download speed for all VMs?
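One common approach on the host side is Linux traffic shaping with `tc`: an HTB class per guest IP. A sketch (run as root; `eth0` and the addresses are assumptions, so substitute the bridge or interface the VMs actually use):

```shell
# Root HTB qdisc; unclassified traffic falls into class 1:30.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit

# Cap traffic to guest 192.168.1.10 at 2 Mbit/s.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2mbit ceil 2mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 192.168.1.10/32 flowid 1:10
```

Repeat the class/filter pair per VM, or point all guest IPs at one shared class to cap the VMs collectively.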
What's the best way to limit uploads to and downloads from the server?
I have found this
[url]
about mod_cband.
Is mod_cband the best solution?
This can also be done with mod_bandwidth, enabled by default on WHM/cPanel, as described here:
[url]
We have several sites that serve downloads. How may I limit these sites ... limit bandwidth, cap download speed, limit connections and ... because these sites have very heavy download traffic and ...
My server runs CentOS.
I have a Linux server running CentOS 5.2 with the CSF firewall, and I have a question.
How do I limit download threads (i.e. limit the number of connections while downloading files), e.g. to 4, 8 or 16 connections? My guests use software like Internet Download Manager.
For example, my website is [url] and direct links are: [url]. How do I limit the threads (connections) when a guest downloads using software like Internet Download Manager or FlashGet?
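Download managers open many parallel connections, so one hedged option is a per-source-IP connection cap at the firewall. A sketch using the iptables `connlimit` match (requires that match to be available; and since CSF manages iptables itself, it is usually cleaner to set the equivalent `CONNLIMIT` option in csf.conf instead of adding raw rules):

```shell
# Reject new HTTP connections from any single IP that already has 4 open.
iptables -A INPUT -p tcp --syn --dport 80 \
    -m connlimit --connlimit-above 4 -j REJECT --reject-with tcp-reset
```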
I am setting up a file download server using FTP.
But I only want users to download, not upload (to prevent the possibility of someone uploading a bad script or shell script).
So does anyone know how I can do this?
Is there any security risk with this method?
Currently I am using Apache so users can download, but this method takes too many resources and the server often overloads when too many people are downloading.
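If the FTP daemon happens to be vsftpd (an assumption; ProFTPD and Pure-FTPd have equivalent knobs), a download-only setup is a couple of config lines:

```
# /etc/vsftpd/vsftpd.conf fragment -- read-only FTP sketch
write_enable=NO        # disables uploads, renames and deletes for all users
download_enable=YES    # downloads stay allowed
anonymous_enable=YES   # optional: anonymous read-only access
anon_upload_enable=NO
```

With `write_enable=NO` the daemon refuses STOR/DELE/RNFR commands outright, which closes the "uploaded shell script" risk at the protocol level.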
How can I set a user limit for access to a web site?
E.g., if my site is being accessed by 10,000 users at a time and any more users try to access it, they receive an error or a custom error page.
And how and where can I set these values?
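Stock Apache has no per-user cap, but the total concurrent-connection ceiling is `MaxClients` (prefork MPM). Be aware that Apache queues excess connections in the listen backlog rather than serving an error page, so a true custom "busy" page usually needs a front-end proxy; a sketch of the ceiling itself:

```apache
# httpd.conf fragment (prefork MPM assumed)
<IfModule prefork.c>
    ServerLimit  10000
    MaxClients   10000
</IfModule>
# How many further connections may queue before the OS refuses them.
ListenBacklog 511
```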
I run 10+ sites on my dedicated server. In the last 7 days one of them used over 1.7 TB of bandwidth.
I can't suspend that domain because it's mine.
I can't ban the IPs of those who use a considerable amount of bandwidth because there are a lot of them.
So I am thinking about software/a mod/whatever else that limits bandwidth usage per user (IP) per day. Does anyone have experience with this situation?
I want to limit resources like CPU and RAM, ideally as a per-user setting, on Windows Server 2003.
I've read about Windows System Resource Manager but from what I see it applies to per-process.
Is there a solution (freeware maybe) for this?
How do you limit the number of processes a user can create? I'm running suEXEC mode and I want to limit processes to 5 to prevent abuse and resource drainage.
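Under suEXEC, CGI processes run as the account's user, and Apache's `RLimitNPROC` directive caps how many processes such a user may have (the limit counts per user, not per request). A sketch using the value from the question:

```apache
# httpd.conf: soft and hard process-count limit for CGI run via suEXEC
RLimitNPROC 5 5
```

For the user's shell and cron sessions, the matching knob is a `hard nproc` line in /etc/security/limits.conf, applied by PAM at login.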
I have 2 questions:
1) I want to limit one cPanel user's number of Apache connections.
2) I want to stop one user's site visitors from using download managers.
How do I restrict a user to /home/user (forbid them from leaving it)?
Is it possible (or has anyone done it) to have some script that integrates into Apache and limits CPU load per directory (hence per user)?
E.g. I don't want one high-traffic user taking more than, say, 2% load.
We have a question for everyone and any help would be appreciated: we are looking to limit disk inodes on a per-user basis or server-wide. We would like to know if anyone can point us to how this is accomplished.
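Standard Linux disk quotas cover inode limits as well as block limits. A sketch assuming the filesystem is mounted with the `usrquota` option and the quota tools are installed (`someuser` and /home are placeholders):

```
# setquota arguments: block-soft block-hard inode-soft inode-hard filesystem.
# Zeros leave block limits unset; only inode limits are applied here.
setquota -u someuser 0 0 50000 55000 /home
repquota -a    # verify current usage against the new limits
```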
I ran exim -qff from SSH and got the error below; let me know what to do.
Code:
root@web [~]# exim -qff
sda7: write failed, user block limit reached.
sda7: write failed, user block limit reached.
sda7: write failed, user block limit reached.
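That error typically means a disk quota on the /dev/sda7 filesystem is exhausted for the user Exim writes as (often `mailnull` on cPanel boxes, though that's a guess here). A sketch of diagnosing and raising the limit with the standard quota tools:

```
repquota -a                           # show usage vs. limits for every user
edquota -u mailnull                   # interactively raise the block limits
# or non-interactively: block soft/hard limits in 1K blocks, inodes untouched
setquota -u mailnull 2000000 2200000 0 0 /dev/sda7
```

Once the quota is raised (or the offending mail spool cleaned out), re-run `exim -qff` to flush the queue.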
I thought I knew enough about .htaccess to do this, but I can't seem to work it out. What I want to do is: if a user visits domain.com/folder, check whether the folder exists. If so, show it as normal (i.e. domain.com/support).
If a user visits domain.com/dynamicusername (dynamicusername is not a physical folder), redirect to dynamicusername.domain.com
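A sketch of the rule set, assuming mod_rewrite is available and with domain.com standing in for the real domain; the `-d`/`-f` checks let real folders like /support pass through untouched:

```apache
RewriteEngine On
# Only rewrite when the path is neither an existing directory nor a file.
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
# A single path segment is treated as a username -> username.domain.com
RewriteRule ^([^/]+)/?$ http://$1.domain.com/ [R=301,L]
```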
I want to set up my server (a linux dedicated server) to automatically create daily backups of the pop3, mysql, & webfiles. I want it to go to a server which i have purchased with the exact same specifications.
I am not very good at unix command line/scripting. So what I need is for someone to help me define the backup strategy, select the scripts, and tell me of how to make sure backup server is secure.
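As a starting point for the strategy, here is a minimal nightly sketch: dump MySQL, archive web files and mailboxes, then push everything to the second server over SSH. All paths, the `backup` user and the backup.example.com host are assumptions (cPanel-style guesses); key-based SSH between the two servers is presumed already set up:

```
#!/bin/sh
DATE=$(date +%F)
BACKUP_DIR=/backup/$DATE
mkdir -p "$BACKUP_DIR"

# MySQL: full dump (assumes credentials in /root/.my.cnf)
mysqldump --all-databases | gzip > "$BACKUP_DIR/mysql-$DATE.sql.gz"

# Web files and POP3 mailboxes
tar czf "$BACKUP_DIR/webfiles-$DATE.tar.gz" /home
tar czf "$BACKUP_DIR/mail-$DATE.tar.gz" /var/mail

# Ship to the backup server
rsync -a --delete /backup/ backup@backup.example.com:/backup/
```

Run it from root's crontab once a day. For security, restrict the `backup` account on the remote server to rsync only, and keep the SSH key passphrase-less but limited to that one command.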
I am running a dedicated server.
My apache crashes daily and I am investigating the cause of it.
I have found this strange message in my apache error_log....
Our website receives a daily attack from a French network called Neuf Cegetel. The IP is different each day but the network is always the same. The attack happens daily and lasts several hours.
The website does not use Ajax (the request is an Ajax request) and there is no URL /0_0?_=... But the attacker uses a random URL similar to /0_0?_=1238873869634. Since the URL is always different, the page is never cached, so it is compressed by mod_deflate and the attack is therefore more harmful. The User-Agent and the cookies change quite a lot, but it is always an Ajax request. Given that it is the only Ajax request on the server, that would be the easiest way to stop it. But it seems that when we try to stop the attack, the attacker tries another way, which makes me think the attack is deliberate (not a virus or anything like that).
Since it seems the attacker can be easily identified (we are a Spanish website and the attacker always comes from the same French network), should we report this? If it were a virus on a remote server, the solution might just be to contact the network's abuse department, but if it is deliberate I think we should find out who is behind the attack, since it might be a company that wants to bother us, a competitor or something like that. What do you think?
This is a very small copy of the logs containing a few examples:
Code:
4087 ReqStart c XX.XXX.42.189 52592 517548693
4087 RxRequest c GET
4087 RxURL c /0_0?_=1238873869634
4087 RxProtocol c HTTP/1.1
4087 RxHeader c x-requested-with: XMLHttpRequest
4087 RxHeader c Accept-Language: fr
4087 RxHeader c Referer: http://thewebsite.com/
4087 RxHeader c Accept: application/xml, text/xml, */*
4087 RxHeader c x-requested-handler: ajax
4087 RxHeader c UA-CPU: x86
4087 RxHeader c Accept-Encoding: gzip, deflate
4087 RxHeader c User-Agent: Mozilla/4.0 (compatible; MSIE 7.0;
Windows NT 6.0; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET
CLR 3.5.30729; .NET CLR 3.0.30618; FDM; OfficeLiveConnector.1.3;
OfficeLivePatch.0.0)
4087 RxHeader c Host: thewebsite.com
4087 RxHeader c Connection: Keep-Alive
4087 RxHeader c Cookie:
__utma=9819446.1354119376.1238785835.1238785835.1238865537.2;
__utmz=9819446.1238865537.2.2.utmccn=(organic)|utmcsr=msn|utmctr=thewebsite|utmcmd=organic;
__utmc=9819446; /=
4087 VCL_call c recv lookup
4087 VCL_call c hash hash
4087 VCL_call c miss fetch
4087 Backend c 3052 default default
4087 ObjProtocol c HTTP/1.1
4087 ObjStatus c 404
4087 ObjResponse c Not Found
4087 ObjHeader c Date: Sat, 04 Apr 2009 19:37:47 GMT
4087 ObjHeader c Server: Apache/2.2.3 (CentOS)
4087 ObjHeader c Vary: Accept-Encoding
4087 ObjHeader c Content-Encoding: gzip
4087 ObjHeader c Content-Type: text/html; charset=iso-8859-1
4087 TTL c 517548693 RFC 120 1238873867 0 0 0 0
4087 VCL_call c fetch
4087 TTL c 517548693 VCL 3600 1238873868
4087 VCL_return c deliver
4087 Length c 235
4087 VCL_call c deliver deliver
4087 TxProtocol c HTTP/1.1
4087 TxStatus c 404
4087 TxResponse c Not Found
4087 TxHeader c Server: Apache/2.2.3 (CentOS)
4087 TxHeader c Vary: Accept-Encoding
4087 TxHeader c Content-Encoding: gzip
4087 TxHeader c Content-Type: text/html; charset=iso-8859-1
4087 TxHeader c Content-Length: 235
4087 TxHeader c cache-control: max-age = 300
4087 TxHeader c Date: Sat, 04 Apr 2009 19:37:47 GMT
4087 TxHeader c X-Varnish: 517548693
4087 TxHeader c Via: 1.1 varnish
4087 TxHeader c Connection: keep-alive
4087 TxHeader c age: 0
4087 ReqEnd c 517548693 1238873867.757586718
1238873867.758437872 0.936849117 0.000804424 0.000046730
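Since the log above is varnishlog output, the requests are already passing through Varnish, so one hedged option is to drop the pattern in `vcl_recv` before it ever reaches Apache and mod_deflate. A minimal sketch in Varnish 2.x-era VCL syntax, assuming the `/0_0?_=` prefix stays stable:

```
sub vcl_recv {
    # Reject the attack pattern cheaply at the cache layer.
    # Assumes legitimate traffic never requests /0_0?_=...
    if (req.url ~ "^/0_0\?_=") {
        error 403 "Forbidden";
    }
}
```

This doesn't stop the connections, but it makes each request nearly free to serve, which removes most of the harm while the abuse report is pursued.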
I am a customer of HostV's VPS hosting, and for the past 3 days, at almost exactly 01:20 GMT, the CPU load jumps from an average of about 0.10 to 2.5+, stays there for over an hour, then drops back down.
During this time, there are NO processes on my virtual server using any significant amount of CPU time, memory, or IO. No cron jobs are running on my server, etc.
Note the output from 'uptime' below (I was monitoring it waiting for the problem to occur, which it did at exactly the time I expected):
00:32:05 up 22:01, 2 users, load average: 0.09, 0.11, 0.08
00:32:07 up 22:01, 2 users, load average: 0.08, 0.11, 0.08
01:09:49 up 22:39, 2 users, load average: 0.06, 0.03, 0.00
01:10:03 up 22:39, 2 users, load average: 0.05, 0.03, 0.00
01:19:26 up 22:48, 2 users, load average: 0.46, 0.16, 0.04
01:20:42 up 22:50, 2 users, load average: 1.53, 0.55, 0.18
01:21:39 up 22:51, 2 users, load average: 1.40, 0.67, 0.24
01:46:04 up 23:15, 2 users, load average: 3.06, 2.02, 1.52
Also note output from 'top', taken when load average was at 3.06 shown on the last line above:
Cpu(s): 0.1% us, 0.0% sy, 0.0% ni, 91.0% id, 9.0% wa, 0.0% hi, 0.0% si
My cpu usage is very low (0.1%) but wait time is at 9.0%, and I've seen this go as high as 70% during these times.
So, basically, there is a problem that exists on the host node somewhere that is causing my site to become effectively unresponsive (page load 20 seconds+ - measured), and it happens every single day at the same time.
So, why am I posting it here instead of logging a trouble ticket? I have logged a trouble ticket, but when I encountered the problem yesterday, despite logging it as "CRITICAL", I had to wait nearly 5 hours for a response, which effectively said not much beyond "we noticed the problem and fixed it and we're monitoring it". So I don't have a lot of faith that today's response will be any better.
I moved to HostV because of similar problems I was encountering with shared hosting, and was assured before signing up that the kind of problem I'm seeing doesn't happen. So now I'm outlaying more than 10 times the cost for almost exactly the same problems and a similarly unhelpful response to it.
By publicly posting the problem, I would hope that someone at HostV will ensure the problem is addressed PROPERLY, rather than bandaided again, and that hopefully we will all be able to see just how good HostV's support CAN be (as evidenced in another similar post).
I await HostV/Cirtex's response.
As shown in the uptime information above, server uptime is 23:15, because I rebooted the virtual server yesterday to see if that helped. It didn't. In fact, it took over 20 minutes for the server to come back up, which is why I'm not going to do it again.
Our VPS is being hit several times a day with hacking attempts. We have been actively monitoring error logs and can see the failed attempts. I was just wondering if there is a better way to track such attempts, or another system log that would provide additional info on these attacks? Or maybe some 3rd party logging scripts?
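For a quick picture of scale, the failed attempts can be summarized straight from the auth log (tools like fail2ban or CSF/LFD can then act on the same data automatically). A sketch:

```shell
# Summarize failed SSH login attempts by source IP from the auth log.
# CentOS logs to /var/log/secure; Debian/Ubuntu use /var/log/auth.log.
LOG=${LOG:-/var/log/secure}
grep "Failed password" "$LOG" 2>/dev/null \
  | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn | head -20
```

The output is a count per attacking IP, which makes it easy to see whether it's one persistent source or a distributed sweep.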
I just had a quick question about backup solutions. What advantage would I have by setting up 2 HDs in a RAID-1 array as opposed to just doing daily automated backups to one of the drives?
The way I see it, if I have automated backups, HD use for that backup drive is limited to say 20 minutes a day. In a RAID-1 array however, both drives are used at the same rate. Wouldn't this provide better life expectancy for the backup drive, granted it is at the expense of having a guaranteed instant replacement for that original drive?
Reason I'm asking is that I'm setting up a Mac Mini for a friend as a web server and he would like to have data backups. The only way to add space is to install an external hard drive, so my options are a bit limited.
I want to run /scripts/cpbackup every day beginning at 12:00 AM, so I put this line in the /tmp/crontab.XXXXwuxGUI file:
0 0 * * * /scripts/cpbackup
but the backup didn't run.
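A likely cause: /tmp/crontab.XXXX files are just the scratch copies that `crontab -e` edits; writing to one directly does not install the job. A sketch of installing the entry into root's crontab properly:

```
# Append the cpbackup job to root's crontab (run as root).
# "crontab -l" exits non-zero when no crontab exists yet, hence "|| true".
( crontab -l 2>/dev/null || true; echo "0 0 * * * /scripts/cpbackup" ) | crontab -
crontab -l | grep cpbackup    # verify the entry was installed
```

Alternatively, run `crontab -e` as root and add the line there, which performs the same installation step.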
I would like to create an exact copy of my live drive on a daily basis via cron. Is there a good mechanism for doing this *without* taking the main drive offline? It seems like the two common backup solutions, dd and rsync, both have issues here. I don't think rsync can create an exact mirror (including partitions), and with dd it looks like you need to unmount the drive(s) first.
Both drives are of identical size and installed via the ide controller.
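One common compromise: rsync the data while the drive stays mounted, plus a separate dump of the partition table so the disk layout can be re-created. A sketch assuming the backup drive is mounted at /mnt/backup and the live disk is /dev/hda (IDE, per the setup described):

```
# Mirror the live filesystem in place; -x stays on one filesystem and the
# excludes skip pseudo-filesystems and the backup mount itself.
rsync -aHx --delete \
  --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/tmp --exclude=/mnt \
  / /mnt/backup/

# Save the partition layout; restore later with "sfdisk /dev/hdb < ...".
sfdisk -d /dev/hda > /mnt/backup/partition-table.sfdisk
```

This is not a block-for-block clone (files changing mid-run can be inconsistent, databases especially), but it gives a bootable-quality copy without unmounting anything.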
I recently got a dedi from Hivelocity, and they installed CSF/LFD. On my previous hosts, I didn't have this, just cPHulk. With this dedi, I'm receiving nearly a dozen daily emails from LFD with IPs that have been blocked for multiple failed logins, mostly with username root, but also sales, staff, admin, system, etc., and a few for port scanning.
Is this normal? I've already disabled direct root login via SSH, and I'm not really worried about anyone actually managing to gain access, I'm just curious about the high number of attempts. On previous hosts, where I actually had active sites and forums, with links posted on other forums that are indexed and nicely ranked by Google, I rarely received any emails from cPBrute at all.
I have my WHM/cPanel installation configured with daily and weekly backups. I checked at what time of the day the server load was at the minimum and configured the cPanel backup cron to run then.
The problem now is: Backing up a few hundred accounts results in a high server load. My server configuration:
Dual Processor Quad Core Xeon 5335 2.0GHz with 4GB RAM and 2 x 250GB SATA HDD hosted at SoftLayer.
The accounts are located on the first HDD and the backup archives are placed on the second HDD.
What can I do about this? I'd like to take daily backups of all accounts but not if my server load increases up to 10... That kind of renders the cPanel backup feature useless if it doesn't even work on a powerful server like this one...
Would it help if I use an application such as Auto Nice Daemon to give the backup process a lower priority? But then again that won't work on the MySQL dumps? And I think it's not a CPU problem but an I/O wait problem? Other processes have to wait for disk access because the disk-intensive backup process is running?
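The nice/ionice route is worth trying before anything heavier: wrap the backup cron command so both its CPU and disk I/O priority drop. Caveats, as suspected above: `ionice -c3` (idle class) only works with the CFQ I/O scheduler, and it cannot throttle work done inside mysqld itself during the dumps. A sketch:

```
# Run cpbackup with lowest CPU priority and "idle" I/O class.
# Fallback if CFQ/idle is unavailable: ionice -c2 -n7 (lowest best-effort).
nice -n 19 ionice -c3 /scripts/cpbackup
```

Since the symptom is I/O wait rather than CPU, the ionice half is the part most likely to help; the accounts being on one disk and the archives on the other already avoids the worst seek contention.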
How can I set up daily Exim statistics?
From WHM, it shows Exim statistics for about one month.
Is there any way to get daily Exim statistics?
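One approach, assuming the stock `eximstats` Perl script that ships with Exim is available: feed it only the previous day's entries from the main log in a daily cron job and mail the report. Paths are cPanel-style guesses, and the address is a placeholder:

```
# Daily Exim report: yesterday's main-log entries summarized and mailed.
yesterday=$(date -d yesterday +%Y-%m-%d)
grep "^$yesterday" /var/log/exim_mainlog | eximstats -nr \
    > /tmp/eximstats-$yesterday.txt
mail -s "Exim stats $yesterday" admin@example.com < /tmp/eximstats-$yesterday.txt
```

This works because Exim's main log prefixes every line with the date, so a simple grep isolates one day's traffic.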