I'm facing a very critical issue on my server, and I think it's some kind of DDoS attack!
The server runs normally, and then I notice the load climbing until it reaches about 400, at which point all the services go down!
The cause of this issue was Apache...
I noticed that a normal slot ("Total megabytes transferred this slot" in the Apache status output) is in the range of 0.1 to 0.5 MB at most, but during the abnormal load spikes each slot was in the range of 150 to 200!
My conclusion is that someone is sending very large requests to the server...
Is there a way to limit this slot so it can't grow like that?
Server Version: Apache/2.2.9 (Unix) mod_ssl/2.2.9 OpenSSL/0.9.8b mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.6
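If the traffic really is oversized request bodies, Apache can cap them directly. A minimal httpd.conf sketch (the 10 MB value is just an example; note that mod_reqtimeout only exists in Apache 2.2.15 and later, so the second part would require upgrading from 2.2.9):

```apache
# Reject request bodies larger than ~10 MB (value in bytes; adjust to
# the largest legitimate upload your sites need)
LimitRequestBody 10485760

# On Apache 2.2.15+, also drop clients that trickle headers/body
# slowly to hold slots open
<IfModule mod_reqtimeout.c>
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```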
I have a strange issue on a Plesk 12 VPS. Sometimes the sites return a "502 Bad Gateway (nginx)" error. This happens once or twice a day, at different times.
In the httpd log I see the record "can't apply process slot", and in the nginx log I see "connect() failed (111: Connection refused) while connecting to upstream", but restarting Apache and/or nginx does not always bring the site back.
When I restart iptables, everything works fine again.
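When restarting iptables (which flushes connection tracking) clears the problem, a common culprit is a full conntrack table: new connections get refused exactly as nginx reports. A hedged diagnostic sketch (the paths exist only when the nf_conntrack module is loaded; on older kernels the sysctl is named ip_conntrack_max instead):

```shell
# Compare currently tracked connections against the kernel's ceiling.
# If count sits near max, raise nf_conntrack_max (and persist it in
# /etc/sysctl.conf) instead of restarting iptables.
if [ -r /proc/sys/net/netfilter/nf_conntrack_count ]; then
    count=$(cat /proc/sys/net/netfilter/nf_conntrack_count)
    max=$(cat /proc/sys/net/netfilter/nf_conntrack_max)
    status="conntrack: $count / $max"
else
    status="nf_conntrack not loaded (or not readable here)"
fi
echo "$status"
```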
I'm running the latest version of Apache on my box, which has 15 IPs. Every IP currently serves the site hosted on Apache; is there a way to limit it to one? Say, for example, my box has the IPs 1.1.1.1 - 1.1.1.15.
How do I select one of those IPs to serve the site and have the rest not go anywhere?
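Apache serves whatever address it listens on, so the usual fix is to stop listening on the wildcard and bind to a single address. A minimal httpd.conf sketch (1.1.1.1, example.com, and the DocumentRoot are placeholders):

```apache
# Replace any existing "Listen 80" / "Listen 0.0.0.0:80" with one
# address from your range; the other 14 IPs will then refuse port 80
Listen 1.1.1.1:80

<VirtualHost 1.1.1.1:80>
    ServerName example.com
    DocumentRoot /var/www/html
</VirtualHost>
```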
I have a server running Apache 2 with PHP 5 as an Apache module. There are two PHP scripts that get about 500k hits a day. These scripts have to parse data out of a remote webpage and display it on an image. They used to push the load up to 40-50, but I have added a cache which only updates every 4 hours. This helped a lot, but the load still hits around 10 when the cache updates, and it slows down the server. Memory usage is fine. The server is an AMD Athlon 64 2800+ with 1 GB of RAM and an 80 GB SATA hard drive.
Here's `top` when the cache had just been cleared.
Quote:
```
  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
28004 named    16   0  121m  11m 3796 S 12.3  1.3   0:00.96 apache2
28003 named    15   0  121m  11m 3740 S 12.0  1.3   0:01.00 apache2
 5316 tarball  15   0 43032  29m 2320 S 11.3  3.1 326:08.68 ircd
27998 named    16   0  121m  11m 3808 S 11.3  1.3   0:00.51 apache2
27989 named    15   0  121m  11m 3800 S 10.3  1.3   0:01.14 apache2
28007 named    16   0  121m  11m 3776 R  8.0  1.2   0:00.24 apache2
28008 named    15   0  121m  11m 3776 S  7.0  1.2   0:00.22 apache2
27979 named    16   0  121m  11m 3752 R  6.0  1.3   0:02.06 apache2
27983 named    16   0  121m  11m 3748 R  6.0  1.3   0:01.94 apache2
27985 named    15   0  121m  11m 3748 S  6.0  1.3   0:01.05 apache2
27992 named    16   0  121m  11m 3792 S  5.0  1.3   0:00.33 apache2
27980 named    15   0  121m  11m 3796 R  2.3  1.3   0:03.24 apache2
28009 named    15   0  121m  11m 3796 S  1.7  1.3   0:00.82 apache2
27715 root     15   0  5192 1164  844 R  0.3  0.1   0:00.94 top
27960 named    15   0  121m  11m 3808 S  0.3  1.3   0:01.42 apache2
27984 named    15   0  121m  11m 3804 S  0.3  1.3   0:01.94 apache2
27987 named    15   0  121m  11m 3796 S  0.3  1.3   0:01.04 apache2
28006 named    15   0  121m  10m 3292 S  0.3  1.2   0:00.50 apache2
```
Idle CPU usually ranges anywhere from 30% down to 0%. Is there any way to keep Apache from using more than 75% of the CPU, or any other way to reduce CPU usage?
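There is no Apache directive that hard-caps CPU, but two OS-level approaches can help: the third-party cpulimit tool (e.g. `cpulimit -p <pid> -l 75`, assuming it is installed) or simply lowering Apache's scheduling priority so other work wins CPU contention (this does not hard-cap usage, just deprioritizes it). A sketch of the latter, where renicing the current shell stands in for renicing the apache2 worker PIDs:

```shell
# Raising a process's nice value is always allowed for your own
# processes; here we renice the current shell as a stand-in for
# the apache2 worker PIDs you would target in practice.
renice -n 10 -p $$ >/dev/null 2>&1 && reniced=yes || reniced=no
echo "renice applied: $reniced"
```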
I have a CentOS dedicated server with cPanel. When I go to Apache Status, I see a huge number of requests, and two sites are very high in the list. Apache is working very hard and goes down every few hours. How can I control this?
I am in a bind with Apache's multi-process limit. Let me explain what I am doing. There's a website which has career details of all the football players since the beginning of professional football. It has a simple web form which lets you look at a player's profile by entering his name or his 7-digit numeric ID number (on that website).
One of my clients wants a list of all the players with a certain "flag" in their profile. So I created an automatic form-submission and HTML-parsing script to get the details of every player with that flag. Without going into too much detail: after applying a few pattern rules to the ID number, the number of possible IDs comes to about 1 million, down from the full 10^7 (each of the seven digits can be 0-9, so the unrestricted space is 10*10*10*10*10*10*10 = 10,000,000 combinations).
Therefore, to completely automate this process, I wrote a script which generates an ID number, submits the form with that ID, and parses the resulting HTML profile for the flag. If the script finds a hit on the flag, it stores all the fields of that player in a database. The script works absolutely fine, but the speed I was getting was about one check per second, which means I would have to leave it running for about 11 days to process all of the roughly 1 million checks.
So I came up with the idea of dividing the work into ten parts, with a separate script for each part. The first script checks the first 100 thousand combinations, the second checks the next 100 thousand, and so on.
The problem is that I can only get two of these scripts running at the same time, so it would still take me at least 5 days to get all the results. The rest of the scripts just sit in the server's backlog. This is definitely due to Apache's limit on handling multiple processes. Both the server running my scripts and the target webserver run Apache 2. I am sure it's not a problem with the receiving server; it has to be my Apache webserver, the one running the scripts. I have tried mpm_winnt (on a Windows server) as well as the prefork and worker MPMs (on a Linux server) without any luck. Has any of you ever faced the same situation?
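For what it's worth, requests that arrive through Apache are subject to the MPM's process limits and per-client behavior; long-running batch jobs are usually better launched from the command line, where nothing stops ten processes from running at once. A hypothetical sketch of that pattern (the `echo` body is a placeholder for something like `php check_part_$i.php`, which is an assumed script name):

```shell
# Launch the ten part-scripts in parallel from the shell (cron/CLI)
# instead of through Apache, then wait for all of them to finish.
workdir=$(mktemp -d)
for i in 0 1 2 3 4 5 6 7 8 9; do
    # placeholder worker; substitute e.g.: php check_part_$i.php
    ( echo "part $i done" ) > "$workdir/part_$i.log" 2>&1 &
done
wait    # block until all ten background jobs complete
parts=$(ls "$workdir"/part_*.log | wc -l)
echo "finished $parts parts"
```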
For those concerned about the legitimacy of this work, rest assured, it is absolutely legit. There's nothing in the website's usage policy which restricts somebody from doing this. Moreover, my client hired me to do this only because the website owners were not able to hand over the data he required. They gave the stupid reason that they are helpless to provide the data because they don't have a system in place that would allow such a restricted search!
I've been having trouble the past few days with someone who's been "attacking" my site, so to speak, by continuously downloading very large files with as many connections as he can open. I operate a large downloads site for computer games, and this person has selected the largest files (400-500 MB). I'm not sure of the real intent, other than to clog up my bandwidth capacity. He also appears to be using proxies, since as soon as I ban one, another shows up, seemingly from China.
Anyway, I have mod_bw and I've limited the number of connections in the downloads area to 2. While that works OK, his tool uses threads like a download manager would, and he's using 30-40 child threads for his 2 file downloads.
So, two questions:
Is there any way to not only limit file downloads to 2, but also limit the number of connections per request? Many of my visitors do use download managers, and I'd like them to keep using them, but with a reasonable number of threads like 6 or 8, not 30.
Also, is there a way to restrict access to someone using a proxy?
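For per-IP connection caps inside Apache itself, the third-party mod_limitipconn module is one option; it is not bundled with Apache, so this sketch assumes you can build and load it (a firewall-level alternative is the iptables connlimit match):

```apache
# Hypothetical sketch using the third-party mod_limitipconn module:
# allow at most 8 simultaneous connections per client IP to /downloads
<IfModule mod_limitipconn.c>
    <Location /downloads>
        MaxConnPerIP 8
    </Location>
</IfModule>
```

As for the proxy question: detecting proxies reliably is much harder. Some setups check request headers like X-Forwarded-For or Via, but an anonymous proxy need not send either, so header checks only catch the polite ones.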
How do I increase the open-files descriptor limit in Apache? In earlier versions of cPanel we had the option "Raise FD Size Limit to 16384", but the option no longer appears while rebuilding Apache. What is the way to do it and make the change permanent?
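Outside of cPanel, this limit comes from the OS. A sketch for inspecting and raising the soft limit in the shell that launches Apache; to make it permanent, the usual places are a `ulimit -n` line in Apache's init/startup script or a `nofile` entry in /etc/security/limits.conf (the 16384 value simply mirrors the old cPanel option):

```shell
# Inspect and raise the per-process open-file (nofile) soft limit for
# the current shell; the soft limit cannot exceed the hard limit.
lim_before=$(ulimit -n)
ulimit -S -n 16384 2>/dev/null || true   # silently no-op if > hard limit
lim_after=$(ulimit -n)
echo "nofile soft limit: $lim_before -> $lim_after"
```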
My configuration is Apache 2.2.3 in front of Tomcat via AJP, using mod_proxy_ajp and mod_ssl. We have configured Kerberos, but some users are getting an error: "Size of a request header field exceeds server limit."
Users whose headers are above 8 KB get this error; users under 8 KB can get in fine. How can I increase this header limit in Apache/Tomcat? I have tried multiple suggestions found on Google and other sites.
Here is what I tried:
- Adding `LimitRequestFieldSize 65536` and `ProxyIOBufferSize 65536` to httpd.conf
- Adding `packetSize="65536"` to the AJP connector in server.xml
- Editing a workers.properties file, but we don't have any file with that name on the server (which makes sense: workers.properties is used by mod_jk, not by mod_proxy_ajp)
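For reference, the two Apache-side directives have to be raised together, and the Tomcat connector's packetSize must match ProxyIOBufferSize (65536 is the AJP maximum). One caveat, offered tentatively: older mod_proxy_ajp builds (and 2.2.3 is quite old) may not honor ProxyIOBufferSize for the AJP packet size at all, in which case upgrading Apache is the real fix. Both Apache and Tomcat need a restart after the change.

```apache
# httpd.conf: allow header fields up to 64 KB and size the proxy
# buffer to match the AJP packetSize configured on the Tomcat connector
LimitRequestFieldSize 65536
ProxyIOBufferSize 65536
```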
I am trying many OSes, but none of them works as it should:
- Windows 2000: install aborted
- Windows XP: install aborted
- CentOS 4.4: install OK, but kernel panic on start-up
- CentOS 3.8: install OK, but the OS identifies only 3.8 GB out of the 8 GB
- CentOS 3.8 64-bit: couldn't install; the CPUs support only 32 bits
On top of that, the machine boots with the EL kernel on CentOS 4.4 but not with the SMP kernel!
How can I run this machine on Linux with 8 GB of RAM?
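A 32-bit kernel can only address beyond ~4 GB with PAE, which on EL3/EL4 generally means the hugemem (or a PAE-enabled SMP) kernel rather than the plain EL one. A quick check that the CPUs support PAE at all:

```shell
# The "pae" flag in /proc/cpuinfo means the CPU can address >4 GB of
# RAM under a 32-bit PAE/hugemem kernel.
if grep -qw pae /proc/cpuinfo 2>/dev/null; then
    pae=yes
else
    pae=no
fi
echo "PAE supported: $pae"
```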
I don't want clients taking the server's I/O and the server load over 4.00 when they run major updates etc. (queries on SQL). Is there a way to limit the amount they can do?
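If the databases are MySQL, one coarse control is per-account resource limits on the database user; a sketch (the account name and the numbers are placeholders to adjust per client):

```sql
-- Cap an example account so a single client cannot monopolize the
-- database server (names and values are illustrative only)
GRANT USAGE ON *.* TO 'clientuser'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 5000
         MAX_UPDATES_PER_HOUR 1000
         MAX_USER_CONNECTIONS 10;
```

This throttles query volume and concurrency rather than I/O directly, but in practice it keeps a single runaway account from dragging the load up.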
I am going to run a VPS as a VPN proxy server, so I was asking myself whether it is possible to freeze or shut down the VPS before it exceeds its bandwidth limit of 100 GB a month.
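One low-tech approach is a cron job that compares the interface counters against a monthly budget and powers the VPS off when the budget is exceeded. A hedged sketch (eth0 is an assumption; note /proc/net/dev counters reset at boot, so real month-to-month accounting wants a tool like vnstat):

```shell
# Compare eth0's rx+tx byte counters against a 100 GB budget; the
# shutdown line is left commented so the sketch is safe to run.
LIMIT=$((100 * 1024 * 1024 * 1024))          # 100 GB in bytes
line=$(grep 'eth0:' /proc/net/dev 2>/dev/null)
if [ -n "$line" ]; then
    rx=$(echo "$line" | awk '{print $2}')    # bytes received
    tx=$(echo "$line" | awk '{print $10}')   # bytes sent
    used=$((rx + tx))
else
    used=0                                   # no eth0 on this machine
fi
echo "used $used of $LIMIT bytes"
# [ "$used" -gt "$LIMIT" ] && shutdown -h now   # uncomment to enforce
```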
My server has 8 GB of RAM; however, processes seem to be limited to 3 GB, even when most of the memory is free (I have a mathematical program that mmaps lots of files and needs more than 3 GB). I read somewhere that this is because the OS is 32-bit, but that it could be bypassed with "hugemem" kernel support.
I doubt that my host would install this for me, so how can I do it myself without breaking my server? I've never done anything like altering a Linux kernel. My current OS is:
`Linux 2.6.9-67.0.20.ELsmp #1 SMP Wed Jun 18 12:40:47 EDT 2008 i686 i686 i386 GNU/Linux`
With WHM 11.2.0, cPanel 11.11.0-S16999, RedHat Enterprise 4 i686 - WHM X v3.1.0.
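One caution before chasing a hugemem kernel: it mainly raises how much memory the *kernel* can address, while a single process on a 32-bit kernel still tops out around 3 GB of address space. For a >3 GB mmap workload, the clean fix is a 64-bit (x86_64) kernel and userland, if the CPU supports it. A quick capability check:

```shell
# "lm" (long mode) in the CPU flags means the processor can run a
# 64-bit kernel; uname -m shows what is running now (i686 = 32-bit).
kern=$(uname -m)
if grep -qw lm /proc/cpuinfo 2>/dev/null; then
    cpu="x86_64-capable"
else
    cpu="32-bit only (or flags unavailable)"
fi
echo "running kernel: $kern, CPU: $cpu"
```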
I recently used these forums to help narrow my search for a reliable, competent host. I chose to avoid the larger hosts that sell unlimited or large amounts of disk space and bandwidth, and went with a smaller host that sells a reasonably priced package with 1 GB of disk space and 10 GB of bandwidth. Keep in mind that my site is new and will likely only be visited by friends and family, so I feel this package is appropriate for my needs.
Should I be concerned about a host that will suspend my site if I reach my limit? As a customer, I would rather be notified to upgrade my account than have my site suspended. This practice of automatically suspending sites may very well be the normal procedure for most hosts, which is the reason for my post today. Please share your opinions on this topic.
How do you limit CPU/RAM per account on a shared web server without virtualization? Something like what Dreamhost is now offering: for instance, for user "john", assign 256 MB of RAM and 10% of the CPU. Without Virtuozzo and friends.
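Without a virtualization layer, the kernel's own cgroups can do this. A sketch for a modern systemd-based distribution (the UID 1001 and the numbers are placeholders; older setups used raw cgroups v1 or pam_limits instead): drop the following into /etc/systemd/system/user-1001.slice.d/limits.conf and run `systemctl daemon-reload`.

```ini
# Cap everything user 1001 ("john" in the example) runs:
# 256 MB of RAM and roughly 10% of one CPU
[Slice]
MemoryMax=256M
CPUQuota=10%
```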