DDoS :: Server Filelimits: Increasing File System Limits Succeeded
Jul 7, 2008
floodkoruma is a script that protects our servers from SYN floods, but I couldn't work out from what was on screen why we lost our connection to the server. Here is the last screen.
Last log messages:
Jul 7 19:41:27 server filelimits: Increasing file system limits succeeded
Jul 7 19:42:25 server kernel: printk: 234 messages suppressed.
Jul 7 19:42:30 server kernel: printk: 1026977 messages suppressed.
Jul 7 19:49:42 server syslogd 1.4.1: restart.
Jul 7 19:49:42 server syslog: syslogd startup succeeded
As you can see, I rebooted it from the APC unit. But right before that:
Jul 7 19:42:30 server kernel: printk: 1026977 messages suppressed.
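A hedged note on those suppressed lines: "printk: N messages suppressed" usually means the kernel was rate-limiting a flood of log messages, which during a SYN flood is often the "possible SYN flooding, sending cookies" warning. SYN cookies are the standard first line of defence; a minimal sketch of checking they are on:
Code:
# Enable SYN cookies so half-open connections from a flood don't
# exhaust the backlog (persist it in /etc/sysctl.conf to survive reboot):
sysctl -w net.ipv4.tcp_syncookies=1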
I have a problem with NO_OF_CONNECTIONS. The default is 150.
For example, if a website has 200 thumbnails on one page, then the user will get banned. But in my case each user only ever has one connection (they only access one FLV file at a time).
So is it safe for me to decrease the number to 20?
I can see a lot of IPs with more than 80 connections each, which I think is a DDoS attack.
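For context, this is roughly how DDoS Deflate-style tools arrive at the per-IP count that NO_OF_CONNECTIONS is compared against; a sketch you can run yourself to sanity-check those 80-connection IPs (assumes netstat is available):
Code:
# Count current connections per remote IP, busiest first:
netstat -ntu | awk 'NR>2 {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
If legitimate users really do hold only one connection each, a limit of 20 leaves a comfortable margin; just watch out for many users sitting behind a single NAT or proxy IP.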
I have a new CentOS 7 server with Plesk 12; CentOS 7 uses the XFS filesystem by default.
I tried migrating sites from another Plesk server, but the Plesk agent says: "hard disk quota is not supported due to configuration of server file system" (on my CentOS 7 box).
I added "usrquota,grpquota" to the mount options and ran mount -o remount /, but when I try quotacheck -fmv / I get this:
[root@ns ~]$ quotacheck -fmv /
quotacheck: Skipping /dev/mapper/centos-root [/]
quotacheck: Cannot find filesystem to check or filesystem not mounted with quota option.
but the quotaon command works:
[root@ns ~]$ quotaon /
quotaon: Enforcing group quota already on /dev/mapper/centos-root
quotaon: Enforcing user quota already on /dev/mapper/centos-root
The question is: why does Plesk not recognize quotas as enabled on CentOS 7?
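A hedged explanation: quotacheck only understands ext2/3/4-style quota files, while XFS keeps quota state in the kernel, and for an XFS root filesystem the quota options generally have to be applied at boot rather than by remounting. A sketch of checking the XFS side (xfs_quota ships with xfsprogs):
Code:
# Query the kernel's quota state for the root filesystem:
xfs_quota -x -c 'state' /
# For an XFS root, quota mount options are typically passed on the
# kernel command line, e.g. rootflags=uquota,gquota, rather than fstab.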
We are a Brazilian web hosting company, and over the years we have run into many, many DDoS attacks against our network, causing a lot of downtime and losses for us and our customers.
Over those years our company developed a lot of software to fight those attacks and block them completely.
We are thinking about offering a service in the future to block attacks across an entire network for web hosting companies having these kinds of problems.
I will describe here what this software will do, and I would like feedback from everyone on whether it is interesting or not, and why.
The software will:
a) Detect any IP that is port scanning the network. The software detects whether one specific IP is sending more than X packets in a given amount of time.
b) Detect any IP that is attacking the network with DDoS. The software detects whether one specific IP is sending more than X bytes in a given amount of time.
If (a) happens, the software will:
- Send a customized e-mail to the owner of the ASN (the owner of the source IP) saying that IP x is scanning the network. It will send a complete log of the scan, including the source and destination IPs, and it will keep sending the e-mail until the scan stops.
If (b) happens, the software will:
- Send a customized e-mail to the owner of the ASN (the owner of the source IP) saying that IP x is attacking the network. It will send a complete log of the attack, including the source and destination IPs, and it will keep sending the e-mail until the attack stops.
- Install a blackhole for the destination IP at the border/core router (sketched after this list). The blackholed IP will no longer exist for people outside your local country, so this will not affect customers whose users are in your local country, while blocking attackers outside it.
- Block source IPs to those destination IPs based on specific software we have developed, whose workings we cannot disclose. This is the core of the system; it works very well and does not affect customers/end users accessing the server/network.
What you will need:
- A server with two NICs: one connected to the border in promiscuous mode and another for accessing the server (required).
- BGP announcement at your own border (optional). Without this, the component that installs the blackhole on the destination IP will not work.
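For anyone unfamiliar with the term, a minimal sketch of what installing a blackhole amounts to on a single Linux box; the service described would presumably trigger the equivalent on the border router via BGP, and the address here is a documentation example:
Code:
# Null-route a destination under attack so all traffic to it is dropped:
ip route add blackhole 203.0.113.45/32
# Remove it once the attack subsides:
ip route del blackhole 203.0.113.45/32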
I have been playing around with different virtualization platforms:
- OpenVZ (newer kernels do not support hard-cpu limits for whatever reason)
- Xen Server
- Windows Hyper-V
- Linux KVM
However, none of them seems able to establish HARD limits on resources for a virtual machine. Or am I missing something?
HyperVM supposedly has hard limits because it uses the older OpenVZ kernels, right? I have not tried Parallels Containers; do they enforce hard limits?
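For reference, on the older OpenVZ kernels the hard CPU cap was set per container with vzctl; a sketch, with 101 as a hypothetical container ID:
Code:
# Cap container 101 at 25% of one CPU and persist the setting:
vzctl set 101 --cpulimit 25 --save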
I'm trying to start a website on a shoestring budget, but my programmer and host want to squeeze more out of us. We currently have a custom VPS with cPanel and WHM. Every time I try to upload a file of more than 10 MB, it will not go through. My programmer told me to ask the host to increase the capacity; they tell me it will cost me, and my programmer says the same thing. Now, if my programmer can do it, I assume it can be done through cPanel. Is there any way I can do it myself, so that a file of, say, 25 MB will get uploaded? I have access to my cPanel.
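A hedged guess at the usual culprit: PHP's upload limits. If the host allows a per-account php.ini override (many cPanel setups do), raising these two directives to cover a 25 MB file may be all that is needed:
Code:
; php.ini (or a local override, if your host honors one)
upload_max_filesize = 25M
post_max_size = 25M
Note that post_max_size has to be at least as large as upload_max_filesize, since the whole POST body counts against it.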
I go to WHM and create a new account on my dedicated server, and it succeeds. Then I try going to the URL, but it doesn't work; it looks like there is no account with this name. Then I go to WHM's account list and try clicking on the temporary URL of the account I created, and it gives me a 404. Does anyone know what the problem is?
The last packet successfully received from the server was 60.410.682 milliseconds ago. The last packet sent successfully to the server was 60.410.687 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
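That is MySQL Connector/J's idle-timeout complaint: the connection sat unused longer than the server's wait_timeout, so MySQL dropped it. A sketch of inspecting and raising the server-side value it mentions (in seconds; 28800 is MySQL's default of 8 hours):
Code:
# Check and raise the server-side idle timeout:
mysql -u root -p -e "SHOW VARIABLES LIKE 'wait_timeout'; SET GLOBAL wait_timeout = 28800;"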
I got this message from my provider after asking for a reboot:
Quote:
Your server had an EXT3 FS error scrolling across the screen.
I rebooted and the server mounted clean, but you may end up needing a file system check (FSCK).
I want to do that myself so I can schedule the downtime for after midnight. What is the procedure for fixing these errors, so I can avoid having to reboot the server often?
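A sketch of one common way to schedule the check yourself on a sysvinit-era ext3 system (these init scripts honor a /forcefsck flag file):
Code:
# Force a full fsck on the next boot, then reboot at a quiet hour:
touch /forcefsck
shutdown -r 00:30 "Scheduled file system check"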
Is anyone familiar with the file management system of oneandone?
Doing an FTP upload, there is already an index file and many folders on the system, and the technical manual gives no indication of whether to upload into one of the folders or at the top level. By the way, if you upload through the system's web interface, your site will not appear online...
Is anyone here running GFS? The responsibility of managing a small cluster of them is about to fall into my lap, and the only documentation I can find is on Wikipedia, which is troubling. I've got the man pages, but I was hoping for more of a document outlining how it works.
Why would lock_dlm2 or gfs_scand take up close to 100% CPU with minimal traffic on the machine, for example? What do those processes do? How can I tune them so that they don't?
I'm not so much looking for specific tuning answers here; I'm more curious about where I should be looking for documentation. I find it hard to believe that there is none.
I am trying to figure out what file system to use for my server. It has 24 hard drives: 2 run the OS in RAID 1, and the other 22 are in RAID 10. When I was installing the OS (Ubuntu 8), I kept running into problems when I tried to partition and format the second volume (the one with the 22 disks in RAID 10); it kept failing on me. I then changed the file system type from ext3 to XFS and it worked fine. I also gave it another try without partitioning/formatting the second volume during install, deciding to do it manually once the OS was installed. When I did, it told me that the file system was too large for ext3. So my guess is that ext3 has a limit on how large a file system it can be created on.
Anyway, I am wondering: is there another file system that will get me the best performance, mainly I/O performance? I would like to stick with Ubuntu. This server will mainly serve large files for download over HTTP.
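Your guess is right: ext3 tops out at roughly 8-16 TB depending on block size, which a 22-disk RAID 10 can easily exceed; XFS has no practical ceiling at this scale and handles large sequential reads well. A sketch of formatting and mounting the array (the device name and mount point are assumptions):
Code:
# Format the big array as XFS and mount it with noatime, which helps
# a read-heavy HTTP download workload:
mkfs.xfs /dev/sdb
mkdir -p /srv/downloads
mount -o noatime /dev/sdb /srv/downloads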
The thing is, there is no sdc volume; there are sda, sda1 and sda2, reflecting the primary drive in the system. If I put in the root password, I can see all the data and everything looks fine.
How would I go about fixing this? The system just reboots over and over.
I have a dedicated server, and until a few days ago I was able to edit my files fine, but this morning when I try to edit any file, I get this error:
[user@domainname theme]$ chmod 777 header.php
chmod: changing permissions of `header.php': Read-only file system
[user@domainname theme]$
[root@domainname theme]$ chmod 777 directoryy
chmod: changing permissions of `directoryy': Read-only file system
[root@domainname theme]$
I tried it both as a normal user and as root, with the same results. Do you think the hosting guys changed the permissions of the file system or something?
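"Read-only file system" here usually means the kernel remounted the volume read-only after a disk or journal error, not that anyone changed permissions. A sketch of confirming that with standard tools:
Code:
# See whether / is currently mounted read-only, and look for the error:
mount | grep ' / '
dmesg | tail -n 30
# Only remount read-write after the underlying error has been checked:
mount -o remount,rw /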
I have a CentOS server, and whenever I reboot it, it goes into a file system check and takes about an hour to come back online.
The irony is that I only reboot the server when the load goes high (especially when traffic is high), so the server is down for a long time exactly when traffic is high.
The server config is pretty good, but it shows these problems once every 15-20 days.
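The hour-long check is most likely ext3's periodic boot-time fsck, which fires after a set mount count or time interval. A sketch of disabling those triggers (the device name is an assumption, and skipping periodic checks carries obvious risk):
Code:
# Disable the mount-count and time-based fsck triggers for this volume:
tune2fs -c 0 -i 0 /dev/sda1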
How do I sweep my entire server and chmod a particular file name located in the cgi-bin to 0, 755, etc.?
For example, to disable a particular Perl script running on my system across over 100 accounts: wherever /cgi-bin/file.cgi turns up, I want that file chmod'd to 0.
Does anyone know how to do this through SSH or another method?
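A minimal sketch over SSH, assuming the typical cPanel layout of /home/<user>/public_html:
Code:
# Find every copy of the script across all accounts and zero out its
# permissions (run as root; add -print to see which files were hit):
find /home/*/public_html/cgi-bin -maxdepth 1 -name file.cgi -exec chmod 0 {} \;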
I've been debating this in my head: we have a number of servers that need to mount a remote server (for backups, actually), and I'm weighing the best way of doing it. Traditionally I would use NFS, but this is going to have to run over the public Internet, and NFS is insecure at the best of times.
Now, I've got root on all the servers, and I was thinking of using sshfs via FUSE, but I just wondered if anyone had any input on its stability, or any other approach.
R1Soft wouldn't work in this case, as the backups are created by the user.
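For what it's worth, a minimal sshfs sketch (assumes fuse and sshfs are installed; the host, user and paths are hypothetical):
Code:
# Mount the remote backup area over SSH; -o reconnect survives drops:
sshfs backupuser@backup.example.com:/backups /mnt/backups -o reconnect
# Unmount when done:
fusermount -u /mnt/backups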
I recently had an issue where my box wasn't listing accounts (on logging into WHM for the first time it would; thereafter, browsing different functions in WHM, it would fail to list any accounts), would not list any zone items when editing DNS zones, and in general was acting very strangely.
I think the tech support chap narrowed it down to zero free inodes on the filesystem (I was even getting errors when editing files with vi). The inode allocation was increased for the VPS, and all the issues seemed to be resolved...
However, named and httpd were not starting after reboots. On looking closely, named and httpd were missing from /etc/init.d (on CentOS 5.3)! This is very strange, and I certainly didn't modify or delete such critical files.
For a second opinion: is there any cPanel script that can be run to fix the issues? I am concerned other things have been affected but haven't manifested themselves yet (other files deleted, etc.). Does the cPanel update script create the init.d files, or is this done by the CentOS operating system itself? Are these files modified during a cPanel update?
The init.d files for named and httpd have been re-added (copied across from another box), and it seems to be OK again, but ideas on how to proceed are much appreciated; as I mentioned, I don't want any nasty surprises!
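For keeping an eye on both worries going forward, standard tools are enough; a sketch:
Code:
# Watch inode usage; 100% IUse% means no new files can be created even
# if df -h still shows free space:
df -i
# On a stock CentOS box, rpm can report files missing from installed
# packages (cPanel-built services such as httpd live outside rpm, though):
rpm -Va | grep missing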
The two projects are located in different places on the file system, and I need to redirect from one project to the other internally, so the URL for site.loc is preserved.
E.g., when site.loc/hey/there is requested, I need Apache to serve files from proj2.0.
First, I know that at the .htaccess level we cannot use RewriteRule to a file-system path (for security reasons).
Okay, an Alias is a workaround. Say I add an Alias to the virtual host as follows:
Code:
Alias /newsite /some/path/to/proj2.0
Then if I add this rule to the project's .htaccess:
Code:
RewriteRule ^hey/there /newsite
This will work.
But for the webroot it does not work:
Code:
RewriteRule ^$ /newsite
Am I doing something wrong, or is there some quirk about the webroot?
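A hedged guess at the quirk: for a request to the bare root, mod_dir's DirectoryIndex processing competes with a per-directory rewrite whose pattern matches the empty string, so ^$ is fragile in .htaccess. Allowing an optional leading slash and stopping further rule processing often behaves more predictably (this reuses the /newsite Alias from above):
Code:
RewriteRule ^/?$ /newsite [L]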