Any Way To Reduce TCP TIME_WAIT Period On A VPS?
Jul 7, 2007 - Apparently the sysctl commands/config do not work on a VPS. Is there any way to reduce the TIME_WAIT period? I've got nearly 900 connections in that state!
I have slow connections via HTTPS because I have so many Apache connections sitting in TIME_WAIT status, using up my available connections.
I have a problem with TIME_WAIT; the count is very high:
netstat -an|grep ":80"|awk '/tcp/ {print $6}'|sort| uniq -c
13 ESTABLISHED
15 FIN_WAIT1
2 FIN_WAIT2
1 LAST_ACK
2 LISTEN
10 SYN_RECV
1026 TIME_WAIT
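For reference, these are the sysctl knobs normally involved. On an OpenVZ/Virtuozzo VPS they are usually read-only inside the container (only the host node can change them), which would explain why the sysctl commands appear not to work; note also that the TIME_WAIT duration itself (60 seconds) is a compile-time constant in the Linux kernel. A minimal sketch with illustrative values:

# /etc/sysctl.conf (or sysctl -w ...) -- illustrative values, not recommendations
net.ipv4.tcp_fin_timeout = 30        # shortens FIN_WAIT2, not the TIME_WAIT period itself
net.ipv4.tcp_tw_reuse = 1            # allow reuse of TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_max_tw_buckets = 180000 # cap the number of sockets held in TIME_WAIT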
Our customer has been banned and we have received a notification (we use CSF).
The email received:
----------------------------------------------------
Time: Sun May 18 08:52:53 2008
IP: 81.22.77.88 (**)
Connections: 491
Blocked: permanently
Connections:
tcp 0 0 72.39.255.200:20 81.22.77.88:5201 TIME_WAIT
tcp 0 0 72.39.255.200:20 81.22.77.88:5457 TIME_WAIT
tcp 0 0 72.39.255.200:20 81.22.77.88:5456 TIME_WAIT
tcp 0 0 72.39.255.200:20 81.22.77.88:5200 TIME_WAIT
tcp 0 0 72.39.255.200:20 81.22.77.88:5203 TIME_WAIT
tcp 0 0 72.39.255.200:20 81.22.77.88:5459 TIME_WAIT
.....
.....
----------------------------------------------------
The IP is the same for all 491 connections; only the port changes.
What exactly does this mean?
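Port 20 is the FTP data port: in active-mode FTP, every file transfer or directory listing opens a fresh data connection, and each one lingers briefly in TIME_WAIT after it closes, so a single busy FTP client can easily accumulate hundreds of entries like these. If that is what happened here, the block is a false positive from csf's connection tracking. A hedged sketch of the csf.conf settings usually adjusted for this (option names as found in recent csf releases; values are illustrative):

CT_LIMIT = "300"          # per-IP connection count that triggers a block (0 disables tracking)
CT_SKIP_TIME_WAIT = "1"   # ignore sockets in TIME_WAIT when counting
CT_PORTS = "80,443"       # only track these ports, so ftp-data (port 20) is not counted

After editing csf.conf, restart csf (csf -r) for the change to take effect.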
I have one main web server. The problem is that many people (now including myself) are often receiving "Connection timed out" messages in their web browsers when trying to visit websites. This web server is a CentOS 5 machine and the HTTP server in use is Apache 2.2.
Of course, I've considered contacting server admin people who will look at this sort of thing for a one-off price or manage my servers at a periodic billing rate, but I'd much prefer to see what others have to say here first and hopefully learn some new stuff. It isn't a huge problem right now, but it can be annoying browsing the websites because a refresh is required to connect again. I've learnt everything I know about Linux myself so far, through the likes of WebHostingTalk; now it's time for me to learn about TCP, HTTP, Apache and more, if anybody has any ideas about this problem.
When running netstat, I'm seeing a rather large number of TIME_WAITs; I'm thinking this could have something to do with the connection timeouts?
Here is my netstat output for TCP: [url] - notice all of the HTTP TIME_WAITs for gangsternation.net? (Also a couple of other sites with less traffic.)
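One thing worth ruling out before digging deeper into TCP: if every Apache worker slot is occupied (many of them just idling in keep-alive), new visitors get connection timeouts even though the machine is not busy. A sketch of the httpd.conf directives commonly checked first on Apache 2.2, with illustrative values:

Timeout 60                # give up on stalled clients sooner than the stock 300-second default
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3        # release a worker quickly instead of holding it for an idle client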
I have a big problem with my web hosts!
I had a domain registered with them, then I registered some more domains with them.
My original domain went down last Sunday and I checked the DNS status, which said the domain had gone into the redemption period (they are supposed to have an automatic renewal system).
I contacted them and after about 5 days, they got back to me saying there was a problem with the payment for the new domains, so they did not renew my original domain.
However, I did not receive any communication to say there was a problem with the payments and I even have a receipt from them to say I have paid! I also have multiple emails in my sent box to them following up the payments. They did respond to some of the emails, but they never mentioned payment issues.
I definitely do not want to lose my domain, but I urgently need some data from my site.
I know they back up their servers, so where do I stand on getting my data back?
I contacted the registration company (aitdomains), who said I need to get the hosting company to contact them to get the domain out of the redemption period (I was trying to do it myself so I could move the domain to a new hosting company). Without the hosts, it seems as though I cannot do anything.
I'm losing business every day my site is down.
Has anyone got any advice? Do they have the right to not renew one of my domains without any warning whatsoever, especially seeing as the actual issue is with a different domain? The domain with the actual issue is up and running by the way.
Do you think that having a 30-day cancellation period for a server is a bit excessive? It seems like a ploy to get an extra month out of you, don't you think? Do they really expect us to know what's going to happen and plan a month ahead of time for it? Some of these providers have 30-day cancellation periods, which I think is absurd.
Time to write a few words about the providers I've been working with. I was with Hetzner for about a year (2007-ish). Initially I was a bit reluctant to go with them because of the high setup fee, but after exploring other options I decided to give them a shot.
For those interested, all my correspondence with them was in English.
The ordering process was easy and painless. Initially I ordered a DS3000 server, and in less than 24 hours I got login credentials for it. Due to some last-minute business changes I did some calculations and realized that a DS5000 would fit my needs better, so I asked whether it was possible to "upgrade" my account to a DS5000 before any work had been done on the DS3000. They politely replied that I should cancel my old order within 14 days and I would be given a full refund under a "no questions asked" policy stipulated, I suppose, by German trade law, which gives you the right to cancel (within 2 weeks) any contract that has been arranged remotely.
Now that felt good. The only catch was that the cancellation letter had to be in writing, so I had to find a fax machine from which to send it to their billing dept.
In the meantime I ordered a DS5000 and again received a fully provisioned box in less than 24 hours. A few days later I received my first invoice, stating an 83.19 EUR setup fee and a 49.58 EUR monthly recurring fee. Huh? Only then did I realize that their advertised prices include VAT, which can't be billed to non-EU residents. Excellent!
Now, on to the service provided. Over a year-long period, I only had two downtimes: one planned and one unplanned. The unplanned one was a half-hour network outage. The planned downtime was due to power line work in the datacenter my box was in; it had been announced 2 weeks in advance, and the downtime was about 2 hours.
It is worth noting that all automated emails I've received from Hetzner were bilingual (German/English).
I have nothing but praise for their service/price ratio. Their network is excellent, and the hardware is top-notch. Keep in mind that my bandwidth requirements were low, as I never used more than 500 GB/month.
To summarize:
For: excellent hardware deal, network (for European users), remote reboot and network rescue system provided free of charge
Against: if you need more than 1 TB/month, you should look elsewhere. KVM/IP charged extra.
I am wondering what grace period after the due date your dedicated server provider offers you before suspending your server.
Please let me know how many days before the due date your provider sends out invoices, and how many days after the due date they wait before suspending your dedicated server. We are looking to amend our policy on the matter and would like to hear what others in the industry are doing.
I have a dedicated server with a provider. They have a notice period of 30 days, to cancel the server. The billing date of the server is 13th of every month.
I sent them the cancellation form on the 30th of last month, asking them to terminate the server by the 1st of next month (thereby giving 30 days' notice). But they say that they will bill me on the 13th for another full month, and that they'll cancel the server only on the 13th of the following month! WTF? So the notice period effectively becomes 43 days!
I have never experienced anything like this with the other providers I use, but I just wanted to know.
What do you guys think about this report?
[url]
Last year my web host stated that my site was over-utilizing the resources allowed for my plan. Specifically, they stated that I was overusing the CPU. At the time, I had to upgrade my plan in order to stay online. I would like to move to a new host, but the prospective hosts are all suggesting a dedicated server because of my CPU usage. I don't want to pay that kind of money, so I would really like to curb the CPU problem. Does anyone know how to reduce the CPU usage of a WordPress blog? I tried posting this question over at Wordpress.org, but I haven't received a single reply.
My server details:
CPU: Intel 2.4 GHz P4 Celeron
OS: Red Hat
RAM: 2 GB DDR
Hard disk: 160 + 50 GB
Bandwidth: 3000 GB
Now I'm seeing a load of around 0.50-1.20, and CPU usage has also increased. I host only 6 sites, of which only one uses MySQL; all the other sites are plain download sites. How do I reduce the CPU load? Can you help me figure out this issue and also give me some tips to reduce the CPU usage?
13438 nobody 15 0 42276 22m 13m R 22 2.2 1:51.94 httpd
10620 nobody 16 0 41928 16m 8468 S 19 1.7 0:28.54 httpd
11397 nobody 15 0 41524 12m 4784 S 18 1.3 0:06.04 httpd
10745 nobody 15 0 42376 14m 5316 S 17 1.4 0:06.62 httpd
The values in the CPU column (22, 19, 18, 17 above) are the CPU percentage taken up by each Apache process.
How can I reduce the CPU used by each Apache process? What config do I have to change to reduce it?
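Before changing any config, it can help to see which requests the busy processes are actually serving. A minimal mod_status sketch (Apache 2.2 syntax; restrict access to your own IP):

# in httpd.conf, with mod_status loaded
ExtendedStatus On
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1    # replace with your own IP
</Location>

Then browse http://yourserver/server-status (or run apachectl status) while the load is high to see which site and which URLs the CPU-heavy children are handling.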
We have a dual Xeon 2.8 GHz / 2 GB RAM cPanel server; normally the load is well under 2 and stable.
We also use Incremental Backup and have chosen "Per Account Only" for the MySQL backup.
This server mainly hosts one big site, whose SQL database is 1.2 GB.
Every time the server runs a backup, the load bursts to 7 and access to the website hangs. We are thinking of changing the SQL backup method to "Entire MySQL Directory", but were told that the SQL server will be stopped while that runs. Will "Entire MySQL Directory" reduce server load while processing the SQL backup?
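Copying the raw "Entire MySQL Directory" of a running server only yields consistent files if MySQL is stopped or the tables are locked for the duration, so it mainly trades CPU load for downtime. As an alternative, a hedged sketch of a manual dump run at low CPU and I/O priority from cron (the database name and target path are placeholders; --single-transaction avoids locking only for InnoDB tables):

nice -n 19 ionice -c3 mysqldump --single-transaction --quick bigdb | gzip > /backup/bigdb.sql.gz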
I have Smokeping monitoring my game servers, and in the short time it has been running, all my game servers have been seeing an average of 4 to 10% packet loss. Are there any tweaks I can apply on the server to reduce packet loss (registry modifications, etc.)?
I downloaded a TCP tweak program called "TCP Optimizer"; is it safe to run on a Windows 2003 Server OS?
The colo connection is an OC-192 and I have a 100 Mbit Ethernet card.
Here are my current TCP settings:
[SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
TcpWindowSize=-1
GlobalMaxTcpWindowSize=-1
EnablePMTUDiscovery=-1
EnablePMTUBHDetect=-1
SackOpts=-1
DefaultTTL=-1
TcpMaxDupAcks=-1
Tcp1323Opts=-1
DisableUserTOSSetting=-1
DefaultTOSValue=-1
[SYSTEM\CurrentControlSet\Services\Afd\Parameters]
DefaultReceiveWindow=-1
[Software\Microsoft\Windows\CurrentVersion\Internet Settings]
MaxConnectionsPerServer=-1
MaxConnectionsPer1_0Server=-1
[SYSTEM\CurrentControlSet\Services\ICSharing\Settings\General]
InternetMTU=-1
[SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace\{D6277990-4C6A-11CF-8D87-00AA0060F5BF}]
{D6277990-4C6A-11CF-8D87-00AA0060F5BF}=-1
[SYSTEM\CurrentControlSet\Services\Dnscache\Parameters]
MaxNegativeCacheTtl=-1
NegativeCacheTime=-1
NetFailureCacheTime=-1
NegativeSOACacheTime=-1
[SOFTWARE\Policies\Microsoft\Windows\Psched]
NonBestEffortLimit=-5
[SYSTEM\CurrentControlSet\Services\Tcpip\ServiceProvider]
LocalPriority=499
HostsPriority=500
DnsPriority=2000
NetbtPriority=2001
[System\CurrentControlSet\Services\LanmanServer\Parameters]
SizReqBuf=-1
[SYSTEM\CurrentControlSet\Services\NdisWan\Parameters\Protocols]
ProtocolMTU=-2
[SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{D63AC0FA-D2C9-4D83-B057-31A353516AB3}]
MTU=-1
TcpWindowSize=-1
[SYSTEM\CurrentControlSet\Services\Psched\Parameters\Adapters\{D63AC0FA-D2C9-4D83-B057-31A353516AB3}]
NonBestEffortLimit=-2
[SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{8190D94A-3B2D-45C4-998D-312E99D6061D}]
MTU=-1
TcpWindowSize=-1
[SYSTEM\CurrentControlSet\Services\Psched\Parameters\Adapters\{8190D94A-3B2D-45C4-998D-312E99D6061D}]
NonBestEffortLimit=-2
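Before (or instead of) registry tweaks, it usually pays to find out where the 4-10% loss actually occurs, since sustained loss is more often a network path or NIC/duplex problem than a TCP stack setting. A quick check from the Windows 2003 box (the hostname is a placeholder):

pathping -n gameserver.example.com
ping -n 100 gameserver.example.com

pathping reports per-hop loss statistics (it takes a few minutes to finish), while the 100-packet ping gives a simple end-to-end loss percentage; a duplex mismatch on the 100 Mbit card is another common culprit worth checking.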
I know there is no device that can fully protect you from DDoS attacks, but I wonder which one is best at helping you reduce them? Perhaps something intelligent enough to "feel" the attacks? Brand names from Cisco, Foundry, Nokia...?
How to reduce downtime - multiple name servers?
Say I have two different hosting accounts at two different web hosts. One at host1.com and another at host2.com. In both the hosts I keep the same files. I use an external registrar and use the name servers for one of my domains as follows:
ns1.host1.com
ns1.host2.com
ns2.host1.com
ns2.host2.com
What would happen in that case if, say, host1 is down for a while? Will the name servers point to host2.com?
If this works, the probability of downtime for the site would become almost zero.
Would Google like this?
Another question: how do I easily synchronize both cPanel accounts?
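For the file side of keeping the two accounts in sync, a minimal sketch assuming both hosts allow SSH (the username, hostname and paths are placeholders; databases would need a separate mysqldump/import step):

rsync -az --delete -e ssh ~/public_html/ user@host2.example.com:public_html/

Run from the account on host1, this pushes the document root to host2 and deletes files on host2 that no longer exist on host1.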
A few months ago I bought a small new VPS (OpenVZ, 128 MB RAM) in order to place a new monitoring node for my site-monitoring system on it. Such a small amount of RAM is a challenge for operating-system optimisation techniques (OpenVZ doesn't have "swap" the way Xen does).
First of all, I discovered that apache2-mpm-worker (the Apache flavour that uses threads) consumes more memory (100 MB) than the classic version that uses separate processes (20 MB), so I had to switch to apache2-mpm-prefork.
The next unpleasant surprise: a small Python app eats 100 MB of virtual memory! I confirmed that virtual (not resident) memory is what counts against the VPS limits. I tried some tools to locate the memory bottleneck, but without success. Next I added logging of current memory usage to track down the call causing the big memory consumption. I found that the following line:
server = WSGIServer(app)
is the one guilty of the big memory increase. After a few minutes of googling I located the problem: the default stack size for a thread. Details:
This line creates a few threads to handle concurrent calls
Stack size is counted towards virtual memory
Default stack size is very high on Linux (8MB)
Every thread uses separate stack
=> a multi-threaded application will use at least number_of_threads * 8 MB of virtual memory!
First solution: use the limits.conf file. I altered /etc/security/limits.conf and changed the default stack size. But I couldn't make this change apply to the Python scripts called from Apache (any suggestions why?).
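A likely reason the limits.conf change had no effect: /etc/security/limits.conf is applied by pam_limits during a PAM login session, and a daemon started from an init script at boot never goes through PAM, so Apache (and the Python it spawns) never sees the new limit. For interactive logins the entry would look like this (values in kB, illustrative):

*    soft    stack    256
*    hard    stack    8192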
Second (working) solution: lower the default stack size using ulimit. For processes launched from Apache, I altered the /etc/init.d/apache2 script and added:
ulimit -s 256
Now every thread (in the Apache / Python application) uses only 256 kB of virtual memory (I lowered the VSZ from 70 MB to 17 MB this way). Now I have additional headroom to enlarge the MySQL buffers and make DB operations faster.
There's an even better place to inject the ulimit system-wide: you can insert the call in the
/etc/init.d/rc
script. Then the ulimit will be applied to all daemons (such as Apache) and to all login sessions. I reduced virtual memory usage by 50% this way.
Note: you may need to increase the stack size if you see stack overflow errors. In my opinion 256 kB is a safe value for most systems; increase it if in doubt. Either way, the memory savings are big.
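A quick way to confirm the new limit took effect and to see the savings (the process name apache2 is the Debian/Ubuntu one; on Red Hat-style systems it is httpd):

ulimit -s                         # current soft stack limit for this shell, in kB
ps -o pid,vsz,rss,cmd -C apache2  # virtual (VSZ) and resident (RSS) size per Apache process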
It's 2 AM here, and my sites are down... There is no way I have too much traffic at midnight; besides, all my websites are new!
This has been happening consistently since this morning, and I'm getting no support apart from jargon-filled replies from customer care.
How do I tweak the Apache settings, and what settings should I change to avoid this?
I'm also wondering what will happen in a few months when my websites actually have good traffic coming in.
We have checked your server. Please see the load average and process list given below:
The value 4.42 was the CPU load average at the time. A normal load should be below 1.00. I can see that the Apache service is causing high load on your server.
So you can tweak Apache in order to reduce the CPU load. Please check and let us know if you need any further assistance.
++++++++++++++++++
[root@chi07 ~]# vzctl exec 18403 w
03:16:20 up 2 min, 0 users, load average: 4.42, 1.42, 0.50
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
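Without seeing the sites, only a hedged sketch is possible, but on a small VPS the usual first step is capping the prefork MPM so Apache cannot spawn more children than the available RAM can hold (httpd.conf, Apache 2.x prefork; values are illustrative):

<IfModule prefork.c>
    StartServers           5
    MinSpareServers        5
    MaxSpareServers       10
    MaxClients            40      # keep MaxClients x per-child memory below available RAM
    MaxRequestsPerChild 2000      # recycle children to contain any memory leaks
</IfModule>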
1) Use PHP as an ISAPI module
2) Open Internet Information Services (IIS) Manager > right-click "Web Sites" and select Properties > click the Service tab > open HTTP Compression > select Compress Application Files and Compress Static Files
3) Use eAccelerator (PHP accelerator, optimizer, and dynamic content cache) with these options:
eaccelerator.shm_size="64"
eaccelerator.cache_dir="c:\tmp\mmcache"
eaccelerator.enable="1"
eaccelerator.optimizer="1"
eaccelerator.check_mtime="1"
eaccelerator.debug="0"
eaccelerator.filter=""
eaccelerator.shm_max="0"
eaccelerator.shm_ttl="0"
eaccelerator.shm_prune_period="0"
eaccelerator.shm_only="0"
eaccelerator.compress="1"
eaccelerator.compress_level="9"
eaccelerator.keys = "shm"
eaccelerator.sessions = "shm"
eaccelerator.content = "shm"
4) Don't load these extensions:
extension=php_mbstring.dll
extension=php_domxml.dll
extension=php_xslt.dll
Only use these extensions in php.ini:
extension=php_sqlite.dll
extension=php_curl.dll
extension=php_gd2.dll
extension=php_gettext.dll
extension=php_iconv.dll
extension=php_imap.dll
extension=php_mssql.dll
extension=php_sockets.dll
extension="eaccelerator.dll"
upload_tmp_dir = "C:\WINDOWS\Temp"
These settings were tested on Windows Server 2003 SP2 with IIS 6 and PHP v4.4.7.
I have registered a domain using GoDaddy. I host my site on a server at my shared hosting provider (let's call them X). Currently mydomain.com points to that server and it is up and running. Sometimes, though, I have experienced downtime.
In order to solve this problem, I have hosted a clone of my site on another server from another hosting company (let's call them Y).
1. I want mydomain.com to point to Y when X is down,
2. and to point back to X when it is up again.
My main aim is to keep my site live with as little downtime as possible. The probability of both servers being down at the same time is very low.
I don't know if it is technically feasible; it's just a thought. I tried to Google it but was not able to find an answer specific to my problem.
Can anybody tell me how to achieve this through my GoDaddy domain?
As an added note, my site is not a commercial site and I can't afford large dedicated servers with clustering and failover.
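Plain A records cannot fail over by themselves: something still has to notice that X is down and switch the record (with a low TTL so the change propagates quickly), either a managed DNS failover service or a script of your own driving the DNS provider. A hypothetical cron-able check that simply alerts you when X stops answering (the domain and e-mail address are placeholders):

#!/bin/sh
# hypothetical monitor: mail an alert when the primary site stops answering
STATUS=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' http://mydomain.com/)
if [ "$STATUS" != "200" ]; then
    echo "mydomain.com returned status $STATUS" | mail -s "Primary host X looks down" you@example.com
fi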
I have around 800 TIME_WAIT connections.
Here's a typical connection from China:
tcp 0 0 72.18.203.114:80 222.213.72.22:3059 TIME_WAIT
This is pretty annoying because my server load is at 50% all the time, and these Chinese TIME_WAIT connections seem to be clogging up port 80 on my server.
I read something about MSL (?) and tcp_time_wait_interval but I don't know if that will do what I want.
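Sockets in TIME_WAIT use no CPU, so the load most likely comes from serving the requests themselves; tcp_tw_reuse and tcp_fin_timeout will shrink the netstat output but not the load. If the real problem is too many simultaneous connections per client, a per-IP cap is a more direct tool (the threshold is illustrative and requires the iptables connlimit module):

iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP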
I would like to know if there's any way we could reduce the "conversion times" for videos when converting them with mencoder and the x264 codec.
Is it possible to cluster 2-3 or more servers (quad-core / 8 GB RAM) so that I can reduce the conversion times effectively?
The original video sizes are around 500-700 MB on average, and I'd like to convert them using mencoder with the x264 codec at a 500 Kbps bitrate and 2-pass settings.
Of course it'll take at least 1-2 hours to encode one such video at these settings on a single such server, so is there a way to reduce the conversion time to around 10-15 minutes per video by using "parallel encoding" with the x264 codec? P.S.: I know how to form a cluster using Beowulf/Rocks, etc. What I need help with is 'using' the cluster with the x264 codec.
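Before building a cluster, it may be worth seeing how far one box gets with x264's own threading, since x264 parallelises well across cores. A hedged sketch of the two-pass MEncoder invocation (file names are placeholders, and the exact threads= suboption accepted depends on the MPlayer/x264 build):

mencoder input.avi -o /dev/null  -oac copy -ovc x264 -x264encopts bitrate=500:pass=1:threads=4
mencoder input.avi -o output.avi -oac copy -ovc x264 -x264encopts bitrate=500:pass=2:threads=4

For a whole batch of videos, distributing one complete video per node is usually far simpler than trying to parallelise a single encode across the cluster, and it scales almost linearly.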
Basically, I would like to record only certain status-code entries in my access log. For example, I would like to skip all entries with a 200 status.
The documentation under "Modifiers" [URL] .... works somewhat, but it still writes an entry to the access log file.
This line in httpd.conf
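On Apache 2.4 the whole entry can be skipped with a conditional CustomLog instead of a format-string modifier; a hedged sketch, assuming the ap_expr REQUEST_STATUS variable is available in your build:

CustomLog "logs/access_log" combined "expr=%{REQUEST_STATUS} -ne 200"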
I did a quick search on this and could not see it as already being posted
It seems quite a clever but simple idea - remove a lot of the oxygen from the air to help reduce the risk of fire. What do those of you operating your own facilities make of this? Is anyone already doing this?
[url]
I want to edit "Reduce your SELECT DISTINCT queries without LIMIT clauses" in my my.cnf.