I'm on a VPS hosted on Dell PowerEdge servers with 750 GB of transfer.
I have the concurrent-users setting at 50 and I'm wondering how much higher I can go without trouble.
My app is a Perl CGI script using flat files, and 95% of visitors will pass through my domain without ever seeing my web page. They are only there a second, maybe two, before being quickly redirected off this platform.
My host has a custom solution installed that checks for Apache stalls every few minutes and restarts Apache when it sticks, which it does from time to time.
Right now I do about 75,000 visitors per day, but I plan to double to 150,000 next week and soon after triple to 225,000 per day.
Should I increase the concurrent-users setting now in preparation for the jump to 150k, and if so, to what level?
Please give some suggestions for concurrency settings at these future traffic levels. The host has a multiple-server setup in case one fails, so I should be good, I think.
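For a rough sizing sketch (numbers taken from the post: 150,000 visitors/day and roughly 2-second visits, plus an assumed 10x peak-to-average ratio that you should replace with a figure from your own access logs), concurrency is approximately requests per second times seconds per request:

```shell
# Back-of-envelope concurrency estimate; peak_factor is a guess, adjust
# it from your own logs.
visitors_per_day=150000
seconds_per_visit=2
peak_factor=10

avg_rps=$(awk -v v="$visitors_per_day" 'BEGIN { printf "%.2f", v / 86400 }')
peak_concurrency=$(awk -v v="$visitors_per_day" -v s="$seconds_per_visit" -v p="$peak_factor" \
  'BEGIN { printf "%.0f", v * s * p / 86400 }')
echo "average req/s: $avg_rps, estimated peak concurrency: $peak_concurrency"
```

Under these assumptions even the 150k/day target stays well under 100 concurrent connections at peak, so the bigger constraint is usually RAM per Apache child rather than the raw connection count.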
As the topic title suggests, does anybody have a rough estimate of how many concurrent users game websites such as WildTangent and InstantAction handle? Also, what kind of server configuration would be necessary to support that kind of load, assuming you have one server for the web and one for the database?
I am using lighttpd's mod_status page to estimate concurrent connections.
When I refresh the panel, it shows around 100-150 connections and about 150 requests/s over the last 5 seconds.
My vmstat output shows the CPU is 98% idle, and blocks written/read are negligible. The MySQL key_buffer is set to 2 GB, and I'm pretty sure MySQL isn't the problem: the overwhelming majority of requests never touch it.
EDIT: Uh oh, I just realized that tcp_mem could be a huge bottleneck.
I just set it to:
net.ipv4.tcp_mem = 4096000 87380000 4194304000
It was previously: net.ipv4.tcp_mem = somenumber somenumber 393216 <<<--- WTF!
I multiplied my previous values by 1000 (it's an access server only). I can't benchmark the server right now, so let me know if you have any suggestions besides this. I do think this was the problem: when under load, images could not be accessed either.
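One caveat worth checking: tcp_mem is measured in memory pages (typically 4 KB each), not bytes, so values this large almost certainly overshoot what the machine has. A quick sanity check on the "max" value:

```shell
# tcp_mem thresholds are counted in memory pages, not bytes.
page=$(getconf PAGESIZE)      # typically 4096 on x86
high=4194304000               # the "max" value from the sysctl above
# With 4 KB pages this works out to 16000 GiB, far beyond any real RAM.
echo "max threshold = $(( high * page / 1024 / 1024 / 1024 )) GiB"
# Compare with the TCP memory actually in use (reported in pages):
grep '^TCP' /proc/net/sockstat 2>/dev/null || true
```

Recent kernels auto-size tcp_mem from installed RAM at boot, so it's worth confirming the old value was actually the bottleneck before pinning a manual override.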
I'd like to create some service plans using the CLI tools (/usr/local/psa/bin/service_plan). I am able to create a service plan, but I'm unable to create one inside a reseller plan; for example, I cannot tell the service_plan script to add the created service plan to a reseller plan. Is it possible to create a service plan inside a reseller plan using the CLI?
On my server, users can connect to any database as long as they have that database's user and password. This makes it easier to attack any database on the server. What I want is to restrict users so they can only connect to their own databases and not anyone else's.
I tried changing the localhost IP address, but it didn't work (I assume I didn't do it the right way).
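A sketch of the usual MySQL-side fix: revoke any broad grants a user has picked up, then grant privileges only on that user's own database. The names appuser and appdb below are placeholders, not from your setup; the SQL is printed for review first and can then be piped to the mysql client as root.

```shell
# Placeholder user/database names; substitute your real ones per customer.
user=appuser
db=appdb
sql=$(cat <<SQL
REVOKE ALL PRIVILEGES, GRANT OPTION FROM '${user}'@'localhost';
GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${user}'@'localhost';
FLUSH PRIVILEGES;
SQL
)
printf '%s\n' "$sql"   # review, then: printf '%s\n' "$sql" | mysql -u root -p
```

With only a database-level grant in place, a login for appuser succeeds but `USE someoneelsesdb` is denied, which is the behavior you're after.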
I was looking at some load balancers hosting companies offer, and some of the specs say they can handle up to 15 million concurrent sessions (users online at the same time). Does this mean that if I had a site like Wikipedia with 15 million users online at once, I could serve it with only two dedicated servers, or would the CPUs not be enough?
I currently run the latest WHM/cPanel on a few boxes, and (at least to my knowledge) I have copied the exact compile settings between servers. Yet some of my servers allow concurrent file downloads (using DownThemAll or another "accelerator"), while others only let me make one connection. Those servers also don't show the final file size while downloading. So my question is: what did I set or unset to cause this? It's been bugging me for a month or so now, and no amount of recompiling has fixed it (aside from a clean install of the OS/cPanel). I can provide a list of modules and other config settings if required.
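For what it's worth, both symptoms (no segmented downloads and no visible file size) usually point to responses missing the Accept-Ranges and Content-Length headers, which accelerators and progress bars depend on; output compression (mod_deflate) is a common cause since it forces chunked encoding. A quick way to compare two servers is to look at the headers; the block below demonstrates against a sample response, with the live probe shown as a comment (the URL is a placeholder):

```shell
# Sample HTTP response headers for illustration; accelerators need both
# Accept-Ranges and Content-Length to split a download into segments.
headers='HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 104857600'
printf '%s\n' "$headers" | grep -iE 'accept-ranges|content-length'
# Against a live server:
#   curl -sI http://yourbox.example/file.zip | grep -iE 'accept-ranges|content-length'
```

Run the probe against a working box and a broken one; the server that withholds one of the two headers is the one whose config diverged.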
I am planning to get a Juniper firewall, but the SSG140 has a maximum of 48,000 concurrent sessions, so I need to figure out how to measure the concurrent sessions on a Linux server across all traffic, not just port 80.
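A firewall counts flows across all ports and protocols, so a port-80-only count will undersell your numbers. Assuming a reasonably modern Linux with iproute2, a starting point:

```shell
# Established TCP connections on all ports (drop the one-line header).
ss -t state established | tail -n +2 | wc -l
# If netfilter connection tracking is loaded, this is closer to what a
# firewall counts, since it includes UDP and other tracked flows too:
cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || true
```

Sample both at peak traffic; the conntrack figure, when available, is the number to compare against the SSG140's 48,000-session ceiling.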
I have noticed that the CPU on my VPS maxes out when there are many concurrent visitors to any of my 7 domains. Six of the domains run WordPress and one runs a phpBB forum. When I run top during high CPU I see multiple "php-cgi" processes:
Could everyone take a look at this (taken while all domains were turned off) and give me any advice on what is eating up resources so badly? The load average is at 1.3+ with only yum running on the box, and it has already been running for nearly two hours, where this would typically take 10-15 minutes at most.
Host: FutureHost | Type: VPS | DC: Dallas | Plan: VPS Elite with 1.5 GB of memory
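To put a number on those php-cgi processes, you can sum their CPU and memory shares with ps (this assumes a procps-style ps; the process name may differ on your build):

```shell
# Summarize all php-cgi workers: count, total %CPU, total %MEM.
ps -C php-cgi -o pcpu=,pmem= | awk '
  { cpu += $1; mem += $2 }
  END { printf "workers: %d  cpu: %.1f%%  mem: %.1f%%\n", NR, cpu, mem }'
```

If the worker count or memory total is large, capping the number of PHP FastCGI children per domain is usually the first lever to pull on a 1.5 GB VPS.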
I have a VPS, and I want to know whether the CPU / load average shown in my Plesk control panel reports only the usage of my share of the server's CPU, or the real/complete CPU usage for the entire server.
Put simply: if another user with a VPS on the same physical server is running a high-CPU task in their VPS, will it show up in my Plesk CPU load average?
I have a reseller account (didn't need it, but was talked into it originally). On average, how many domains should I expect to see hosted on one IP? Right now I show 466 for my IP. This is more out of curiosity at this point than anything.
I am considering adding offsite secondary backup servers in-house on a full T1 (multiple T1s? or a fractional T3?). The servers will be located in Ontario, Canada. Does anyone know the average, upper, and lower cost for a full or even partial T1? I am trying to go as cheap as possible, since the line probably won't be used unless the datacenter loses service.
If anyone has experience with T1s, I'd appreciate input on price, reliability, the best providers, etc.
I have to move about 50 GB each night from one server to another.
This is the command I'm using:
/bin/nice -n +19 scp -c blowfish -l 18000 -P 22222 root@XX.XX.XX.XX:/backup/dailybacks/*.tar.gz ./
There is no private LAN, so I have to go over the internet. I'm using blowfish, which should reduce CPU load, and I'm also limiting bandwidth with -l 18000. However, I still see some high load averages.
Do you have any other suggestions to reduce CPU load while running scp?
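One option, sketched under the assumption that rsync exists on both ends: rsync over SSH only transfers files that changed and can resume partial transfers, which matters for a nightly 50 GB job. Two caveats baked into the sketch: scp's -l is in Kbit/s while rsync's --bwlimit is in KiB/s (so 18000 Kbit/s becomes 2250 KiB/s), and modern OpenSSH has removed the blowfish cipher, so the example relies on the default ciphers instead (aes128-ctr and aes128-gcm are cheap on CPUs with AES-NI). The host and path are the placeholders from the post, so the command is printed for review rather than executed:

```shell
# scp -l takes Kbit/s; rsync --bwlimit takes KiB/s, so divide by 8.
scp_limit_kbit=18000
bwlimit_kib=$(( scp_limit_kbit / 8 ))   # 2250 KiB/s
cmd="/bin/nice -n 19 rsync -a --partial --bwlimit=${bwlimit_kib} -e 'ssh -p 22222' root@XX.XX.XX.XX:/backup/dailybacks/ ./"
printf '%s\n' "$cmd"   # review, then run it or drop it into cron
```

Skipping rsync's -z flag is deliberate here: the .tar.gz files are already compressed, and compressing them again only burns the CPU you're trying to save.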