Increasing APC Cache Memory Size
Dec 11, 2008. Right now APC shm_size is set to 30 and, since I am only using about 20% of the RAM in my VPS, I thought it would be best to increase it a bit. Now how do I go about doing that?
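For reference, apc.shm_size lives in APC's ini file; a minimal sketch, assuming a php.d layout (the path varies by distro, and the directive sometimes sits in php.ini itself), and assuming the "30" above means 30 MB:

```ini
; /etc/php.d/apc.ini  (path is an assumption)
; Older APC releases read this as an integer number of megabytes;
; newer ones also accept a suffixed value like "64M".
apc.shm_size = 64
```

Restart the web server (or the PHP FastCGI processes) afterward so the new shared-memory segment is allocated.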
How big should I set the cache size for eAccelerator?
Here's the information from control.php:
Caching enabled yes
Optimizer enabled yes
Memory usage 100.00% (16.00 MB / 16.00 MB)
Free memory 0.00MB
Cached scripts 148
Removed scripts 0
Cached keys 0
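The 16 MB shown above is eAccelerator's default, and at 100% usage it is clearly too small for 148 cached scripts. The size is set in its ini file; a sketch, assuming the directive lives in /etc/php.d/eaccelerator.ini (path varies):

```ini
; value is in megabytes; 32 is illustrative — size it to your free RAM
eaccelerator.shm_size = "32"
```

Restart Apache afterward, then watch the control.php memory-usage line again; if it pins at 100% once more, increase further.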
When free is run, what is the difference between buffered versus cached memory?
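Roughly: "buffers" holds block-device and filesystem metadata, while "cached" is the page cache holding file contents; both are reclaimed automatically the moment applications need the memory. A quick way to see them side by side (the drop_caches line is root-only and shown commented out):

```shell
# Show memory with buffers and cache broken out
free -m
# Buffers/cache are reclaimable; dropping them (root only) proves the point:
# sync && echo 3 > /proc/sys/vm/drop_caches && free -m
```

The "-/+ buffers/cache" row in the output is the number that matters for "real" free memory.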
My RAM is 2 GB.
The cache memory on my server keeps being eaten up. After two weeks, the cache memory is below 1 GB, and then the server crashed.
After I reboot the machine, the cache memory goes back to normal, but then it starts being eaten again.
I have attached the graph.
I am using lighttpd, PHP and MySQL.
I run a few export procedures which take a long time to process (lots of joined SQL) and generate quite large output. Plesk 11.0.x required some PHP/FastCGI adjustments, but I got it to work.
Now I have switched to Plesk 12 and have to set it up again.
I started with Plesk / PHP Settings for this domain:
memory_limit = 2G
max_execution_time = 600
max_input_time = 600
post_max_size = 2G
The default subscription has no settings for the above values, but I have added them for this domain, so I am not sure why it ignores my 2G value?
Error:
mod_fcgid: stderr: PHP Fatal error: Allowed memory size of 1047527424 bytes exhausted (tried to allocate 84 bytes) in
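A fatal error at roughly 1 GB while the panel says 2G usually means a different php.ini, or a php_admin_value in the web-server config, is winning over the per-domain setting. A sketch for hunting down every definition (the Plesk vhost path is an assumption):

```shell
# List every place memory_limit is defined for this server/domain
grep -Rns "memory_limit" /etc/php.ini /etc/php.d \
    /var/www/vhosts/system 2>/dev/null | sort
```

Whichever file the FastCGI handler loads last wins, so compare the hits against the "Loaded Configuration File" the domain's PHP handler reports.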
Does mod_cache use real disk space or real memory for its cache? I mean, if I use mod_cache, will my VPS/server use more disk space or more memory?
Quote:
mod_cache implements an RFC 2616 compliant HTTP content cache that can be used to cache either local or proxied content. mod_cache requires the services of one or more storage management modules. Two storage management modules are included in the base Apache distribution:
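It depends on which storage module you pick: mod_disk_cache keeps cached objects on disk, while mod_mem_cache (dropped in later Apache versions) keeps them in the httpd process's memory. A disk-backed sketch for Apache 2.2 (module and cache paths are illustrative):

```apacheconf
LoadModule cache_module modules/mod_cache.so
LoadModule disk_cache_module modules/mod_disk_cache.so

<IfModule mod_disk_cache.c>
    CacheEnable disk /
    CacheRoot   /var/cache/apache2/disk_cache
    CacheDirLevels 2
    CacheDirLength 1
</IfModule>
```

With this setup the cost is disk space under CacheRoot (plus whatever the OS page cache happens to hold), not dedicated RAM.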
One of our resellers has an account. When looking in cPanel, it says that the account is using 3,300 MB. When we go into the FTP of that account, in reality it is only using 1.3 MB. This is a huge difference! Most of the folders are empty.
We are using the latest version of WHM and cPanel.
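cPanel counts more than what you see over FTP: mail, MySQL data, and backups all accrue to the account. A sketch for finding where the space actually is (shown against $HOME here; on the server, point it at the account's home directory, e.g. /home/username):

```shell
# Per-top-level-directory usage, largest last (sizes in KB)
du -xk --max-depth=1 "$HOME" 2>/dev/null | sort -n | tail -5
```

Comparing the du total with what the quota system reports usually pinpoints whether it is mail, databases, or stale quota accounting.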
I use Apache on a CentOS VPS for my blog. I host only one blog on this VPS account. I have 1.5 GB RAM and about 7,500 page views per day. My page loading time is 2-3 seconds (according to the Pingdom tool).
I want to know which W3 Total Cache options give the best performance (faster page loading) for a VPS-hosted blog. Currently I use Disk: Enhanced for the page cache and disk for the database cache.
Is there any problem with increasing max_user_connections (for example, to 600)?
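Raising it is generally safe as long as the server can back the connections with RAM, since each connection carries a thread stack and per-thread buffers, and max_user_connections is still capped by max_connections. A my.cnf sketch (values illustrative):

```ini
# /etc/my.cnf
[mysqld]
max_user_connections = 600
max_connections      = 700
```

It can also be changed at runtime with `SET GLOBAL max_user_connections = 600;`, though that does not survive a restart.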
I have bought hosting from a new hosting company.
I have total 5GB space wherein I can host unlimited domains.
When I uploaded my files (for 4 sites) I saw that I had used 16 MB in total. But today when I logged in, I noticed that cPanel shows I have used 22.75 MB in total.
But I didn't upload anything that would take up to 6 MB of space.
I just uploaded a few images that would take at most 100 KB.
My hosting provider said this:
"The extra disk space could be email messages stored on the server, mysql database files, etc... All these things are accounted for within your disk space."
But I replied that I don't use any database and I don't have a single email in my account, so how can that space be consumed? I am waiting for their further reply.
I'm trying to start a website on a shoestring budget, but my programmer and host want to squeeze more out of us. We currently have a custom VPS with cPanel and WHM. Every time I try to upload a file of more than 10 MB, it will not go through. My programmer told me to ask the host to increase the capacity; they tell me it will cost me, and my programmer says the same thing. Now, if my programmer can do it, I assume it can be done through cPanel. Is there any way I can do it myself, so that, for example, a file of, say, 25 MB will get uploaded? I have access to my cPanel.
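If the 10 MB cap comes from PHP rather than from the host, these are the php.ini directives usually involved; on shared/suPHP setups they can often be overridden in a local php.ini without the host's help (sizes are illustrative):

```ini
upload_max_filesize = 25M
; post_max_size must be at least as large as upload_max_filesize
post_max_size = 30M
max_execution_time = 300
```

If the limit is instead enforced by mod_security, LimitRequestBody, or a front-end proxy, only someone with root access can change it.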
Apache keeps increasing its process count, and when it reaches the 256 limit, it crashes. These are the values from our conf file. This doesn't happen every day; it occurs once or twice a week at random times.
Timeout 120
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule prefork.c>
StartServers 8
MinSpareServers 5
MaxSpareServers 20
ServerLimit 256
MaxClients 256
MaxRequestsPerChild 4000
</IfModule>
I enabled server-status and found that requests for index.html are holding the processes. The only thing I see in the access logs is: "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"
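The "internal dummy connection" entries are normal: the parent process pokes its children over the loopback interface to wake them during graceful restarts and maintenance, so they are unlikely to be the cause by themselves. If the log noise bothers you, a common sketch is to exclude loopback hits from the access log:

```apacheconf
SetEnvIf Remote_Addr "127\.0\.0\.1" loopback
CustomLog logs/access_log combined env=!loopback
```

With the noise gone, server-status (with ExtendedStatus On) shows more clearly which real requests are pinning the workers.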
I have had over 4 different VPS providers, and with all of them I was always using less than 5 GB, most of it from the OS and the VPS files; my website is less than 400 MB in size including the backups. For some reason, my disk quota is at 8.3 GB, and it was at 8.2 GB yesterday. My website's files are not that big, the database is about 30 MB, and the backups are compressed, so they are about 4 MB. The problem is that I really don't know which files are making my disk quota go up, since I haven't uploaded or downloaded anything to my VPS.
I issued a self-signed certificate for Exim by running the following commands:
Code:
openssl req -new -x509 -keyout /etc/exim.key.tmp -out /etc/exim.cert
openssl rsa -in /etc/exim.key.tmp -out /etc/exim.key
rm -f /etc/exim.key.tmp
service exim restart
The problem is that the certificate expires after one month. I have already edited my /usr/share/ssl/openssl.cnf with the following lines in order to increase the expiration time to one year. It worked for the SSL certificate issued for Apache, but it has no effect on the Exim certificate.
Code:
default_days = 365 # how long to certify for
default_crl_days= 365 # how long before next CRL
Should I edit another file or set it somewhere else? Does anyone know?
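One thing worth checking: `openssl req -x509` takes its lifetime from the `-days` flag (which defaults to 30 days) and, as far as I know, ignores `default_days`, which belongs to `openssl ca`. A sketch of the same procedure with an explicit one-year lifetime, run here against a scratch directory (substitute /etc/exim.cert and /etc/exim.key, then restart Exim, on the real server):

```shell
dir=$(mktemp -d)
# -days 365 is the key change; -nodes and -subj just make the sketch non-interactive
openssl req -new -x509 -days 365 -nodes -subj "/CN=mail.example.com" \
    -keyout "$dir/exim.key" -out "$dir/exim.cert"
# Confirm the expiry date is a year out
openssl x509 -in "$dir/exim.cert" -noout -enddate
```

That would also explain why the Apache certificate (presumably signed via `openssl ca`) honoured your openssl.cnf change while the Exim one did not.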
floodkoruma is a script which secures our servers against SYN floods. But I can't understand why we lost our connection to the server at that point. The last screen is below.
last log messages
Jul 7 19:41:27 server filelimits: Increasing file system limits succeeded
Jul 7 19:42:25 server kernel: printk: 234 messages suppressed.
Jul 7 19:42:30 server kernel: printk: 1026977 messages suppressed.
Jul 7 19:49:42 server syslogd 1.4.1: restart.
Jul 7 19:49:42 server syslog: syslogd startup succeeded
As you can see, I rebooted the server via APC. But before that:
Jul 7 19:42:30 server kernel: printk: 1026977 messages suppressed.
The last packet successfully received from the server was 60.410.682 milliseconds ago. The last packet sent successfully to the server was 60.410.687 milliseconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.
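For reference, the server-side knob the error refers to lives in my.cnf; 28800 seconds (8 hours) is MySQL's default, while the gap in the message above is roughly 16.8 hours of idle time. A sketch (values illustrative; validating connections in the pool before use is usually preferable to autoReconnect=true):

```ini
# /etc/my.cnf
[mysqld]
wait_timeout        = 28800
interactive_timeout = 28800
```

Raising wait_timeout only papers over the symptom; the cleaner fix is a connection pool that tests or expires idle connections.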
I have a VPS, and I have had this issue both when it had 1 GB and now that I have downgraded it to 768 MB, because I am moving some sites to a dedicated server.
However, the part I am having trouble grasping is that when I look at graphs from Munin, it will typically always show 200-400MB free memory (and free -m and top agrees with munin), but Munin shows 'committed' memory that is above the total Ram on the VPS and once the 'committed' ram exceeds the VPS limit, processes start failing.
So, why is 'committed' memory exceeding the RAM on my VPS, when Munin, free -m and top all show there is free memory available?
Code:
root@server [~]# free -m
                   total       used       free     shared    buffers     cached
Mem:                 768        449        318          0          0          0
-/+ buffers/cache:              449        318
Swap:                  0          0          0
Here's a graph that munin produces that shows the 'committed' memory exceeding the total memory.
[url]
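'Committed' memory is address space the kernel has promised to processes, not pages actually in use, so with overcommit it can legitimately exceed physical RAM. On a VPS, however, the container limit is often enforced against that commitment rather than against real usage, which would explain allocations failing while free still shows headroom. The relevant counters (on an OpenVZ VPS, /proc/user_beancounters additionally shows the container's own limits and failcnt):

```shell
# Committed_AS is what Munin graphs as 'committed'
grep -E '^(MemTotal|Committed_AS|CommitLimit)' /proc/meminfo
```

If failures line up with Committed_AS crossing the VPS allocation, the fix is either a larger plan or trimming per-process virtual-memory footprints (e.g. fewer Apache children, smaller stacks).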
I just got a new server, a dual E5520 with 6 GB RAM and 15k RPM SAS drives in RAID 10. It's running well. However, memory usage stays around 2.5 GB, even when I have more traffic. Here is the kernel info:
Quote:
# uname -a
Linux server2.[url]2.6.18-128.1.10.el5 #1 SMP Thu May 7 10:35:59 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux
Any idea how we can put more content into memory?
I look after a number of sites and monitor their stats. We use both Webalizer and AWStats so we can compare them. Until recently the stats from both were relatively the same; Webalizer usually showed higher numbers, as it doesn't filter bots, but the increases and decreases tracked proportionally. However, over the last few months a curious trend has appeared: the stats in AWStats are decreasing every month whereas the stats in Webalizer are increasing, and the gap between them is now huge.
I moved my domain away from HostGator about a month ago.
[url]
The WHOIS shows my new nameservers and IP.
Why is my page being redirected to the HostGator suspended page?
My domain is not even registered with them.
The domain is nuzil.com.
Any reviews about these?
The NOCONA and IRWINDALE are old CPUs.
I find the main difference between them is the L2 cache (1 MB vs 2 MB).
I want to ask: which services need more L2 cache?
For example, heavy DB usage? Or httpd? Or something else?
The Linux server goes down when MaxClients 256 is reached.
Error Log:
"server reached MaxClients setting, consider raising the MaxClients setting"
So I tried to increase the MaxClients value to 500. After changing the value in httpd.conf and restarting, I get the following error message:
" [notice] SIGHUP received. Attempting to restart
WARNING: MaxClients of 500 exceeds ServerLimit value of 256 servers,
lowering MaxClients to 256. To increase, please see the ServerLimit
directive."
So I tried to change the ServerLimit in the /usr/local/apache/include/httpd.h header file, but it seems there is no such entry.
Apache Version : 2.2.8
So I added a ServerLimit 500 entry in the httpd.conf file and restarted the httpd service, but it still shows the same warning message. Please help me with this.
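ServerLimit is a startup-only directive: with prefork it defaults to 256, it belongs in httpd.conf (not httpd.h), and changes to it are ignored by a graceful restart or SIGHUP, so a full stop and start is needed. A sketch:

```apacheconf
<IfModule prefork.c>
    ServerLimit 500
    MaxClients  500
</IfModule>
```

Then run `apachectl stop` followed by `apachectl start` rather than `apachectl restart`, and the warning should go away.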
We have a dedicated server for a Flash game server with the following configuration:
RHEL4 OS
2GB RAM
Intel(R) Xeon(R) X3210 @ 2.13GHz
Cpanel Installed.
Apache 2.2.8
PHP 5.2.4
MySQL 4.1.2 (the MySQL server is running on a different server)
I guess I have finally seen the adverse effects of raising the conntrack table maximum too high.
May 15 09:13:52 cp4 kernel: [6430723.486626] dst cache overflow
May 15 09:13:52 cp4 kernel: [6430723.622616] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.562862] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.698868] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.844221] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.991276] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430728.131962] dst cache overflow
I got tons of these during an attack today. I have googled around for a little while and have not been able to find any useful info on raising this cache limit. Would anyone here know how to do this?
I see no sysctl settings or anything of that nature for it.
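"dst cache overflow" is the IPv4 routing cache hitting its ceiling, which is common under spoofed-source floods since each source address creates an entry. On the 2.6-era kernels in question the knobs live under net.ipv4.route (note: the route cache was removed entirely in kernel 3.6, so none of this applies to modern kernels). A sketch for /etc/sysctl.conf, applied with `sysctl -p` (values illustrative, not recommendations):

```ini
# /etc/sysctl.conf
net.ipv4.route.max_size  = 1048576
net.ipv4.route.gc_thresh = 131072
```

You can also check the current values directly under /proc/sys/net/ipv4/route/ if `sysctl -a` doesn't list them.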
I'm running shared hosting and would like to keep the amount of memory kept in cache down so that there is always more memory free. How would I go about doing that?
Are these values good?
echo 20 > /proc/sys/vm/dirty_background_ratio
echo 60 > /proc/sys/vm/dirty_ratio
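A caveat: dirty_background_ratio and dirty_ratio control when dirty pages are written back, not how much clean page cache is kept, and the page cache is dropped automatically the moment applications need the memory, so "free" cache costs you nothing. If you still want the kernel to reclaim cached metadata more eagerly, the usual knob is vfs_cache_pressure (value illustrative):

```ini
# /etc/sysctl.conf — values above 100 reclaim dentry/inode caches more aggressively
vm.vfs_cache_pressure = 150
```

In practice, chasing a low cache number usually just costs you disk I/O with no gain in application headroom.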
I have a reseller account with a small web service company.
They are great, and better than the famous companies.
But I have one problem. I have a personal blog, and sometimes I don't see new comments; my visitors only see comments from four days ago.
There is also a vBulletin forum; sometimes new members can't log in and you only see old topics, and sometimes you see everything OK and up to date.
My visitors and I all have the same problem, and it can't be the internet service providers, because they are in several different countries.
I had a problem like this four years ago, and it was because of server cache.
I didn't name the company because they are great and I don't want to blame them before I know for certain what is causing the problem.
What tools do you use to check for DNS cache poisoning? Is there any way it can be prevented, and is the problem very prevalent?
I seem to have the opposite problem to what most people complain about... I'm using some custom-built PHP scripts whose output is not getting cached. I want the output cached, because it doesn't change often.
If it's relevant, I'm using ob_start() to serve up a GZIP-compressed page.
I start off with a header("Cache-Control: maxage=3600, must-revalidate"). Yes, it's first, and yes, it's showing up properly in the browser.
However, requesting the page again returns an HTTP 200, not the 304 I'm expecting. It's pulling down the whole page again. It's not changing in between requests, and I'm simply visiting the URL again, not hitting Refresh. (Although it really shouldn't matter.)
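Two things worth noting, offered as guesses since I can't see the full script: the standard Cache-Control token is `max-age` (with a hyphen), not `maxage`; and Cache-Control alone never produces a 304. The browser needs a validator (Last-Modified or ETag) to revalidate against, and since PHP output bypasses Apache's static-file handling, the script has to answer the conditional request itself. A minimal sketch:

```php
<?php
// Assumption: the page only changes when this script file changes.
$lastModified = filemtime(__FILE__);

header('Cache-Control: max-age=3600, must-revalidate');
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

// Answer the conditional request with 304 and no body when nothing changed.
if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) &&
    strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']) >= $lastModified) {
    header('HTTP/1.1 304 Not Modified');
    exit;
}

ob_start('ob_gzhandler'); // gzip the full response, as before
// ... page output ...
```

With a validator in place, the second visit sends If-Modified-Since and should get the 304 you're expecting.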