I have Django (the Python framework) on a server, and I have a little problem: the application is kept in cache by FastCGI.
When you make changes to your application you have to restart it, but touching the file doesn't solve my problem. The only workaround I have is renaming the .fcgi file every time; if I switch back to the original file name, it behaves exactly as before, proving it's kept in cache.
What would you do? A cron job to remove these files maybe?
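For reference, a common alternative to renaming is simply killing the running FastCGI worker processes so the web server respawns them with the new code on the next request. A minimal sketch, assuming your dispatcher is named mysite.fcgi (a placeholder for your actual file):

```shell
#!/bin/sh
# Restart the FastCGI application by killing its worker processes;
# the web server respawns them on the next request, loading the new code.
# "mysite.fcgi" is a placeholder for your actual dispatcher script name.
pkill -f mysite.fcgi
```

This could be run from a deploy hook rather than a cron job, so a restart happens exactly when the code changes.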
I use Apache on a CentOS VPS for my blog; I host only one blog on this VPS account. I have 1.5 GB of RAM and about 7,500 page views per day. My page loading time is 2-3 seconds (according to the Pingdom tool).
I want to know which W3 Total Cache options give the best performance (faster page loading) for a blog on VPS hosting. Currently I use "Disk: Enhanced" for the page cache and "Disk" for the database cache.
It's causing a big problem on my friend's site; it often leads to 500, 501 and 502 error pages. Earlier someone mentioned that FastCGI itself could be the cause of the error pages, since it may return one if a PHP file takes too long to load.
So I would like to turn off FastCGI on his site. He uses HyperVM/lxadmin, and I couldn't find an option when I searched; maybe I missed it. Can you guide me through how to solve this issue?
I would also like to know whether turning off FastCGI will cause any problems for the site, such as technical issues that would then need to be fixed.
I've been using Apache for a long time. I switched to FastCGI, went to check the load, and I see that some sites are using something like 10-15% of the CPU (dual quad-core Harpertown) for 10 seconds or so; a WordPress blog doing 700 MB of traffic per month, for example.
Is this normal?
I wasn't able to see that in the past, so I really didn't know about it.
I'd like to run PHP per user using spawn-fcgi (I want each user to have their own PID file for their PHP process). Is it possible to make this work? I think the way to do it is to add something to each user's VirtualHost file, something like: spawn-fcgi -f /usr/local/php4/bin/php -P /tmp/$user.pid -s /tmp/$user.sock
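As a sketch of that idea, a per-user launcher script could look like the following. The paths and the -u/-g ownership flags are assumptions (spawn-fcgi needs to run as root to switch users); adjust for your setup:

```shell
#!/bin/sh
# Launch one PHP FastCGI process per user, each with its own PID file
# and socket. The username is passed as the first argument.
user="$1"
spawn-fcgi -f /usr/local/php4/bin/php \
    -P "/tmp/${user}.pid" \
    -s "/tmp/${user}.sock" \
    -u "$user" -g "$user"
```

Each VirtualHost would then point its FastCGI handler at the matching /tmp/$user.sock socket.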
I have php running as fastcgi for nginx and I'm trying to setup monit to monitor the php process and restart it when it crashes. The problem is that I can't seem to figure out which pidfile I should have monit look at.
Here's part of my script that starts the spawn-fcgi process:
## ABSOLUTE path to the spawn-fcgi binary
SPAWNFCGI="/usr/local/bin/spawn-fcgi"
## ABSOLUTE path to the PHP binary
FCGIPROGRAM="/etc/lighttpd/php/bin/php"
FCGIPID="/var/run/php-fcgi.pid"
## TCP port to which to bind on localhost
FCGIPORT="1026"
## number of PHP children to spawn
PHP_FCGI_CHILDREN=8
## maximum number of requests a single PHP process can serve before it is restarted
PHP_FCGI_MAX_REQUESTS=1500
I added the FCGIPID variable (I saw it in an example), but it doesn't seem to do anything. I tried creating a PID file manually, but the PID in it doesn't get updated when I start/stop the script.
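Worth noting: the FCGIPID variable does nothing on its own; spawn-fcgi only writes a PID file when the path is passed to it via its -P option. A hedged sketch of what the launch line could look like, reusing the variables above:

```shell
# Have spawn-fcgi itself write the PID file so monit can watch it;
# the PHP_* variables are read by PHP's FastCGI SAPI from the environment.
PHP_FCGI_CHILDREN=8 PHP_FCGI_MAX_REQUESTS=1500 \
    "$SPAWNFCGI" -f "$FCGIPROGRAM" -a 127.0.0.1 -p "$FCGIPORT" -P "$FCGIPID"
```

Monit can then be pointed at that file with a stanza along the lines of `check process php-fcgi with pidfile "/var/run/php-fcgi.pid"`, with your start/stop script as the start and stop programs.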
I guess I have finally seen the adverse effects of raising the conntrack table max too high.
May 15 09:13:52 cp4 kernel: [6430723.486626] dst cache overflow
May 15 09:13:52 cp4 kernel: [6430723.622616] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.562862] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.698868] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.844221] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430727.991276] dst cache overflow
May 15 09:13:56 cp4 kernel: [6430728.131962] dst cache overflow
I got tons of these during an attack today. I have googled around for a while and haven't been able to find any useful info on raising this cache limit. Would anyone here know how to do this?
I see no sysctl settings or anything of that nature for it.
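For what it's worth, "dst cache overflow" refers to the IPv4 routing cache hitting its limit, which is separate from conntrack; on older kernels (the routing cache was removed in 3.6) its knobs live under net.ipv4.route. A sketch, with values that are purely illustrative:

```shell
# Inspect the current routing-cache limit and GC threshold:
sysctl net.ipv4.route.max_size
sysctl net.ipv4.route.gc_thresh
# Raise them (requires root; add to /etc/sysctl.conf to persist):
sysctl -w net.ipv4.route.gc_thresh=65536
sysctl -w net.ipv4.route.max_size=524288
```

During an attack the cache fills with one entry per source address, so raising the limit buys headroom but doesn't address the flood itself.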
I'm running shared hosting and would like to keep the amount of memory kept in cache down so that there is always more memory free. How would I go about doing that?
I seem to have the opposite problem of what most people complain about... I'm using some custom-built PHP scripts, the output of which is not getting cached. I want the output cached, because it doesn't change often.
If it's relevant, I'm using ob_start() to serve up a GZIP-compressed page.
I start off with a header("Cache-Control: maxage=3600, must-revalidate"). Yes, it's first, and yes, it's showing up properly in the browser.
However, requesting the page again returns an HTTP 200, not the 304 I'm expecting. It's pulling down the whole page again. It's not changing in between requests, and I'm simply visiting the URL again, not hitting Refresh. (Although it really shouldn't matter.)
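Two general points of HTTP behaviour are worth checking here: the directive is spelled max-age (with a hyphen; "maxage" is not a recognized directive), and Cache-Control alone never produces a 304. The script must also emit a validator (Last-Modified or ETag) and answer If-Modified-Since / If-None-Match requests itself, which PHP does not do automatically. A quick command-line check, with the URL as a placeholder:

```shell
# First request: note the Last-Modified / ETag headers, if any are sent.
curl -sI http://example.com/page.php
# Replay with a validator; a correctly behaving script answers 304.
curl -sI -H 'If-Modified-Since: Mon, 01 Jan 2024 00:00:00 GMT' \
    http://example.com/page.php
```

If the first response carries no validator at all, the browser has nothing to revalidate with, and a fresh 200 on every visit is the expected outcome.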
I made changes in httpd.conf to redirect my website to another website; after 15 minutes I removed the redirect, but clients requesting the website are still being redirected. I'm sure I removed the redirect.
We are located in the UAE, and the UAE has a transparent proxy on all Internet connections, so I think the problem is in the proxy cache. How can I confirm that? And can I avoid it?
Also, when I put a dot "." at the end of the hostname in the link, the site works without the redirect; otherwise it doesn't.
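One way to confirm a transparent proxy is serving the stale redirect is to compare a normal fetch with one carrying cache-bypass request headers, and to look for proxy fingerprints such as Via, X-Cache or Age response headers (the URL is a placeholder):

```shell
# A cached answer often carries Via/X-Cache/Age headers added by the proxy:
curl -sI http://example.com/
# Ask intermediaries to revalidate; if the redirect disappears here,
# the stale copy lives in the proxy cache, not on your server:
curl -sI -H 'Cache-Control: no-cache' -H 'Pragma: no-cache' http://example.com/
```

The trailing-dot trick fits this theory: "example.com." is a different hostname, and therefore a different cache key, as far as the proxy is concerned.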
I'm assuming a corporate proxy cache is what they have set up. I have a client, and every time I send them changes to a temporary page I'm hosting for review, they can't see them.
They can hit refresh over and over but never see the new updates unless I change the name of the folder it's in.
This is very annoying, and it only happens with them and one other corporate client of mine. They check on multiple computers and it will never refresh and load the new changes. I think it's a network cache that their IT department set up.
How can I get around this? I tried an .htaccess trick I looked up for expiring files, but it didn't work.
The files are on a shared hosting account of mine, on an Apache server.
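For reference, a .htaccess sketch that marks the review pages uncacheable, assuming the shared host has mod_headers enabled (the directives are standard Apache, though whether an aggressive corporate proxy honours them is another matter):

```apache
<IfModule mod_headers.c>
    Header set Cache-Control "no-cache, no-store, must-revalidate"
    Header set Expires "0"
    Header set Pragma "no-cache"
</IfModule>
```

If the proxy ignores even these, appending a changing query string to the review URL (a manual cache-buster) forces a different cache key, much like renaming the folder does.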
root@host# free
             total       used       free     shared    buffers     cached
Mem:       4016936    2598976    1417960          0     138424    1558652
-/+ buffers/cache:     901900    3115036
Swap:      5275640          0    5275640
Eventually the cache reaches 2600000, and I would like to keep the cache smaller so that the free RAM always stays steady at around 500k for when a lot of traffic comes through.
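As background: the "cached" column is page cache that the kernel reclaims automatically the moment applications need the memory, so a large value there is normal and harmless; the real headroom is the 3115036 on the -/+ buffers/cache line. If you still want to bias the kernel against holding cache, a sketch (values illustrative; drop_caches discards only clean cache and is rarely worth doing on a busy box):

```shell
# Make the kernel reclaim dentry/inode cache more aggressively
# than the default of 100:
sysctl -w vm.vfs_cache_pressure=200
# One-off: flush clean page cache, dentries and inodes (needs root):
sync && echo 3 > /proc/sys/vm/drop_caches
```

Note that an artificially emptied cache just means disk reads that would have been served from RAM, so this usually makes traffic spikes slower, not faster.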
We just upgraded our server with 8 brand-new Seagate Cheetah 15K.5 drives, a battery backup unit, and a 256 MB DIMM for the RAID controller. During the boot process, I noticed an error about caching or something.
After analyzing the dmesg log, I found the error:
sda: asking for cache data failed
sda: assuming drive cache: write through
It seems like the kernel can't query the RAID controller's cache, so it falls back to the write-through setting.
I've benchmarked the hard disks with both the write-through and the write-back setting. The odd thing is that both settings deliver the same performance.
Normally, write-back increases performance by something like 100%... that's why we bought the battery backup unit.
So something is going wrong, but where does the problem lie?
Server:
Quote:
8 × Seagate Cheetah 15K.5, U320, 16 MB cache, SCA, 73 GB
1 × Chenbro backplane, U320, SCA, 2 channels, 8 ports
1 × LSI MegaRAID 320-2X RAID controller, U320, 2 channels, battery pack and upgraded 256 MB DIMM
6 GB DDR PC3200, ECC, CL3
2 × AMD Opteron dual-core (4 × 2.0 GHz)
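On the write-cache question above: that kernel message concerns the SCSI caching mode page of the exported logical drive, which many RAID controllers simply don't answer, so it is usually harmless. The controller's own write-back cache, the one protected by the BBU, is configured in the controller firmware (or its management tool) and is unaffected by what the kernel assumes about the "drive" cache. To see what the logical drive reports, one option, assuming sdparm is installed, is:

```shell
# Query the write-cache-enable (WCE) bit of the caching mode page on the
# exported logical disk; controllers that don't implement the page return
# an error here, matching the dmesg complaint.
sdparm --get=WCE /dev/sda
```

If benchmarks look identical either way, the thing to verify is the logical drive's cache policy (write-back vs. write-through) in the MegaRAID configuration itself, since that is where the 256 MB cache and BBU actually come into play.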