Squid Frontend Gives 0 Network Output For Seconds At A Time
Mar 27, 2009
I have an Apache server on Windows which I wanted to speed up using caching systems.
I tried installing squid, and I got it working with a basic example config. It seemed to work well; however, under heavier load I experienced some weird behaviour where network output is 0 for several seconds at a time, and all clients just hang and wait for a response.
Remote access to the server works fine, so it's definitely a squid issue. With only Apache running, the server is constantly sending out data, no halts there.
Any hints on what errors I should be looking for?
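One hedged way to see what squid is doing during the stalls, assuming a stock squid.conf: raise the verbosity of cache.log for the client-side code and rule out slow DNS, which can stall every request it touches.
Code:
# squid.conf - log section 33 (client-side socket handling) in more detail
debug_options ALL,1 33,2
# fail DNS lookups fast instead of hanging requests on a slow resolver
dns_timeout 5 seconds
Watching cache.log (and Task Manager's per-process I/O) while a stall happens should show whether squid is waiting on DNS, on disk, or on Apache.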
View 1 Replies
Jun 7, 2015
I have installed NGINX + PHP-FPM + PHP5Memcached + Memcached + APC on my VServer. My problem is a page loading time of around 40-60 seconds. Plesk 12.0.18 Update #49 is installed.
[URL]
Connection Setup:
Stalled: 1.860 ms
Request/Response:
Request sent: 0.214 ms
Waiting (TTFB): 47.08 s
Content Download: 41.757 ms
Total: 47.12 s
I have reconfigured the domains, but nothing changed; I also ran a bootstrapper repair:
Code:
/usr/local/psa/admin/bin/httpdmng --reconfigure-all
/usr/local/psa/bootstrapper/pp12.0.18-bootstrapper/bootstrapper.sh repair
If I disable Nginx and use "Apache Module" for PHP, then the TTFB is shorter, but that is not the preferred way. If I use "CGI" or "FastCGI" without Nginx, then the page loading time is a little shorter, but with errors on the website.
I tried to reinstall Nginx from Plesk, but this gave me an error after enabling it:
root@vserver:/# /usr/local/psa/admin/sbin/nginxmng -e
[2015-06-08 00:26:12] ERR [util_exec] proc_close() failed
...
[2015-06-08 00:26:13] ERR [util_exec] proc_close() failed
[2015-06-08 00:26:15] ERR [panel] Apache config (14337159720.96731500) generation failed: Template_Exception: Destination directory '/etc/nginx/plesk.conf.d/vhosts' not exist
file: /opt/psa/admin/plib/Template/Writer/Webserver/Abstract.php
line: 75
code: 0
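A common cause of this particular Template_Exception, judging only from the message, is that the directory the template writer wants to write into was removed along with nginx; recreating it by hand before re-enabling is a hedged first attempt:
Code:
mkdir -p /etc/nginx/plesk.conf.d/vhosts
/usr/local/psa/admin/sbin/nginxmng -e
/usr/local/psa/admin/bin/httpdmng --reconfigure-all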
View 3 Replies
View Related
Feb 5, 2007
While I am restoring a database (110MB) via SSH, the following error occurs:
Code:
ERROR 1064 (42000) at line 145689: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '
Fatal error: Maximum execution time of 30 seconds exceeded in ' at line 1
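The "Fatal error: Maximum execution time of 30 seconds exceeded" text inside a SQL syntax error suggests the dump itself is corrupt: it was apparently exported through a PHP script that timed out and wrote its error message into the file. A hedged way to confirm and work around it (file and credential names are placeholders):
Code:
# look at the line the restore chokes on - a PHP error message here confirms a bad dump
sed -n '145689p' backup.sql
# re-export from the source server with the command-line tools instead of a web exporter
mysqldump -u dbuser -p --opt dbname > backup.sql
mysql -u dbuser -p dbname < backup.sql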
View 12 Replies
View Related
Jun 25, 2009
I want to count the traffic for every IP passing through the squid proxy server.
Is it possible to record the traffic numbers for every IP in an external .txt file?
It would be better if it could record both outgoing and incoming traffic.
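Squid's native access.log already carries what's needed for the outgoing side: field 3 is the client IP and field 5 the bytes delivered to it. A minimal sketch, assuming the default log format and path; bytes the client sent (request bodies) are not in this log, so true incoming traffic would need iptables accounting instead:
Code:
# sum bytes served per client IP and write the totals to a text file
awk '{bytes[$3] += $5} END {for (ip in bytes) print ip, bytes[ip]}' \
    /var/log/squid/access.log > /root/traffic-per-ip.txt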
View 7 Replies
View Related
Sep 25, 2008
I can't get access to a certain site. I always get the page with:
network time out - the server at *** takes too long to respond. More people have noticed this, and apparently it only happens to people with certain specific providers, and not all the time. Sometimes they DO get access even though they belong to the same ISP. So I guess an ISP isn't blocking access to it, otherwise it would be permanent. The site administrator insists that certain ISPs are blocking his site. He's hosting it on his own server. The domain is registered at namecheap.com.
If an ISP is blocking this site (if that's possible?), would that lead to that 'network timeout' page?
What is the most likely reason for getting a timeout page anyway?
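The timeout page only means no response arrived in time; it doesn't say which leg failed. A few hedged checks from an affected connection narrow it down (hostname is a placeholder):
Code:
# a DNS failure gives a different error than a timeout, so rule it out first
dig www.example.com
# does TCP connect, and where does the wait happen?
curl -v -o /dev/null --max-time 30 http://www.example.com/
# if packets die mid-path, the last responding hop hints at who drops them
traceroute www.example.com
Selective, intermittent filtering by an ISP is possible (transparent proxies, filters), and it would indeed surface as a timeout rather than an explicit block page.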
View 7 Replies
View Related
May 2, 2009
from: domaintools.com
Host IP Address Ping Time
1. 206.71.148.249 Timed Out
2. 206.71.148.249 126.20 ms
3. 206.71.148.249 82.35 ms
4. 206.71.148.249 Timed Out
5. 206.71.148.249 87.01 ms
6. 206.71.148.249 Timed Out
7. 206.71.148.249 Timed Out
Host: Enotch Networks
Is this issue from the network (Enotch) or from my ISP?
The Enotch staff said it's my ISP's problem!
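Intermittent "Timed Out" lines on an otherwise responsive host usually mean packet loss somewhere along the path. A hedged way to see where the loss starts is to run mtr both from your own connection and from a host on another network:
Code:
# continuous traceroute with per-hop loss statistics
mtr --report --report-cycles 100 206.71.148.249
If the loss begins at a hop inside your ISP, it is your ISP; if it only appears at Enotch's edge, their staff's answer doesn't hold.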
View 6 Replies
View Related
Dec 10, 2007
I run a rather busy forum. Recently we've finally moved onto a dual server setup. Our setup consists of this:
Files/Http:
C2D 6400
2GB RAM
2x 80GB HDDs in RAID 1
Lighttpd/Xcache/php 5.2.5
DB Server:
Xeon 5360 (2.4Ghz 4MB Cache)
2GB RAM (Upgrade to 4GB coming this week... if we can get these issues sorted anyway).
2x 36GB Raptors in RAID.
MySQL 5.0.x
=====
Now, when there are fewer than 1,000 users online on my board, the servers run fine. The front-end server has a load of <2.0 and the DB server goes no higher than 4-5. As soon as there's slightly more load placed on them, they seem to choke: it appears that no MySQL data is transferred between the servers for a couple of seconds before things go back to normal. The servers are connected directly to each other and have 100Mbps bandwidth between the two over internal IPs. At most there are probably 60 pages being served a second, on average perhaps 25/second.
Could it be a problem with my MySQL settings? Here's my.cnf
Code:
[mysqld]
safe-show-database
old_passwords
back_log = 75
max_connections = 650
key_buffer = 256M
myisam_sort_buffer_size = 64M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 4000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
net_buffer_length = 16384
max_connect_errors = 10
thread_concurrency = 16
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 4M
query_cache_size = 64M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65536
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
default-storage-engine = MyISAM
innodb_data_file_path = ibdata1:20M:autoextend
innodb_buffer_pool_size=256M
innodb_additional_mem_pool_size=20M
[mysqld_safe]
nice = -10
open_files_limit = 8192
[mysqldump]
quick
max_allowed_packet = 16M
[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M
[mysqlhotcopy]
interactive-timeout
I am running out of ideas and this is very frustrating. At the moment the site is faster on a single-server setup than spread over the two...
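Given the MyISAM default in this my.cnf, one hedged thing to check during a stall is whether queries are queuing behind table locks; MyISAM takes a full-table lock for every write, and a busy forum's post inserts can freeze all reads on the same table for seconds:
Code:
# run on the DB server while the site chokes; many threads in state "Locked" points to MyISAM locking
mysql -u root -p -e "SHOW FULL PROCESSLIST; SHOW STATUS LIKE 'Table_locks_waited';"
If that counter climbs during the spikes, moving the hottest tables to InnoDB (row-level locking) is the usual escape.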
View 8 Replies
View Related
Mar 26, 2007
I need to set up a series of scheduled tasks. For example, at 9:35:12 PM I would need a PHP script to run, and it's very important that it runs at the 12-second mark of the 35th minute. This seems like an application for a cronjob; however, it looks like you can only have cronjobs run every minute, not every second.
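Correct: cron's granularity is one minute. The usual workaround is to trigger on the right minute and sleep away the offset inside the job; a sketch, with a placeholder script path:
Code:
# crontab entry: fires at 21:35:00, waits 12 s, then runs the script at 21:35:12
35 21 * * * sleep 12 && /usr/bin/php /path/to/script.php
The start is only as precise as cron's wakeup plus the sleep, so expect accuracy within a second or so, not milliseconds.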
View 1 Replies
View Related
Feb 13, 2007
Today, I put a new server on our racks.
Problem: the machine crashes exactly every 10 minutes. The crash occurs with no entry in the logs and with 0.00 load. It is as if someone cuts the power every 10 minutes.
Here are the specs:
- 2 CPU Intel Xeon 2.0
- 8 GB RAM ECC
- 2 x 250 GB HDs
This machine needs plenty of current. I wonder if I am going over the rack power quota. Maybe there is a system that allows overages for ten minutes, then cuts the current back to the rack quota.
View 2 Replies
View Related
May 13, 2009
I have a small cluster (one web and one db) set up, and I host a rather popular group of 4-5 sites that allow users to dynamically create their own mobile chat communities automatically. Each site gets its own MySQL db created and populated automatically.
This is all fine,
but in the last 24 hours weird things have begun happening. Previously I had the SQL max_connections set to 500 and this was perfectly adequate for the demand, but now even when I set the connections to 4000+ they are all maxed out within 5-10 minutes, and the MySQL processlist shows thousands of unauthenticated user connections sitting at login status.
I have gone through the sites and all their MySQL configs are fine, so I can't see what the issue is.
server specs below
db server:
dual amd opteron 246
8GB ram
120gb hd(64gb free)
33gb swap (rarely used but there for emergencies)
centos 5 64bit.
direct 100Mbit LAN to the web server
only MySQL, SSH and Webmin running, no other apps installed
web server:
amd athlon 64 3800+
plesk 9.2.1
4gb ram
2x120gb hds
Apache status on the web server only shows around 120 HTTP connections, but the SQL connections keep climbing.
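Thousands of connections stuck as "unauthenticated user" at login is the classic signature of MySQL blocking on a reverse DNS lookup for every new connection, often after a resolver starts misbehaving. A hedged mitigation, assuming the grants can be expressed by IP:
Code:
[mysqld]
# skip the per-connection reverse DNS lookup; GRANTs must then use IP addresses
skip-name-resolve
Since the web server connects over a direct LAN IP anyway, checking /etc/resolv.conf on the DB box for a dead nameserver is worth doing first.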
View 8 Replies
View Related
Jan 10, 2008
My log is filling up with errors, and the site displays 500 Internal Server Error:
2008-01-09 16:17:50: (mod_fastcgi.c.2703) fcgi-server re-enabled: unix:/tmp/php-fastcgi.socket-1
2008-01-09 16:17:59: (mod_fastcgi.c.1731) connect failed: Connection refused on unix:/tmp/php-fastcgi.socket-1
2008-01-09 16:17:59: (mod_fastcgi.c.2885) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 5
2008-01-09 16:18:05: (mod_fastcgi.c.2703) fcgi-server re-enabled: unix:/tmp/php-fastcgi.socket-1
2008-01-09 16:18:18: (mod_fastcgi.c.1731) connect failed: Connection refused on unix:/tmp/php-fastcgi.socket-1
2008-01-09 16:18:18: (mod_fastcgi.c.2885) backend died; we'll disable it for 5 seconds and send the request to another backend instead: reconnects: 0 load: 5
2008-01-09 16:18:24: (mod_fastcgi.c.2703) fcgi-server re-enabled: unix:/tmp/php-fastcgi.socket-1
2008-01-09 16:18:33: (mod_fastcgi.c.1731) connect failed: Connection refused on unix:/tmp/php-fastcgi.socket-1
I have tried all sorts of combos.
Core2Duo 1 processor
Lighttpd 1.4.18
PHP 5.2.5
xcache 1.2.1
2gig ram
fastcgi.server = ( ".php" =>
( "localhost" =>
(
"socket" => "/tmp/php-fastcgi.socket",
"bin-path" => "/usr/local/php5/bin/php-cgi",
"min-procs" => 2,
"max-procs" => 6,
"bin-environment" => (
"PHP_FCGI_CHILDREN" => "10",
"PHP_FCGI_MAX_REQUESTS" => "1000"
)
)
)
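For what it's worth, the commonly repeated lighttpd+PHP guidance is to let PHP manage its own children rather than spawning many supervisor procs, and to raise PHP_FCGI_MAX_REQUESTS so all children don't hit the restart limit at once; a hedged variant of the block above (the numbers are guesses for a 2GB box, not tested values):
Code:
fastcgi.server = ( ".php" =>
  ( "localhost" =>
    (
      "socket" => "/tmp/php-fastcgi.socket",
      "bin-path" => "/usr/local/php5/bin/php-cgi",
      # one supervising process; PHP forks and manages the children itself
      "min-procs" => 1,
      "max-procs" => 1,
      "bin-environment" => (
        "PHP_FCGI_CHILDREN" => "16",
        # high enough that children don't all respawn together under load
        "PHP_FCGI_MAX_REQUESTS" => "10000"
      )
    )
  )
)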
View 13 Replies
View Related
Oct 17, 2013
My site will wait for 30s almost every time before loading any of the page itself. Specs of my install:
- DigitalOcean Droplet (VPS) with Ubuntu Server 12.10: 512MB RAM and 20GB SSD (not even coming close to needing more RAM, still have 240MB free according to top)
- Wordpress 3.6.1
- 5 plugins: W3 Total Cache, Wordpress SEO by yoast, WP Better Security, WP Smush.it, and Redirection (the problem occurred before adding the last 2, I can't remember about the others)
- No traffic to speak of. I get maybe 10 uniques/day.
- Apache 2.2.22
- MySQL 5.5.32
I've optimized my site itself the best I can: minifying and combining js and css files, using the WP Smush.it plugin to compress images, serving jQuery from a CDN. But none of that fixed the 30-second wait (though it did shave about 10 seconds off the load time after the wait for the response).
I was using CloudFlare and had to fiddle with the nameservers of my domain, but CloudFlare didn't work at all, so I switched the nameservers back to pointing DNS directly at my site to eliminate the obvious causes. I'm comfortable with Linux and the command line. This is the link to my site: [URL]
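To pin down where the 30 seconds go, a hedged timing comparison against the droplet itself (URLs are placeholders): if a static file is fast while a WordPress page stalls, the wait is inside PHP/WordPress (a plugin making an external HTTP call that times out is a frequent culprit) rather than Apache, DNS, or the network:
Code:
# per-phase timings; time_starttransfer is the TTFB
curl -s -o /dev/null -w "dns %{time_namelookup}s connect %{time_connect}s ttfb %{time_starttransfer}s total %{time_total}s\n" http://example.com/
curl -s -o /dev/null -w "ttfb %{time_starttransfer}s\n" http://example.com/wp-content/themes/style.css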
View 3 Replies
View Related
Nov 16, 2008
After having conversations with many WHT members and a few other system admins, I have not been able to resolve this issue yet.
I have a Basic VPS and squid runs fine on it.
Debian 4
Squid 3
Now the issue is that I have 2 IPs allocated to my VPS. But no matter what configuration I have in the squid.conf file, and no matter what version of Squid I use, I am not able to use the additional IP on my VPS as the outgoing external IP address.
I have also tried this config setting:
acl ip1 myip 192.168.1.2
acl ip2 myip 192.168.1.3
acl ip3 myip 192.168.1.4
tcp_outgoing_address 192.168.1.2 ip1
tcp_outgoing_address 192.168.1.3 ip2
tcp_outgoing_address 192.168.1.4 ip3
But no luck yet.
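The config itself looks right for per-IP routing, so a hedged suspicion: on OpenVZ/Virtuozzo-style VPSes with venet interfaces, the hardware node's routing decides the source address and can silently override the bind that tcp_outgoing_address performs. Two quick checks (proxy port and echo URL are placeholders):
Code:
# compare what the far end sees when entering through each IP
curl -x http://192.168.1.2:3128 http://ip-echo.example.com/
curl -x http://192.168.1.3:3128 http://ip-echo.example.com/
# see which source address the kernel itself picks for outbound traffic
ip route get 8.8.8.8
If the kernel always picks the primary IP regardless of the bind, only the host provider can change that.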
View 4 Replies
View Related
Jul 30, 2007
I currently have a site running on 8 servers: 5 web servers (apache2/php5), 2 DB servers (mysql 5), and one front reverse proxy server.
Currently I use apache as the reverse proxy (with mod_proxy of course).
I have it do 3 types of things:
1) serve some static files (the website's static files) directly from the front server. The files are stored in local directories.
2) cache some other static files (user uploaded images and files) on the front server after downloading them once from the backend webservers. This is done via mod_cache.
3) route some requests to specific web servers depending on a subdomain (on the first few letters of the subdomain, more precisely). To do this I use rewrite rules like:
RewriteCond %{HTTP_HOST} ^sub1(.*)\.domain\.com$
RewriteRule ^(.*) http://sub1%1.webserver1.com/$1 [P,L]
RewriteCond %{HTTP_HOST} ^sub2(.*)\.domain\.com$
RewriteRule ^(.*) http://sub2%1.webserver2.com/$1 [P,L]
etc.
My web servers are not in a cluster from this point of view, so it is important that the reverse proxy is able to route requests based on subdomain like this.
Now I have a few weird performance problems on the front server. CPU, hard disk usage and memory usage stay at relatively constant (and always low) levels, yet the server load periodically spikes to anywhere between 4 and 12 during the day. This seems to be mod_cache related (the spikes disappear when I disable it) but I can't figure out what's happening, and I'm reading everywhere that squid is a better alternative for reverse proxying.
Only, I don't know if I can do the same as mentioned above with squid. From what I read, I know I can do 2). However, I'm not sure if squid is able to serve some files (based on URL patterns) directly from the local file system rather than querying the backend / caching them locally. And can squid route the reverse proxy requests to different web servers based on the subdomain in a URL?
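Squid cannot serve arbitrary files straight off the local filesystem (question 1); it has no document root and only caches what it fetched from a peer. Routing by subdomain (question 3) does work, with the caveat that squid selects a backend but does not rewrite the host the way the %1 capture above does; a hedged squid 2.6 style sketch with placeholder backends:
Code:
http_port 80 accel vhost
cache_peer 10.0.0.11 parent 80 0 no-query originserver name=ws1
cache_peer 10.0.0.12 parent 80 0 no-query originserver name=ws2
acl to_sub1 dstdom_regex ^sub1.*\.domain\.com$
acl to_sub2 dstdom_regex ^sub2.*\.domain\.com$
cache_peer_access ws1 allow to_sub1
cache_peer_access ws2 allow to_sub2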
View 6 Replies
View Related
Dec 17, 2007
Since moving servers I've been plagued by constant disconnects whilst using FTP:
421 No transfer timeout (300 seconds): closing control connection
I've gone through Proftpd's forum and documentation numerous times to try and find a solution to this but have been unsuccessful so far.
Within proftpd's config file it's set at:
TimeoutLogin 120
TimeoutIdle 3600
TimeoutNoTransfer 3600
TimeoutStalled 3600
So I am unsure where it is getting the "300 seconds" from.
Even though the error states 300 seconds, this problem happens way before 300 seconds every time and has happened during the transfer of files (when the connection has been active and in use).
I've tried 3 different FTP clients and used the "Keep Alive" option in each and it has absolutely no effect.
I am unsure if APF is causing the problem, I can't see any problems in any of the server logs in relation to ftp.
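One hedged observation: 300 seconds happens to be proftpd's compiled-in default for TimeoutNoTransfer, which suggests the running daemon may not be reading the config file that was edited, or that a stateful firewall such as APF is dropping the control connection and the client reports whatever timeout it last saw. Quick checks:
Code:
# syntax-check the config and confirm the file the binary parses
proftpd --configtest
# see the config path (if any) the running daemon was started with
ps axww | grep proftpd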
View 5 Replies
View Related
May 29, 2008
The DC installed Squid. It manages the load fine, but the PHP code on my page is cached and doesn't work.
Is there a way to get squid to not cache PHP, so that httpd serves PHP directly while squid handles everything else?
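Yes: squid can keep proxying PHP while refusing to cache it. A minimal sketch for squid 2.x (the directive is spelled "no_cache deny" on 2.5 and simply "cache deny" on 2.6+):
Code:
# mark PHP and query-string URLs as uncacheable; they still pass through to httpd
acl dynamic urlpath_regex \.php \?
cache deny dynamic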
View 1 Replies
View Related
Apr 5, 2009
I just installed Squid V3. I set up PuTTY as an SSH SOCKS proxy to the VPS that I installed squid on.
Here is a snippet of my cache.log, but there is no log entry for the websites I have visited. The access.log is empty.
How do I tell if squid is working?
Quote:
2009/04/04 22:52:37| Starting Squid Cache version 3.0.STABLE13-20090405 for i686-pc-linux-gnu...
2009/04/04 22:52:37| Process ID 9886
2009/04/04 22:52:37| With 1024 file descriptors available
2009/04/04 22:52:37| Performing DNS Tests...
2009/04/04 22:52:37| Successful DNS name lookup tests...
2009/04/04 22:52:37| DNS Socket created at 0.0.0.0, port 36048, FD 7
2009/04/04 22:52:37| Adding nameserver 208.67.222.222 from /etc/resolv.conf
2009/04/04 22:52:37| Adding nameserver 208.67.220.220 from /etc/resolv.conf
2009/04/04 22:52:37| Unlinkd pipe opened on FD 12
2009/04/04 22:52:37| Swap maxSize 102400 KB, estimated 7876 objects
2009/04/04 22:52:37| Target number of buckets: 393
2009/04/04 22:52:37| Using 8192 Store buckets
2009/04/04 22:52:37| Max Mem size: 8192 KB
2009/04/04 22:52:37| Max Swap size: 102400 KB
2009/04/04 22:52:37| Version 1 of swap file with LFS support detected...
2009/04/04 22:52:37| Rebuilding storage in /usr/local/squid/var/cache (CLEAN)
2009/04/04 22:52:37| Using Least Load store dir selection
2009/04/04 22:52:37| Set Current Directory to /usr/local/squid/var/cache
2009/04/04 22:52:37| Loaded Icons.
2009/04/04 22:52:37| Accepting HTTP connections at 0.0.0.0, port 8080, FD 14.
2009/04/04 22:52:37| Accepting ICP messages at 0.0.0.0, port 3130, FD 15.
2009/04/04 22:52:37| HTCP Disabled.
2009/04/04 22:52:37| Ready to serve requests.
2009/04/04 22:52:37| Done reading /usr/local/squid/var/cache swaplog (0 entries)
2009/04/04 22:52:37| Finished rebuilding storage from disk.
2009/04/04 22:52:37| 0 Entries scanned
2009/04/04 22:52:37| 0 Invalid entries.
2009/04/04 22:52:37| 0 With invalid flags.
2009/04/04 22:52:37| 0 Objects loaded.
2009/04/04 22:52:37| 0 Objects expired.
2009/04/04 22:52:37| 0 Objects cancelled.
2009/04/04 22:52:37| 0 Duplicate URLs purged.
2009/04/04 22:52:37| 0 Swapfile clashes avoided.
2009/04/04 22:52:37| Took 0.02 seconds ( 0.00 objects/sec).
2009/04/04 22:52:37| Beginning Validation Procedure
2009/04/04 22:52:37| Completed Validation Procedure
2009/04/04 22:52:37| Validated 25 Entries
2009/04/04 22:52:37| store_swap_size = 0
2009/04/04 22:52:38| storeLateRelease: released 0 objects
2009/04/04 22:59:06| Squid is already running! Process ID 9886
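A PuTTY SOCKS tunnel never touches squid; SOCKS and HTTP proxying are different protocols, which would explain the empty access.log. A hedged direct test against the HTTP port squid says it opened (8080 above); the log path assumes the source-install layout the cache dir suggests:
Code:
# fetch a page through squid, then watch the access log for the entry
curl -x http://your-vps-ip:8080 http://www.example.com/
tail -f /usr/local/squid/var/logs/access.log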
View 10 Replies
View Related
Jun 25, 2008
I want to install squid-2.5.STABLE14 with yum, but when I run "yum install squid" a different version gets installed.
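Yum always resolves to the newest build the enabled repos offer. A hedged way to see what is available and ask for the exact version (this only works if a 2.5 build actually exists in some configured repo; otherwise it has to come from an RPM or source):
Code:
# list every squid build visible to yum
yum --showduplicates list squid
# request the specific version
yum install squid-2.5.STABLE14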
View 2 Replies
View Related
Jul 22, 2008
Whenever I am using getacoder and try to post a message on the private message boards I get a squid error like this:
ERROR
The requested URL could not be retrieved
--------------------------------------------------------------------------------
While trying to retrieve the URL: [url]
The following error was encountered:
Zero Sized Reply
Squid did not receive any data for this request.
Your cache administrator is webmaster.
--------------------------------------------------------------------------------
Generated Tue, 22 Jul 2008 16:09:13 GMT by igw-ipcop.netarcs.com (squid/2.5.STABLE14)
Could anyone with server geekish skills tell me what the problem here might be (I should mention that their annoying support contact form uses the same script, hence I can't even get in touch)? What's that ipcop thing about? Do they have some program at the server level filtering IPs and mine is no good, or what?
View 6 Replies
View Related
Jun 21, 2007
I'm aware the REMOTE_ADDR revealed by Squid needs to be a legitimate IP address to communicate properly across the internet. But I'd like Squid to use and publicly reveal a different IP address than the default system IP address on our proxy servers. Does anyone know if it's possible to make the Squid REMOTE_ADDR use a different IP address on the system other than the default?
I've defined a different IP address and port for http_port at the top of the squid.conf file, and I can connect to this IP address and port successfully. But when I run the connection through an IP address checker, or a session environment test, it reveals the actual system IP address and not the http_port IP address.
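http_port only controls where squid listens; the address the destination sees as REMOTE_ADDR comes from the outbound socket, which has its own directive. A minimal sketch with placeholder addresses:
Code:
# accept clients on one address, originate upstream requests from another
http_port 203.0.113.10:3128
tcp_outgoing_address 203.0.113.20
The outgoing address must be configured on the system and routable, or replies will never come back.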
View 0 Replies
View Related
Jan 28, 2007
Can someone provide me a guide to fully install and configure Squid?
View 2 Replies
View Related
Jul 27, 2007
I'm trying to set up a caching squid server to speed up website access. How can I selectively choose to cache certain PHP scripts while ignoring others? I can't seem to get it to work. I've commented out the following lines:
#acl QUERY urlpath_regex cgi-bin?
#no_cache deny QUERY
Yet in the squid/access.log file, I'm still seeing these:
Code:
1185561374.207 47 192.168.1.101 TCP_MISS/200 22267 GET http://www.mysite.com/? - DIRECT/192.168.1.108 text/html
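Commenting out the QUERY acl only stops squid from refusing those URLs; the responses still have to be cacheable, and most PHP output carries no Expires/Cache-Control headers (a TCP_MISS on the first fetch is expected either way). A hedged sketch that whitelists one script and forces a lifetime on it ("cache allow/deny" is the squid 2.6 spelling; 2.5 used "no_cache"); this is risky for anything user-specific, so the script name is a deliberate placeholder:
Code:
# cache only the chosen script, keep other dynamic URLs uncached
acl cacheable_php urlpath_regex ^/popular\.php
cache allow cacheable_php
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
# treat that script as fresh for 5-60 minutes despite missing headers
refresh_pattern ^http://www\.mysite\.com/popular\.php 5 20% 60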
View 1 Replies
View Related
Sep 1, 2007
I want to software load balance one of my websites using squid. It doesn't look like it is possible with squid 2.5. Squid 2.6 is an upgrade for FC6; I am running FC4 and it cannot be installed there, I get a lot of dependency failures.
Has anyone successfully installed Squid 2.6 on FC4?
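For reference, this is the squid 2.6 feature in question; balancing an accelerated site over several backends is two directives (placeholder IPs), which squid 2.5's accelerator mode has no equivalent for:
Code:
http_port 80 accel vhost
cache_peer 10.0.0.11 parent 80 0 no-query originserver round-robin
cache_peer 10.0.0.12 parent 80 0 no-query originserver round-robin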
View 1 Replies
View Related
May 21, 2009
I need help regarding my squid proxy. When I go to whatismyip.com I get this result:
Your IP Address Is: 119.95.IP.IP
Other IPs Detected: 67.IP.IP.185
Possible Proxy Detected: 1.1 67.IP.IP.185:PORT (squid/2.6.STABLE21)
How can I completely hide my IP via squid? This squid is running on my dedicated server.
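The "Other IPs Detected" and "Possible Proxy Detected" lines come from the X-Forwarded-For and Via headers squid adds. Turning both off (squid 2.6 syntax) is what makes a so-called elite proxy; the real client IP then never appears in the request:
Code:
# stop announcing the client address and the proxy's identity
forwarded_for off
via off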
View 2 Replies
View Related
Oct 4, 2008
I would like to use cPanel Apache as the backend web server, and Squid cache as the front end http accelerator.
My VPS has two IP addresses, however, I want the httpd acceleration to occur only on one IP.
So far, I have installed squid cache and edited its config file to this:
http_port 74.50.118.189:80
httpd_accel_host localhost
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_uses_host_header on
acl all src 0.0.0.0/0.0.0.0
http_access allow all
My site has a few subdomains and I would like them to keep working.
So, what do I do now in the apache config (which I think is here: /etc/httpd/conf/httpd.conf)?
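With squid bound to 74.50.118.189:80, Apache has to move off that socket; since httpd_accel_host is localhost, a hedged httpd.conf sketch is to bind Apache to the loopback (squid's backend) and to the second IP for direct traffic (the second address is a placeholder). httpd_accel_uses_host_header on is what keeps the subdomain vhosts working, because squid forwards the original Host header:
Code:
# free 74.50.118.189:80 for squid; apache answers squid on loopback and serves the other IP directly
Listen 127.0.0.1:80
Listen 72.0.0.2:80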
View 1 Replies
View Related
Apr 19, 2007
For 2 weeks I have been under DDoS.
The type of DDoS is the one that comes from DC clients.
I have managed to mitigate the attack and to get everything working OK.
I do not like the solution I came up with, for many reasons, and I found that squid can be good at stopping bad requests like the ones that DC clients send when the attack occurs.
I am kinda new to squid and I do not know all the settings.
I have configured it and everything works great when there is no DDoS.
But when the attack starts, nothing works. Squid does not log anything in access_log, and there is also no load, just a lot of connections to squid.
Is there a limit for max concurrent connections in squid?
Or is the idea of using squid as a reverse proxy without caching, just to stop bad requests, a bad one? (I do not need snort-inline, I have some issues with it.)
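There is a ceiling: each connection costs squid a file descriptor, and on squid 2.x the maximum is fixed when the binary is built; once exhausted, squid stops accepting connections and logs little, which matches the symptom. A hedged check against the running instance:
Code:
# descriptor capacity and current usage from the cache manager
squidclient mgr:info | grep -i 'file desc'
# raise the shell limit before starting squid (2.x also needs the higher limit at build time)
ulimit -n 8192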
View 2 Replies
View Related
Mar 10, 2007
I want visitors to my site to be able to connect through my squid proxy (installed on the same webserver as the site). They will only be able to visit 3 or 4 sites through the proxy (these will be added to a whitelist in squid).
Preferably I want to set it up so that users MUST visit my website to make the connection through squid. Squid is already set up, but how do I link a site through squid?
Preferably I would like users to be able to click a link on my website that opens an external site through squid.
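The whitelist half is plain squid config; a minimal sketch with placeholder domains. The "must come from my website" half is harder: a normal hyperlink cannot push a browser through a proxy, so that part usually ends up as a URL-rewriting gateway script (CGIProxy/PHProxy style) rather than squid itself:
Code:
acl whitelist dstdomain .allowed-site1.com .allowed-site2.com
http_access allow whitelist
http_access deny all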
View 2 Replies
View Related
Oct 7, 2007
I'm currently running CentOS 5. I recently installed Squid Version 2.6.STABLE6 for a client to use as a proxy. However, it seems that sites like whatismyip.com and ipchicken.com are resolving back to my client's IP address and not the server's.
There is only one IP on my server and I think the problem may be related to X-headers (correct me if I am wrong).
Is there any way to use the server IP address when my customer is using the proxy server?
My squid.conf looks like the following:
Code:
http_port 8080
forwarded_for off
icp_port 0
cache_mem 64 MB
cache_dir ufs /var/spool/squid 100 16 128
maximum_object_size 4096 KB
cache_store_log none
cache_access_log /var/log/squid/access.log
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin ?
no_cache deny QUERY
visible_hostname proxyserver
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src xxx.xx.xxx.xxx
acl SSL_ports port 443 563 10000
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443 563
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl Safe_ports port 901
acl purge method PURGE
acl CONNECT method CONNECT
acl LocalNet src xxx.xx.xxx.xx
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow LocalNet
http_access deny all
icp_access allow all
log_fqdn on
##### This section is to make the proxy transparent
#httpd_accel_with_proxy on
#httpd_accel_uses_host_header on
#httpd_accel_host virtual
#httpd_accel_port 80
######------------------------------
error_directory /usr/share/squid/errors/English
#httpd_accel_uses_host_header off
#anonymize_headers deny From Referer Server
http_port ServerIP:8080 transparent
# no forwarded_for - quite useless for an anonymizer
forwarded_for off
# no client stat
client_db off
# Paranoid anonymize
header_access Allow allow all
header_access Authorization allow all
header_access Cache-Control allow all
header_access Content-Encoding allow all
header_access Content-Length allow all
header_access Content-Type allow all
header_access Date allow all
header_access Expires allow all
header_access Host allow all
header_access If-Modified-Since allow all
header_access Last-Modified allow all
header_access Location allow all
header_access Pragma allow all
header_access Accept allow all
header_access Charset allow all
header_access Accept-Encoding allow all
header_access Accept-Language allow all
header_access Content-Language allow all
header_access Mime-Version allow all
header_access Retry-After allow all
header_access Title allow all
header_access Connection allow all
header_access Proxy-Connection allow all
header_access All deny all
header_access Cookie allow all
header_access Set-Cookie allow all
header_replace User-Agent Anonymous Proxy at example.com
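Once the config loads, a hedged way to see exactly what still leaks is to send a request through the proxy to a header-echo page and read back what arrives (the echo service is a placeholder; port per the http_port line above):
Code:
# show which request headers reach the destination through the proxy
curl -x http://ServerIP:8080 http://header-echo.example.com/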
View 3 Replies
View Related
Mar 18, 2007
I've been hearing other admins talk about using squid to speed things up on web servers. Yes, not as a network proxy, but as a simple cache engine for dynamic sites.
Any experience with this?
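It works well for anonymous page views. The minimal squid 2.6 accelerator shape looks like this (placeholder site name; Apache moved to another port), with the caveat that cookie-carrying or per-user pages need careful cache rules or they'll be served to the wrong visitors:
Code:
# squid answers port 80 and fetches misses from apache on 81
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 81 0 no-query originserver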
View 12 Replies
View Related