Welcome to Linux Club, the German-language support forum for GNU/Linux.

Squid and a portal page / captive portal

Nikkita

Newbie
Hi everyone,

I'm setting up a proxy with Squid 2.7, and so far everything works.
However, I'd like a splash page (portal page) like the one in the Squid wiki:
http://wiki.squid-cache.org/ConfigExamples/Portal/Splash

Unfortunately, that part doesn't work: the redirect to the splash page produces an error.
Without the splash page, the proxy works flawlessly.
My guess is that there is a mistake in the splash-page part of the configuration.

Here is the excerpt from Squid's access.log during the redirect; for testing purposes I'm currently redirecting to Google.
tail -f /var/log/squid/access.log:
Code:
1328096322.795      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.815      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.820      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.825      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.830      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.836      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
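Reading the excerpt above: within about 40 ms the same client (172.30.1.47) gets answered with TCP_DENIED/302 over and over, i.e. the deny_info redirect fires repeatedly instead of once. A quick way to count such hits in a log excerpt (sketch; the embedded lines are just the first three entries from above):

```shell
# Count TCP_DENIED/302 entries in an access.log excerpt; a burst of identical
# hits for one client suggests a redirect loop rather than a one-time splash.
log='1328096322.795      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.815      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html
1328096322.820      0 172.30.1.47 TCP_DENIED/302 412 GET http://www.google.de/ - NONE/- text/html'
printf '%s\n' "$log" | grep -c 'TCP_DENIED/302'   # prints 3 for this excerpt
```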

Here is the excerpt from cache.log:
Code:
2012/02/01 12:37:11| Parser: retval 1: from 0->35: method 0->2; url 4->24; version 26->34 (1/1)
2012/02/01 12:37:11| aclMatchExternal: splash_page("172.30.1.47") = lookup needed
2012/02/01 12:37:11| externalAclLookup: lookup in 'splash_page' for '172.30.1.47'
2012/02/01 12:37:11| externalAclHandleReply: reply="ERR message="Welcome""
2012/02/01 12:37:11| external_acl_cache_add: Adding '172.30.1.47' = 0
2012/02/01 12:37:11| aclMatchExternal: splash_page = 0
2012/02/01 12:37:11| The request GET http://www.amazon.de/ is DENIED, because it matched 'existing_users'
2012/02/01 12:37:11| The reply for GET http://www.amazon.de/ is ALLOWED, because it matched 'existing_users'
2012/02/01 12:37:11| clientReadRequest: FD 17: no data to process ((11) Resource temporarily unavailable)
2012/02/01 12:37:12| Parser: retval 1: from 0->35: method 0->2; url 4->24; version 26->34 (1/1)
2012/02/01 12:37:12| aclMatchExternal: splash_page = 0
2012/02/01 12:37:12| The request GET http://www.google.de/ is DENIED, because it matched 'existing_users'
2012/02/01 12:37:12| The reply for GET http://www.google.de/ is ALLOWED, because it matched 'existing_users'
2012/02/01 12:37:12| clientReadRequest: FD 17: no data to process ((11) Resource temporarily unavailable)
2012/02/01 12:37:12| Parser: retval 1: from 0->35: method 0->2; url 4->24; version 26->34 (1/1)
2012/02/01 12:37:12| aclMatchExternal: splash_page = 0
2012/02/01 12:37:12| The request GET http://www.google.de/ is DENIED, because it matched 'existing_users'
2012/02/01 12:37:12| The reply for GET http://www.google.de/ is ALLOWED, because it matched 'existing_users'
2012/02/01 12:37:12| clientReadRequest: FD 17: no data to process ((11) Resource temporarily unavailable)
2012/02/01 12:37:12| Parser: retval 1: from 0->35: method 0->2; url 4->24; version 26->34 (1/1)
2012/02/01 12:37:12| aclMatchExternal: splash_page = 0
2012/02/01 12:37:12| The request GET http://www.google.de/ is DENIED, because it matched 'existing_users'
2012/02/01 12:37:12| The reply for GET http://www.google.de/ is ALLOWED, because it matched 'existing_users'
2012/02/01 12:37:12| clientReadRequest: FD 17: no data to process ((11) Resource temporarily unavailable)

And here is the config:
Code:
#Recommended minimum configuration:
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
#acl localnet src 10.0.0.0/8    # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
#acl localnet src 192.168.0.0/16        # RFC1918 possible internal network
acl SSL_ports port 443          # https
acl SSL_ports port 563          # snews
acl SSL_ports port 873          # rsync
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 631         # cups
acl Safe_ports port 873         # rsync
acl Safe_ports port 901         # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Only allow purge requests from localhost
http_access allow purge localhost
http_access deny purge
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
external_acl_type splash_page ttl=60 concurrency=100 %SRC /usr/lib/squid/squid_session -t 90 -b /usr/lib/squid/session.db

acl existing_users external splash_page

deny_info http://www.google.de existing_users

http_access deny !existing_users
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Only allow purge requests from localhost
http_access allow purge localhost
http_access deny purge
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
external_acl_type splash_page ttl=60 concurrency=100 %SRC /usr/lib/squid/squid_session -t 90 -b /usr/lib/squid/session.db

acl existing_users external splash_page

deny_info http://www.google.de existing_users

http_access deny !existing_users
http_access allow localnet
http_access allow localhost
#http_access allow localhost
#http_access allow all all

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128

#Default:
cache_dir ufs /var/spool/squid 1000 16 256

#Suggested default:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$       0       20%     2880
# example line deb packages
#refresh_pattern (\.deb|\.udeb)$   129600 100% 129600
refresh_pattern .               0       20%     4320

# Don't upgrade ShoutCast responses to HTTP
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast


# Apache mod_gzip and mod_deflate known to be broken so don't trust
# Apache to signal ETag correctly on such responses
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
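For reference, as far as I understand it, the wiki recipe I'm trying to follow boils down to this shape (a sketch, not my exact file; the helper path, timeout, and the portal URL below are placeholders):

```shell
cat <<'EOF'
# Session helper: tracks which client IPs already have an active session
external_acl_type session ttl=60 concurrency=100 %SRC /usr/lib/squid/squid_session -t 7200 -b /path/to/session.db
acl session_active external session
# Clients without an active session get redirected to the portal page
deny_info http://portal.example.com/splash.html session_active
http_access deny !session_active
EOF
```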

I hope someone can help me find where I'm making a mistake here.


Regards,
Nikkita