Posted to users@tomcat.apache.org by Alexander Diedler <ad...@tecracer.de> on 2011/11/16 17:13:30 UTC

mod_jk connection timeouts

Hello all,

 

We have a new cluster with 2 servers. Each server runs 2 instances of
Tomcat 6.0.32. Each node also has Apache 2.2.21 installed, and mod_jk is
configured.

In front of this cluster there is a hardware load balancer cluster for
HA.
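
Roughly sketched (the AJP port numbers are taken from the workers.properties
further down), the setup looks like this:

                  hardware load balancer (HA)
                    /                      \
    Node 1 (172.30.5.78)             Node 2 (172.30.5.77)
    Apache 2.2.21 + mod_jk           Apache 2.2.21 + mod_jk
    Tomcat 6.0.32 #1, AJP 8010       Tomcat 6.0.32 #1, AJP 8010
    Tomcat 6.0.32 #2, AJP 8012       Tomcat 6.0.32 #2, AJP 8012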

Several times the mod_jk status manager shows one or more workers going into
the ERR state, and then coming back again. In the mod_jk log there are entries
like "(worker2) Tomcat is down." or "(worker1) connection to Tomcat failed".
But these workers run locally on the same server as the mod_jk!
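
For context, mod_jk typically puts a worker into ERR when it cannot connect to
the AJP port or when a CPing/CPong probe goes unanswered. The only
timeout-related directive set on the AJP workers in our workers.properties
below is connection_pool_timeout; the following is a minimal sketch (not taken
from our servers, directive names per the mod_jk reference) of the connection
and probing settings that usually govern this:

# Sketch only - not our current config.
# Send a CPing after connect and before each request; a missed CPong
# (timeouts in milliseconds) puts the worker into the ERR state.
worker.worker1.connect_timeout=10000
worker.worker1.prepost_timeout=10000
# Detect connections silently dropped by firewalls or the load balancer.
worker.worker1.socket_keepalive=true
# Close idle backend connections after 600 seconds; the mod_jk docs recommend
# matching this with connectionTimeout="600000" on the Tomcat AJP connector.
worker.worker1.connection_pool_timeout=600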

 

Even worse, the mod_jk on the other node does not recognize that the node on
the other physical server is down. Is there a big bug in my config?

 

In server.xml of Node 1 / TC1

Membership
    Address: 228.0.0.4
    Bind: 172.30.5.78  (local IP of the server)
    Port: 45564
    Frequency: 500
    dropTime: 3000
Receiver
    Address: 172.30.5.78
    Port: 4000
    autoBind: 100
    selectorTimeout: 5000
    maxThreads: 6

In server.xml of Node 1 / TC2

Membership
    Address: 228.0.0.4
    Bind: 172.30.5.78  (local IP of the server)
    Port: 45574
    Frequency: 500
    dropTime: 3000
Receiver
    Address: 172.30.5.78
    Port: 4000
    autoBind: 100
    selectorTimeout: 5000
    maxThreads: 6

In server.xml of Node 2 / TC1

Membership
    Address: 228.0.0.4
    Bind: 172.30.5.77  (local IP of the server)
    Port: 45564
    Frequency: 500
    dropTime: 3000
Receiver
    Address: 172.30.5.77
    Port: 4000
    autoBind: 100
    selectorTimeout: 5000
    maxThreads: 6

In server.xml of Node 2 / TC2

Membership
    Address: 228.0.0.4
    Bind: 172.30.5.77  (local IP of the server)
    Port: 45574
    Frequency: 500
    dropTime: 3000
Receiver
    Address: 172.30.5.77
    Port: 4000
    autoBind: 100
    selectorTimeout: 5000
    maxThreads: 6
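
For completeness, these values correspond to the Tribes channel settings in
server.xml. Written out for Node 1 / TC1 it would look roughly like this
(standard Tomcat 6 cluster element and class names; only the attributes listed
above are shown):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- Multicast membership: same address/port pair on the instances that
         should see each other, bound to the local interface of this node -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                bind="172.30.5.78"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <!-- NIO receiver for incoming replication traffic on this instance -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="172.30.5.78"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
  </Channel>
</Cluster>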

 

 

worker.properties:

# List the worker names
worker.list=loadbalancer,loadbalancertc,jkstatus,worker3,worker4,worker11,worker12
worker.maintain=60

# ----------------
# First worker - LB
# ----------------
worker.worker1.port=8010
worker.worker1.host=172.30.5.78
worker.worker1.type=ajp13
worker.worker1.lbfactor=100
worker.worker1.route=worker1
worker.worker1.connection_pool_timeout=600
worker.worker1.activation=active

# ----------------
# Second worker - LB
# ----------------
worker.worker2.port=8010
worker.worker2.host=172.30.5.77
worker.worker2.type=ajp13
worker.worker2.lbfactor=100
worker.worker2.route=worker2
worker.worker2.connection_pool_timeout=600
worker.worker2.activation=active

# ----------------
# Third worker - Standalone
# ----------------
worker.worker3.port=8010
worker.worker3.host=172.30.5.77
worker.worker3.type=ajp13
worker.worker3.lbfactor=100
worker.worker3.activation=active

# ----------------
# Fourth worker - Standalone
# ----------------
worker.worker4.port=8010
worker.worker4.host=172.30.5.78
worker.worker4.type=ajp13
worker.worker4.lbfactor=100
worker.worker4.activation=active

# ----------------
# Sixth worker TC2010 - LB
# ----------------
worker.worker6.port=8012
worker.worker6.host=172.30.5.78
worker.worker6.type=ajp13
worker.worker6.lbfactor=100
worker.worker6.activation=active
worker.worker6.route=worker6
worker.worker6.connection_pool_timeout=600

# ----------------
# Seventh worker TC3110 - LB
# ----------------
worker.worker7.port=8012
worker.worker7.host=172.30.5.77
worker.worker7.type=ajp13
worker.worker7.lbfactor=100
worker.worker7.activation=active
worker.worker7.route=worker7
worker.worker7.connection_pool_timeout=600

##BBMAGK0
# ----------------
# Eleventh worker TC2010 - Standalone
# ----------------
worker.worker11.port=8012
worker.worker11.host=172.30.5.78
worker.worker11.type=ajp13
worker.worker11.lbfactor=100
worker.worker11.activation=active

##BBMAGK1
# ----------------
# Twelfth worker TC2010 - Standalone
# ----------------
worker.worker12.port=8012
worker.worker12.host=172.30.5.77
worker.worker12.type=ajp13
worker.worker12.lbfactor=100
worker.worker12.activation=active

# ----------------------
# Load Balancer worker
# ----------------------
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=true
worker.loadbalancer.sticky_session_force=false
worker.loadbalancer.method=Request
worker.loadbalancer.retries=5
worker.loadbalancer.secret=t

# ----------------------
# Load Balancer worker tc
# ----------------------
worker.loadbalancertc.type=lb
worker.loadbalancertc.balance_workers=worker6,worker7
worker.loadbalancertc.sticky_session=true
worker.loadbalancertc.sticky_session_force=false
worker.loadbalancertc.method=Request
worker.loadbalancertc.retries=5
worker.loadbalancertc.secret=t

# Define a 'jkstatus' worker using status
worker.jkstatus.type=status
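
Not shown above is the Apache side. As a rough sketch (the directive names are
the standard mod_jk ones, but the file paths and mount points here are only
assumptions, not copied from our servers), the workers defined above would be
wired into httpd.conf along these lines:

# Load the connector and point it at the worker definitions above
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/worker.properties
JkLogFile logs/mod_jk.log
JkLogLevel info

# Route application traffic through the two load balancer workers
# (the /app and /tcapp contexts are placeholders)
JkMount /app/* loadbalancer
JkMount /tcapp/* loadbalancertc

# Expose the mod_jk status manager via the jkstatus worker
JkMount /jkstatus jkstatus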

 

 

 

 

Greetings

Alexander