Posted to users@tomcat.apache.org by Alexander Diedler <ad...@tecracer.de> on 2011/05/13 14:40:40 UTC

multiple Loadbalancer workers - found each other webapps?

Hello,

Why does one Tomcat see the webapps of another Tomcat?

We have two servers, each running two Apache Tomcat 6.0.32 instances, and one
Apache HTTP Server 2.2.14 in front of them.

Server A:

Tomcat1 hosts App1 and App2 

Tomcat2 hosts App3

 

Server B:

Tomcat1 hosts App1 and App2 

Tomcat2 hosts App3

 

In the worker.properties we define two load balancer workers, but it seems
that the two "clusters" have found each other.

 

Message from the catalina log:

[…]
13.05.2011 14:29:15 org.apache.catalina.startup.Catalina start
INFO: Server startup in 5793 ms
13.05.2011 14:29:18 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:19 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:31 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:35 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:article_finder_admin#
13.05.2011 14:29:39 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:akademie#
13.05.2011 14:29:41 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:akademie#
13.05.2011 14:29:45 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:47 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:47 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#
13.05.2011 14:29:50 org.apache.catalina.ha.session.ClusterSessionListener messageReceived
WARNUNG: Context manager doesn't exist:extranet#

 

The context managers for article_finder_admin, akademie and extranet are
virtual hosts behind LoadbalancerA and do not exist in the conf directory of
Tomcat2.

 

We define the Cluster element inside the Engine (note: this block is identical
in every server.xml; do we have to customize it per instance?):

 

>>>>>> 

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">

  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>

  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>

    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>

  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

  <!--
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  -->

  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

<<<<<< 

 

 

Worker.properties:

 

# ----------------
# First worker
# ----------------
worker.worker1.port=8010
worker.worker1.host=192.168.100.1
worker.worker1.type=ajp13
worker.worker1.lbfactor=75
worker.worker1.route=worker1
#worker.worker1.connection_pool_size=250
#worker.worker1.connection_pool_minsize=126
worker.worker1.connection_pool_timeout=600
worker.worker1.activation=active

# ----------------
# Second worker
# ----------------
worker.worker2.port=8010
worker.worker2.host=192.168.100.2
worker.worker2.type=ajp13
worker.worker2.lbfactor=100
worker.worker2.route=worker2
#worker.worker2.connection_pool_size=250
#worker.worker2.connection_pool_minsize=126
worker.worker2.connection_pool_timeout=600
worker.worker2.activation=active

# ----------------
# Sixth worker
# ----------------
worker.worker6.port=8012
worker.worker6.host=192.168.100.1
worker.worker6.type=ajp13
worker.worker6.lbfactor=1
worker.worker6.activation=active

# ----------------
# Seventh worker
# ----------------
worker.worker7.port=8012
worker.worker7.host=192.168.100.2
worker.worker7.type=ajp13
worker.worker7.lbfactor=1
worker.worker7.activation=active

# ----------------------
# Load Balancer worker
# ----------------------
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=worker1,worker2
worker.loadbalancer.sticky_session=true
worker.loadbalancer.sticky_session_force=false
worker.loadbalancer.method=Busyness
worker.loadbalancer.retries=3
worker.loadbalancer.secret=xxx

# ----------------------
# Load Balancer worker tc
# ----------------------
worker.loadbalancertc.type=lb
worker.loadbalancertc.balance_workers=worker6,worker7
worker.loadbalancertc.sticky_session=true
worker.loadbalancertc.sticky_session_force=false
worker.loadbalancertc.method=Busyness
worker.loadbalancertc.retries=3
worker.loadbalancertc.secret=xxx


Re: multiple Loadbalancer workers - found each other webapps?

Posted by Mark Eggers <it...@yahoo.com>.
>________________________________
>From: Alexander Diedler <ad...@tecracer.de>
>To: "users@tomcat.apache.org" <us...@tomcat.apache.org>
>Sent: Friday, May 13, 2011 5:40 AM
>Subject: multiple Loadbalancer workers - found each other webapps?
>
>[...]
From the docs: http://tomcat.apache.org/tomcat-6.0-doc/config/cluster-membership.html

The multicast address, in conjunction with the port, is what creates a
cluster group. To divide up your farm into several different groups, or
to split up QA from production, change the port or the address.

So you'll need to change the following for each cluster you wish to define.

Your original configuration:

<Membership className="org.apache.catalina.tribes.membership.McastService"
     address="228.0.0.4"
     port="45564"
     frequency="500"
     dropTime="3000"/>


How this might work:

<!-- cluster Tomcat1 uses default values -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
     address="228.0.0.4"
     port="45564"
     frequency="500"
     dropTime="3000"/>


<!-- cluster Tomcat2 uses a different port -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
     address="228.0.0.4"
     port="45574"
     frequency="500"
     dropTime="3000"/>


I've not tried this, but the documentation says that this should work.
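
The docs also allow changing the address instead of (or as well as) the
port. An untested variant for the second cluster, with 228.0.0.5 picked
arbitrarily as an example:

<!-- cluster Tomcat2 uses a different multicast address -->
<Membership className="org.apache.catalina.tribes.membership.McastService"
     address="228.0.0.5"
     port="45564"
     frequency="500"
     dropTime="3000"/>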

. . . . just my two cents.

/mde/



Re: multiple Loadbalancer workers - found each other webapps?

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Alexander,

On 5/13/2011 8:40 AM, Alexander Diedler wrote:
> Why see one tomcat the apps of another tomcat?

Are you asking: "why does one Tomcat see the webapps deployed on another
Tomcat?"

> We have two servers with two installed Apache Tomcats 6.0.32 and one Apache
> Webserver 2.2.14.
> 
> Server A:
> Tomcat1 hosts App1 and App2 
> Tomcat2 hosts App3
> 
> Server B:
> Tomcat1 hosts App1 and App2 
> Tomcat2 hosts App3

So you have seemingly identical servers, each with two instances of Tomcat.

> In the worker.properties we define two loadbalancer worker, but it seems,
> that they found each other “cluster”.

> worker.loadbalancer.type=lb
> worker.loadbalancer.balance_workers=worker1,worker2

> worker.loadbalancertc.type=lb
> worker.loadbalancertc.balance_workers=worker6,worker7


> worker.worker1.port=8010
> worker.worker1.host=192.168.100.1
> worker.worker1.route=worker1

> worker.worker2.port=8010
> worker.worker2.host=192.168.100.2
> worker.worker2.route=worker2

> worker.worker6.port=8012
> worker.worker6.host=192.168.100.1

Missing 'route' setting, here.

> worker.worker7.port=8012
> worker.worker7.host=192.168.100.2

Missing 'route' setting, here. These missing 'route' settings probably
don't matter, as the default route is the name of the worker. It's not
much different from the rest of your workers.properties file where you
are explicitly setting these options to their defaults, etc. You might
want to consider using a "template" worker instead of repeating all this
stuff.

I also don't see a "worker.list" setting. Did you just omit that from
your post?
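
Something like the following (just a sketch, untested; hosts and ports
copied from your post, and the 'reference' directive needs a reasonably
recent mod_jk) would cover both the worker.list and the template idea:

worker.list=loadbalancer,loadbalancertc

# settings shared by all AJP workers
worker.template.type=ajp13
worker.template.lbfactor=1
worker.template.connection_pool_timeout=600
worker.template.activation=active

# per-worker overrides
worker.worker1.reference=worker.template
worker.worker1.host=192.168.100.1
worker.worker1.port=8010
worker.worker1.lbfactor=75

worker.worker2.reference=worker.template
worker.worker2.host=192.168.100.2
worker.worker2.port=8010
worker.worker2.lbfactor=100

(and similarly for worker6/worker7, plus the two lb workers as you
already have them)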

> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:article_finder_admin#
> WARNUNG: Context manager doesn't exist:akademie#
> WARNUNG: Context manager doesn't exist:akademie#
> WARNUNG: Context manager doesn't exist:extranet#
> WARNUNG: Context manager doesn't exist:extranet#
> WARNUNG: Context manager doesn't exist:extranet#
> WARNUNG: Context manager doesn't exist:extranet#

(Looks like there's a missing localized error message, there. What
version of TC are you running?)

> The Context Managers for article_Finder_admin, akademie and extranet are
> Virtual Hosts in LoadbalancerA and doesn´t exists in conf directory of
> Tomcat2.
> 
> We define the Cluster Valve inside the Engine (remark, these block was eqal
> on every server.xml, or have we to customize these valve???) :

What about the <Engine> block? Have you properly set the jvmRoute for
each of those on their respective servers?
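
For example (assuming 192.168.100.1 is Server A, and that your routes
follow the worker names), something along these lines:

Server A, Tomcat1:
  <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
Server A, Tomcat2:
  <Engine name="Catalina" defaultHost="localhost" jvmRoute="worker6">

with worker2/worker7 on Server B. The jvmRoute has to match the mod_jk
route (which defaults to the worker name), otherwise sticky sessions
won't stick.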

Also, the <Membership> should probably be different on each server...
you need to specify the "address" attribute differently, I think.

> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>                  channelSendOptions="8">
>           <Manager className="org.apache.catalina.ha.session.DeltaManager"
>                    expireSessionsOnShutdown="false"
>                    notifyListenersOnReplication="true"/>

So, you are doing both session stickiness /and/ session replication?

If all 4 TC instances are members of the same cluster, then you should
expect all the errors you are getting. Try re-reading
http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html (I know
you've at least visited it, because your configuration is copy/pasted
directly from that page), especially this part:

"
Also when using the delta manager it will replicate to all nodes, even
nodes that don't have the application deployed.
"

You might want to consider configuring your clusters separately instead
of one large cluster.

There is another suggestion in that documentation for getting around
this problem. I'll let you read the docs to find it.

-chris
