Posted to users@tomcat.apache.org by Tim K <ti...@gmail.com> on 2019/01/09 15:39:31 UTC

StaticMembers within Multiple Clusters

I'm trying to split 4 separate Tomcat instances into 2 clusters (2x2) to
try to avoid the all-to-all traffic, but even when setting up the Receiver
and static members so that each node only speaks to 1 other instance, some
instances still seem to find and add members outside of the defined config
to the wrong cluster.  I read that mcast is still used when you have
StaticMembers; could that be causing this issue?

Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Tim,

On 1/25/23 11:26, Tim K wrote:
>> Can you post the rest of that stack trace?
> Yes, here are 2 stack traces that were encountered.  We basically had
> the cluster working for a few years.  We introduced a new Valve for
> authentication purposes.  Also, with this change we had to set a proxy
> in CATALINA_OPTS, I'm not sure if that affected local communication
> between the nodes?  For now we commented out the cluster on each of
> our nodes in order to have it running.
> 
> Exception in thread "Tribes-Task-Receiver[Catalina-Channel]-9"
> java.lang.NoClassDefFoundError: Could not initialize class org.apache.catalina.tribes.ChannelException
>         at org.apache.catalina.tribes.transport.nio.ParallelNioSender.sendMessage(ParallelNioSender.java:110)
>         at org.apache.catalina.tribes.transport.nio.PooledParallelSender.sendMessage(PooledParallelSender.java:51)
>         at org.apache.catalina.tribes.transport.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:65)
>         at org.apache.catalina.tribes.group.ChannelCoordinator.sendMessage(ChannelCoordinator.java:83)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
>         at org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor.sendMessage(MessageDispatchInterceptor.java:93)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
>         at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.sendMessage(TcpFailureDetector.java:89)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
>         at org.apache.catalina.tribes.group.interceptors.EncryptInterceptor.sendMessage(EncryptInterceptor.java:127)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
>         at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:280)
>         at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:231)
>         at org.apache.catalina.tribes.group.RpcChannel.messageReceived(RpcChannel.java:171)
>         at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:345)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
>         at org.apache.catalina.tribes.group.interceptors.EncryptInterceptor.messageReceived(EncryptInterceptor.java:148)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
>         at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.messageReceived(TcpPingInterceptor.java:182)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
>         at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:114)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
>         at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
>         at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:288)
>         at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:272)
>         at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:228)
>         at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:103)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:750)

Hmm... no "caused by"? That's disappointing. NCDFE usually means "you 
done broke your install" but it says that the initialization of the 
class failed, not that the class wasn't actually found.

Any way you can attach a debugger and find out what's causing that to be 
thrown?

>> What was your previous version of Tomcat?
> We were always on version 9.  We keep it pretty much up to date with
> the latest available.  I am not sure of the sub-version we started at
> where it was working.  I'm guessing it was whatever version was the
> latest around Jan-2019.
> 
>> Did you upgrade all nodes at the same time, or are you upgrading a single node in the cluster?
> All are at the same version, we have 4, they all get updated at the same time.
> 
>> How did you upgrade (e.g. installer, unzip/untar/etc.)?
> untar

Yeah, that definitely smells weird. I have no specific advice (other 
than "try a debugger") but I do want to at least validate that it 
doesn't look like you are doing anything wrong, or misinterpreting 
something obvious :)

-chris



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
> Can you post the rest of that stack trace?
Yes, here are 2 stack traces that were encountered.  We basically had
the cluster working for a few years.  We introduced a new Valve for
authentication purposes.  Also, with this change we had to set a proxy
in CATALINA_OPTS; I'm not sure if that affected local communication
between the nodes.  For now we commented out the cluster on each of
our nodes in order to have it running.

Exception in thread "Tribes-Task-Receiver[Catalina-Channel]-9"
java.lang.NoClassDefFoundError: Could not initialize class org.apache.catalina.tribes.ChannelException
        at org.apache.catalina.tribes.transport.nio.ParallelNioSender.sendMessage(ParallelNioSender.java:110)
        at org.apache.catalina.tribes.transport.nio.PooledParallelSender.sendMessage(PooledParallelSender.java:51)
        at org.apache.catalina.tribes.transport.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:65)
        at org.apache.catalina.tribes.group.ChannelCoordinator.sendMessage(ChannelCoordinator.java:83)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
        at org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor.sendMessage(MessageDispatchInterceptor.java:93)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
        at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.sendMessage(TcpFailureDetector.java:89)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
        at org.apache.catalina.tribes.group.interceptors.EncryptInterceptor.sendMessage(EncryptInterceptor.java:127)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
        at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:280)
        at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:231)
        at org.apache.catalina.tribes.group.RpcChannel.messageReceived(RpcChannel.java:171)
        at org.apache.catalina.tribes.group.GroupChannel.messageReceived(GroupChannel.java:345)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
        at org.apache.catalina.tribes.group.interceptors.EncryptInterceptor.messageReceived(EncryptInterceptor.java:148)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
        at org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor.messageReceived(TcpPingInterceptor.java:182)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
        at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.messageReceived(TcpFailureDetector.java:114)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
        at org.apache.catalina.tribes.group.ChannelInterceptorBase.messageReceived(ChannelInterceptorBase.java:96)
        at org.apache.catalina.tribes.group.ChannelCoordinator.messageReceived(ChannelCoordinator.java:288)
        at org.apache.catalina.tribes.transport.ReceiverBase.messageDataReceived(ReceiverBase.java:272)
        at org.apache.catalina.tribes.transport.nio.NioReplicationTask.drainChannel(NioReplicationTask.java:228)
        at org.apache.catalina.tribes.transport.nio.NioReplicationTask.run(NioReplicationTask.java:103)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:750)

WARNING [https-jsse-nio-9443-exec-22]
org.apache.catalina.tribes.transport.nio.ParallelNioSender.keepalive
Error during keepalive test for
sender:[org.apache.catalina.tribes.transport.nio.NioSender@ee16415]
        java.nio.channels.NotYetConnectedException
                at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:258)
                at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:299)
                at org.apache.catalina.tribes.transport.nio.NioSender.read(NioSender.java:175)
                at org.apache.catalina.tribes.transport.nio.ParallelNioSender.keepalive(ParallelNioSender.java:395)
                at org.apache.catalina.tribes.transport.PooledSender.returnSender(PooledSender.java:48)
                at org.apache.catalina.tribes.transport.nio.PooledParallelSender.sendMessage(PooledParallelSender.java:57)
                at org.apache.catalina.tribes.transport.ReplicationTransmitter.sendMessage(ReplicationTransmitter.java:65)
                at org.apache.catalina.tribes.group.ChannelCoordinator.sendMessage(ChannelCoordinator.java:83)
                at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
                at org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor.sendMessage(MessageDispatchInterceptor.java:93)
                at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
                at org.apache.catalina.tribes.group.interceptors.TcpFailureDetector.sendMessage(TcpFailureDetector.java:89)
                at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
                at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
                at org.apache.catalina.tribes.group.interceptors.EncryptInterceptor.sendMessage(EncryptInterceptor.java:127)
                at org.apache.catalina.tribes.group.ChannelInterceptorBase.sendMessage(ChannelInterceptorBase.java:89)
                at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:280)
                at org.apache.catalina.tribes.group.GroupChannel.send(GroupChannel.java:231)
                at org.apache.catalina.tribes.tipis.LazyReplicatedMap.publishEntryInfo(LazyReplicatedMap.java:189)
                at org.apache.catalina.tribes.tipis.AbstractReplicatedMap.put(AbstractReplicatedMap.java:1177)
                at org.apache.catalina.tribes.tipis.AbstractReplicatedMap.put(AbstractReplicatedMap.java:1159)
                at org.apache.catalina.session.ManagerBase.add(ManagerBase.java:722)
                at org.apache.catalina.session.StandardSession.setId(StandardSession.java:359)
                at org.apache.catalina.ha.session.DeltaSession.setId(DeltaSession.java:327)
                at org.apache.catalina.ha.session.DeltaSession.setId(DeltaSession.java:345)
                at org.apache.catalina.session.ManagerBase.createSession(ManagerBase.java:763)
                at org.apache.catalina.connector.Request.doGetSession(Request.java:3104)
                at org.apache.catalina.connector.Request.getSessionInternal(Request.java:2757)
                ... remove a few lines ...
                at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:625)
                at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:687)
                at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:135)
                at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
                at org.apache.catalina.valves.rewrite.RewriteValve.invoke(RewriteValve.java:555)
                at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
                at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:183)
                at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:329)
                at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:360)
                at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:399)
                at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
                at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:891)
                at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1784)
                at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
                at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191)
                at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659)
                at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
                at java.lang.Thread.run(Thread.java:750)

> What was your previous version of Tomcat?
We were always on version 9.  We keep it pretty much up to date with
the latest available.  I am not sure of the sub-version we started at
where it was working.  I'm guessing it was whatever version was the
latest around Jan-2019.

> Did you upgrade all nodes at the same time, or are you upgrading a single node in the cluster?
All are at the same version, we have 4, they all get updated at the same time.

> How did you upgrade (e.g. installer, unzip/untar/etc.)?
untar

Thanks,
Tim



Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Tim,

On 1/15/23 12:26, Tim K wrote:
> I hate to bring back my original thread and I am probably not doing
> this correctly, but I've been seeing this message occur on my cluster.
> My tomcat is now at 9.0.70.  Possibly there was a breaking change
> since I first started using the cluster?
> 
> java.lang.NoClassDefFoundError: Could not initialize class
> org.apache.catalina.tribes.ChannelException

There was a new type of message introduced recently, but it should have 
been both forward- and backward-compatible.

Can you post the rest of that stack trace? What was your previous 
version of Tomcat? Did you upgrade all nodes at the same time, or are 
you upgrading a single node in the cluster? How did you upgrade (e.g. 
installer, unzip/untar/etc.)?

-chris



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
I hate to bring back my original thread and I am probably not doing
this correctly, but I've been seeing this message occur on my cluster.
My tomcat is now at 9.0.70.  Possibly there was a breaking change
since I first started using the cluster?

java.lang.NoClassDefFoundError: Could not initialize class
org.apache.catalina.tribes.ChannelException



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Tue, Feb 12, 2019, 3:17 AM Keiichi Fujino <kf...@apache.org> wrote:

>
> Are you using SSO(org.apache.catalina.authenticator.SingleSignOn)?
> DeltaManager/BackupManager replicate sessions. They do not replicate SSO
> entries.
>
> If you want to replicate SSO Entry in cluster, you can use
> ClusterSingleSignOn.
>
>
> http://tomcat.apache.org/tomcat-9.0-doc/config/cluster-valve.html#org.apache.catalina.ha.authenticator.ClusterSingleSignOn
>
>
>
> --
> Keiichi.Fujino
>

Yes.  I tried adding a Valve element for ClusterSingleSignOn to my Cluster
(removed the JvmRouteBinderValve I had) and upon login, I'm noticing that
subsequent calls to my app are removing my SSO cookie, but I don't
understand why.  The cookie successfully gets created, but a subsequent call
immediately removes it.

>

Re: StaticMembers within Multiple Clusters

Posted by Keiichi Fujino <kf...@apache.org>.
2019年2月12日(火) 1:28 Tim K <ti...@gmail.com>:

> On Fri, Jan 18, 2019, 12:44 PM Tim K <ti...@gmail.com> wrote:
>
> > On Fri, Jan 18, 2019 at 11:05 AM Christopher Schultz
> > <ch...@christopherschultz.net> wrote:
> > >
> > > -----BEGIN PGP SIGNED MESSAGE-----
> > > Hash: SHA256
> > >
> > > Tim,
> > >
> > > On 1/18/19 06:38, Tim K wrote:
> > > > Thanks for this.  The video helps explain it a bit better than the
> > > > documentation.  So I set it up with a backup manager instead of the
> > > > delta manager, changing the channelSendOptions to 6 for the
> > > > cluster.
> > >
> > > If you think you can help clarify the documentation, patches are of
> > > course always welcome.
> > >
> > > > From a maintenance standpoint, what is the best way to stop/start
> > > > the nodes without losing sessions; one at a time, letting it fully
> > > > come up before moving on to the next one (like a ripple restart)?
> > > > I presume you don't want too many nodes to be down at a single
> > > > time.
> > >
> > > I definitely wouldn't bring two down simultaneously if you can avoid
> > > it. The cluster needs time to re-stabilize after the loss of a member,
> > > meaning that new backup nodes must be selected for each session and
> > > then the sessions must be transmitted to those backup nodes. If you
> > > have small amounts of data in the sessions, this will probably be
> > > fairly fast. If you have lots of data or a very busy network, it will
> > > take longer.
> > >
> > > I would recommend setting up a scenario (even in production) where you
> > > intentionally disable a node in the cluster and watch to see how long
> > > the cluster takes to re-stabilize. I think you'll learn a lot from
> > > that exercise and it will help you plan for scheduled maintenance and
> > > downtime.
> > >
> > > - -chris
> >
> > Is there a way to tell which server was assigned as the primary and
> > backup roles?
> >
> > When I stop a member, is it this line which would tell me how long it
> > took to sync up the sessions?
> > Relocation of map entries was complete in [X] ms.
> >
> > Another question, I'm using the StaticMembershipService; do I need to
> > define a LocalMember for each of my nodes or is that optional/assumed?
> >
> > Also, I recall reading something about the uniqueId might not really
> > be used?  Do I need to set that for each member?
> >
>
>
> I'm noticing my SSO cookie is being removed when I force myself to another
> node.  Is this a bug?
>
>
>
Are you using SSO(org.apache.catalina.authenticator.SingleSignOn)?
DeltaManager/BackupManager replicate sessions. They do not replicate SSO
entries.

If you want to replicate SSO Entry in cluster, you can use
ClusterSingleSignOn.

http://tomcat.apache.org/tomcat-9.0-doc/config/cluster-valve.html#org.apache.catalina.ha.authenticator.ClusterSingleSignOn
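
A rough sketch of what that could look like, assuming the valve is nested
inside the Cluster element next to the other cluster valves from the config
posted earlier in this thread (please check the linked page for the exact
placement and attributes):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
        <!-- Manager, Channel etc. as in the config posted earlier -->
        <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
        <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
        <!-- replicates SSO entries so a failover node still accepts the SSO cookie -->
        <Valve className="org.apache.catalina.ha.authenticator.ClusterSingleSignOn"/>
        <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>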



-- 
Keiichi.Fujino

Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Fri, Jan 18, 2019, 12:44 PM Tim K <ti...@gmail.com> wrote:

> On Fri, Jan 18, 2019 at 11:05 AM Christopher Schultz
> <ch...@christopherschultz.net> wrote:
> >
> > -----BEGIN PGP SIGNED MESSAGE-----
> > Hash: SHA256
> >
> > Tim,
> >
> > On 1/18/19 06:38, Tim K wrote:
> > > Thanks for this.  The video helps explain it a bit better than the
> > > documentation.  So I set it up with a backup manager instead of the
> > > delta manager, changing the channelSendOptions to 6 for the
> > > cluster.
> >
> > If you think you can help clarify the documentation, patches are of
> > course always welcome.
> >
> > > From a maintenance standpoint, what is the best way to stop/start
> > > the nodes without losing sessions; one at a time, letting it fully
> > > come up before moving on to the next one (like a ripple restart)?
> > > I presume you don't want too many nodes to be down at a single
> > > time.
> >
> > I definitely wouldn't bring two down simultaneously if you can avoid
> > it. The cluster needs time to re-stabilize after the loss of a member,
> > meaning that new backup nodes must be selected for each session and
> > then the sessions must be transmitted to those backup nodes. If you
> > have small amounts of data in the sessions, this will probably be
> > fairly fast. If you have lots of data or a very busy network, it will
> > take longer.
> >
> > I would recommend setting up a scenario (even in production) where you
> > intentionally disable a node in the cluster and watch to see how long
> > the cluster takes to re-stabilize. I think you'll learn a lot from
> > that exercise and it will help you plan for scheduled maintenance and
> > downtime.
> >
> > - -chris
>
> Is there a way to tell which server was assigned as the primary and
> backup roles?
>
> When I stop a member, is it this line which would tell me how long it
> took to sync up the sessions?
> Relocation of map entries was complete in [X] ms.
>
> Another question, I'm using the StaticMembershipService; do I need to
> define a LocalMember for each of my nodes or is that optional/assumed?
>
> Also, I recall reading something about the uniqueId might not really
> be used?  Do I need to set that for each member?
>


I'm noticing my SSO cookie is being removed when I force myself to another
node.  Is this a bug?

>

Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Fri, Jan 18, 2019 at 11:05 AM Christopher Schultz
<ch...@christopherschultz.net> wrote:
>
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Tim,
>
> On 1/18/19 06:38, Tim K wrote:
> > Thanks for this.  The video helps explain it a bit better than the
> > documentation.  So I set it up with a backup manager instead of the
> > delta manager, changing the channelSendOptions to 6 for the
> > cluster.
>
> If you think you can help clarify the documentation, patches are of
> course always welcome.
>
> > From a maintenance standpoint, what is the best way to stop/start
> > the nodes without losing sessions; one at a time, letting it fully
> > come up before moving on to the next one (like a ripple restart)?
> > I presume you don't want too many nodes to be down at a single
> > time.
>
> I definitely wouldn't bring two down simultaneously if you can avoid
> it. The cluster needs time to re-stabilize after the loss of a member,
> meaning that new backup nodes must be selected for each session and
> then the sessions must be transmitted to those backup nodes. If you
> have small amounts of data in the sessions, this will probably be
> fairly fast. If you have lots of data or a very busy network, it will
> take longer.
>
> I would recommend setting up a scenario (even in production) where you
> intentionally disable a node in the cluster and watch to see how long
> the cluster takes to re-stabilize. I think you'll learn a lot from
> that exercise and it will help you plan for scheduled maintenance and
> downtime.
>
> - -chris

Is there a way to tell which server was assigned as the primary and
backup roles?

When I stop a member, is it this line which would tell me how long it
took to sync up the sessions?
Relocation of map entries was complete in [X] ms.

Another question, I'm using the StaticMembershipService; do I need to
define a LocalMember for each of my nodes or is that optional/assumed?

Also, I recall reading that the uniqueId might not really be used;
do I need to set that for each member?



Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Tim,

On 1/18/19 06:38, Tim K wrote:
> Thanks for this.  The video helps explain it a bit better than the 
> documentation.  So I set it up with a backup manager instead of the
> delta manager, changing the channelSendOptions to 6 for the
> cluster.

If you think you can help clarify the documentation, patches are of
course always welcome.

> From a maintenance standpoint, what is the best way to stop/start
> the nodes without losing sessions; one at a time, letting it fully
> come up before moving on to the next one (like a ripple restart)?
> I presume you don't want too many nodes to be down at a single
> time.

I definitely wouldn't bring two down simultaneously if you can avoid
it. The cluster needs time to re-stabilize after the loss of a member,
meaning that new backup nodes must be selected for each session and
then the sessions must be transmitted to those backup nodes. If you
have small amounts of data in the sessions, this will probably be
fairly fast. If you have lots of data or a very busy network, it will
take longer.

I would recommend setting up a scenario (even in production) where you
intentionally disable a node in the cluster and watch to see how long
the cluster takes to re-stabilize. I think you'll learn a lot from
that exercise and it will help you plan for scheduled maintenance and
downtime.

- -chris



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Fri, Jan 18, 2019, 4:55 AM Mark Thomas <markt@apache.org wrote:

> On 18/01/2019 01:40, Tim K wrote:
> > On Thu, Jan 17, 2019, 3:36 PM Mark Thomas <markt@apache.org wrote:
> >
> >> On 17/01/2019 15:28, Tim K wrote:
> >>
> >>> With the DeltaManager, instead of it notifying all nodes when sessions
> >> get
> >>> established, is there a way for it to only share that single node's
> >>> sessions during a shutdown event of that particular node?  For example,
> >>> node 1 of 8 has 5 sessions on it.  When node 1 is shut down, only then
> >> will
> >>> it replicate those 5 sessions to the other 7 nodes...  This would
> reduce
> >>> the all-to-all traffic that would be occurring constantly but would put
> >>> more weight on a shutdown event... Also, another wrinkle, the shutdown
> >> port
> >>> is disabled in server.xml for security reasons.
> >>
> >> None of the managers can be configured to replicate only on shutdown.
> >>
> >> The BackupManager is closer to the behaviour you describe. Note it is a
> >> common misconception that in the BackupManager a single node acts as a
> >> backup for all the other nodes. This is incorrect. Backups are
> >> distributed on a round-robin basis between all other nodes.
> >>
> >> The shutdown port being disabled should be a non-issue. As long as you
> >> kill -15 rather than kill -9 then you'll get a clean shutdown.
> >>
> >> Mark
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> >> For additional commands, e-mail: users-help@tomcat.apache.org
> >
> >
> > So to switch to Backup manager, just swap out the DeltaManager for it on
> > each server or just one?
>
> All of them.
>
> >  You're right, it does seem the backup manager was
> > a single node, even after looking again at the documentation.  Is the
> > backup manager "battle tested" enough since the documentation was
> written?
>
> Yes.
>
> > Is there less network traffic with backup manager?
>
> Yes.
>
> > My nodes all have the
> > same single app installed, I'm not sure what the advantage would be in
> > using the backup manager.
>
> Less network traffic.
>
> See:
> https://www.youtube.com/watch?v=6LoAdy9-jBI
>
> particularly from around 29:30
>
> Mark
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org


Thanks for this.  The video helps explain it a bit better than the
documentation.  So I set it up with a backup manager instead of the delta
manager, changing the channelSendOptions to 6 for the cluster.
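
In server.xml terms the change was roughly just the Manager class and the
channelSendOptions value, something like:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="6">

        <!-- BackupManager keeps one backup copy of each session on another
             node (chosen round-robin) instead of copying it to every member -->
        <Manager className="org.apache.catalina.ha.session.BackupManager"
                 notifyListenersOnReplication="true"/>

        <!-- Channel, Receiver, Sender, interceptors and cluster valves
             unchanged from the earlier config -->
</Cluster>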

From a maintenance standpoint, what is the best way to stop/start the nodes
without losing sessions; one at a time, letting it fully come up before
moving on to the next one (like a ripple restart)?  I presume you don't
want too many nodes to be down at a single time.

Re: StaticMembers within Multiple Clusters

Posted by Mark Thomas <ma...@apache.org>.
On 18/01/2019 01:40, Tim K wrote:
> On Thu, Jan 17, 2019, 3:36 PM Mark Thomas <markt@apache.org wrote:
> 
>> On 17/01/2019 15:28, Tim K wrote:
>>
>>> With the DeltaManager, instead of it notifying all nodes when sessions
>> get
>>> established, is there a way for it to only share that single node's
>>> sessions during a shutdown event of that particular node?  For example,
>>> node 1 of 8 has 5 sessions on it.  When node 1 is shut down, only then
>> will
>>> it replicate those 5 sessions to the other 7 nodes...  This would reduce
>>> the all-to-all traffic that would be occurring constantly but would put
>>> more weight on a shutdown event... Also, another wrinkle, the shutdown
>> port
>>> is disabled in server.xml for security reasons.
>>
>> None of the managers can be configured to replicate only on shutdown.
>>
>> The BackupManager is closer to the behaviour you describe. Note it is a
>> common misconception that in the BackupManager a single node acts as a
>> backup for all the other nodes. This is incorrect. Backups are
>> distributed on a round-robin basis between all other nodes.
>>
>> The shutdown port being disabled should be a non-issue. As long as you
>> kill -15 rather than kill -9 then you'll get a clean shutdown.
>>
>> Mark
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
> 
> 
> So to switch to Backup manager, just swap out the DeltaManager for it on
> each server or just one?

All of them.

>  You're right, it does seem the backup manager was
> a single node, even after looking again at the documentation.  Is the
> backup manager "battle tested" enough since the documentation was written?

Yes.

> Is there less network traffic with backup manager?

Yes.

> My nodes all have the
> same single app installed, I'm not sure what the advantage would be in
> using the backup manager.

Less network traffic.

See:
https://www.youtube.com/watch?v=6LoAdy9-jBI

particularly from around 29:30

Mark





Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Thu, Jan 17, 2019, 3:36 PM Mark Thomas <markt@apache.org wrote:

> On 17/01/2019 15:28, Tim K wrote:
>
> > With the DeltaManager, instead of it notifying all nodes when sessions
> get
> > established, is there a way for it to only share that single node's
> > sessions during a shutdown event of that particular node?  For example,
> > node 1 of 8 has 5 sessions on it.  When node 1 is shut down, only then
> will
> > it replicate those 5 sessions to the other 7 nodes...  This would reduce
> > the all-to-all traffic that would be occurring constantly but would put
> > more weight on a shutdown event... Also, another wrinkle, the shutdown
> port
> > is disabled in server.xml for security reasons.
>
> None of the managers can be configured to replicate only on shutdown.
>
> The BackupManager is closer to the behaviour you describe. Note it is a
> common misconception that in the BackupManager a single node acts as a
> backup for all the other nodes. This is incorrect. Backups are
> distributed on a round-robin basis between all other nodes.
>
> The shutdown port being disabled should be a non-issue. As long as you
> kill -15 rather than kill -9 then you'll get a clean shutdown.
>
> Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org


So to switch to the BackupManager, do I just swap out the DeltaManager for it
on each server, or just one?  You're right, even after looking at the
documentation again, it did seem like the backup manager was a single node.
Is the backup manager "battle tested" enough since the documentation was
written?  Is there less network traffic with the backup manager?  My nodes
all have the same single app installed, so I'm not sure what the advantage
would be in using the backup manager.

Re: StaticMembers within Multiple Clusters

Posted by Mark Thomas <ma...@apache.org>.
On 17/01/2019 15:28, Tim K wrote:

> With the DeltaManager, instead of it notifying all nodes when sessions get
> established, is there a way for it to only share that single node's
> sessions during a shutdown event of that particular node?  For example,
> node 1 of 8 has 5 sessions on it.  When node 1 is shut down, only then will
> it replicate those 5 sessions to the other 7 nodes...  This would reduce
> the all-to-all traffic that would be occurring constantly but would put
> more weight on a shutdown event... Also, another wrinkle, the shutdown port
> is disabled in server.xml for security reasons.

None of the managers can be configured to replicate only on shutdown.

The BackupManager is closer to the behaviour you describe. Note it is a
common misconception that in the BackupManager a single node acts as a
backup for all the other nodes. This is incorrect. Backups are
distributed on a round-robin basis between all other nodes.

The shutdown port being disabled should be a non-issue. As long as you
kill -15 rather than kill -9 then you'll get a clean shutdown.

Mark



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Tue, Jan 15, 2019, 3:14 PM Mark Thomas <markt@apache.org wrote:

> On 15/01/2019 18:36, Tim K wrote:
>
> > Question: what's considered a "large" cluster?  I've seen a lot of
> > documentation about small vs large but I'd like to know what is
> considered
> > large.  Could the DeltaManager handle one single cluster (all-to-all)
> with
> > 8 members with 8GB allocated to each jvm, separate servers?  Not storing
> > much in the session besides 3-4 short strings.
>
> It depends more on the frequency and size of session updates.
>
> With the DeltaManager traffic volume is proportional to n(n-1) where n
> is the number of nodes. With the BackupManager it is proportional to n.
>
> With 8 nodes the DeltaManager generates 7 times the cluster traffic that
> the BackupManager generates. Whether your network will cope with that will
> depend on the app and usage pattern.
>
> Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org


With the DeltaManager, instead of it notifying all nodes when sessions get
established, is there a way for it to only share that single node's
sessions during a shutdown event of that particular node?  For example,
node 1 of 8 has 5 sessions on it.  When node 1 is shut down, only then will
it replicate those 5 sessions to the other 7 nodes...  This would reduce
the all-to-all traffic that would be occurring constantly but would put
more weight on a shutdown event... Also, another wrinkle, the shutdown port
is disabled in server.xml for security reasons.

Re: StaticMembers within Multiple Clusters

Posted by Mark Thomas <ma...@apache.org>.
On 15/01/2019 18:36, Tim K wrote:

> Question: what's considered a "large" cluster?  I've seen a lot of
> documentation about small vs large but I'd like to know what is considered
> large.  Could the DeltaManager handle one single cluster (all-to-all) with
> 8 members with 8GB allocated to each jvm, separate servers?  Not storing
> much in the session besides 3-4 short strings.

It depends more on the frequency and size of session updates.

With the DeltaManager traffic volume is proportional to n(n-1) where n
is the number of nodes. With the BackupManager it is proportional to n.

With 8 nodes the DeltaManager generates 7 times the cluster traffic that
the BackupManager generates. Whether your network will cope with that will
depend on the app and usage pattern.

Mark



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Tue, Jan 15, 2019, 12:51 PM Tim K <tim.k.5967@gmail.com wrote:

> On Tue, Jan 15, 2019, 4:10 AM Keiichi Fujino <kfujino@apache.org wrote:
>
>> Hi
>>
>> If you use StaticMembershipInterceptor, you must set the
>> Cluster#channelStartOptions to 3 to avoid starting membershipservice.
>> If you are using Tomcat 9, you can also use StaticMembershipService
>> instead
>> of StaticMembershipInterceptor.
>>
>>
>> 2019年1月10日(木) 22:39 Tim K <ti...@gmail.com>:
>>
>> > On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz <
>> > chris@christopherschultz.net wrote:
>> >
>> > > -----BEGIN PGP SIGNED MESSAGE-----
>> > > Hash: SHA256
>> > >
>> > > Tim,
>> > >
>> > > On 1/9/19 10:39, Tim K wrote:
>> > > > I'm trying to split 4 separate tomcat instances into 2 clusters
>> > > > (2x2) to try and avoid the all-to-all traffic, but even when
>> > > > setting up the Receiver and Static members to only speak to 1 other
>> > > > instance, some still seems to find and add the other members
>> > > > outside of the defined config to the wrong cluster.  I read that
>> > > > mcast is still used when you have StaticMembers, could that be
>> > > > causing this issue?
>> > >
>> > > Multicast is only used for membership, so if you are using static,
>> > > there should be no multicast.
>> > >
>> > > Do you want to post your configuration(s)?
>> > >
>> > > - -chris
>> > >
>> > > ---------------------------------------------------------------------
>> > > To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> > > For additional commands, e-mail: users-help@tomcat.apache.org
>> >
>> >
>> > Essentially I'm trying to have server1 and server2 only in cluster1 and
>> > server3 and server4 in only cluster2, but for some reason, members are
>> > getting added to clusters that they aren't configured for.
>> >
>> >
>> >
>> > server1 config:
>> >
>> > <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>> > channelSendOptions="8">
>> >
>> >                 <Manager
>> > className="org.apache.catalina.ha.session.DeltaManager"
>> > expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
>> >
>> >                 <Channel
>> > className="org.apache.catalina.tribes.group.GroupChannel">
>> >
>> >                                 <Receiver
>> > className="org.apache.catalina.tribes.transport.nio.NioReceiver"
>> > address="auto" port="4000" autoBind="100" selectorTimeout="5000"
>> > maxThreads="6"/>
>> >
>> >                                 <Sender
>> > className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>> >
>> >                                                 <Transport
>> >
>> className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>> >
>> >                                 </Sender>
>> >
>> >                                 <Interceptor
>> >
>> >
>> className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
>> > encryptionKey="****Removed****" />
>> >
>> >                                 <Interceptor
>> >
>> >
>> className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>> >
>> >                                 <Interceptor
>> >
>> >
>> className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>> >
>> >                                 <Interceptor
>> >
>> >
>> className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
>> >
>> >                                 <Interceptor
>> >
>> >
>> className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
>> >
>> >                                                 <Member
>> > className="org.apache.catalina.tribes.membership.StaticMember"
>> > host="server2" port="4000" domain="cluster1"
>> > uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> >
>> >                                 </Interceptor>
>> >
>> >                 </Channel>
>> >
>> >                 <Valve
>> > className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
>> >
>> >                 <Valve
>> > className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
>> >
>> >                 <ClusterListener
>> > className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>> >
>> > </Cluster>
>> >
>> >
>> >
>> > server2 [everything the same except the <Member/> is]:
>> >
>> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
>> > host="server1" port="4000" domain="cluster1"
>> > uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> >
>> >
>> >
>> > server3 [everything the same except the <Member/> is]:
>> >
>> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
>> > host="server4" port="4000" domain="cluster2"
>> > uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> >
>> >
>> >
>> > server4 [everything the same except the <Member/> is]:
>> >
>> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
>> > host="server3" port="4000" domain="cluster2"
>> > uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> >
>>
>>
>> --
>> Keiichi.Fujino
>>
>
> I set Cluster#channelStartOptions to 3, continuing to use the StaticMembershipInterceptor
> for now with tomcat 9.  I can confirm this setting did prevent
> non-configured servers from getting added to clusters they were not
> configured for, but it doesn't appear to persist the user session
> anymore.  I login and get a session on server1 and then stop it, expecting
> to fail over to server2, but it's not picking up the session started on the
> other member of the cluster.
>

I tried the StaticMembershipService instead
of StaticMembershipInterceptor.  I had to remove the
Cluster#channelStartOptions=3
to get it working (guess that was only needed for the
StaticMembershipInterceptor?).
I added a Membership element (above the Receiver) with the
StaticMembershipService
class, then put both a LocalMember and Member within it.  It appeared to
work.
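
For server1 in the 2x2 layout, that Membership element looked roughly like
this (the attribute values below are reconstructed from the StaticMember
config I posted earlier, so they may not match my exact file):

<Channel className="org.apache.catalina.tribes.group.GroupChannel">

        <Membership className="org.apache.catalina.tribes.membership.StaticMembershipService">
                <!-- this node itself -->
                <LocalMember className="org.apache.catalina.tribes.membership.StaticMember"
                        domain="cluster1"
                        uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
                <!-- the one other node in this cluster -->
                <Member className="org.apache.catalina.tribes.membership.StaticMember"
                        host="server2" port="4000" domain="cluster1"
                        uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
        </Membership>

        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                address="auto" port="4000" autoBind="100" selectorTimeout="5000"
                maxThreads="6"/>

        <!-- Sender and the remaining interceptors as before, minus the
             StaticMembershipInterceptor block -->
</Channel>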

>
Question: what's considered a "large" cluster?  I've seen a lot of
documentation about small vs large but I'd like to know what is considered
large.  Could the DeltaManager handle one single cluster (all-to-all) with
8 members with 8GB allocated to each jvm, separate servers?  Not storing
much in the session besides 3-4 short strings.

Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Tue, Jan 15, 2019, 4:10 AM Keiichi Fujino <kfujino@apache.org wrote:

> Hi
>
> If you use StaticMembershipInterceptor, you must set the
> Cluster#channelStartOptions to 3 to avoid starting membershipservice.
> If you are using Tomcat 9, you can also use StaticMembershipService instead
> of StaticMembershipInterceptor.
>
>
> 2019年1月10日(木) 22:39 Tim K <ti...@gmail.com>:
>
> > On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz <
> > chris@christopherschultz.net wrote:
> >
> > > -----BEGIN PGP SIGNED MESSAGE-----
> > > Hash: SHA256
> > >
> > > Tim,
> > >
> > > On 1/9/19 10:39, Tim K wrote:
> > > > I'm trying to split 4 separate tomcat instances into 2 clusters
> > > > (2x2) to try and avoid the all-to-all traffic, but even when
> > > > setting up the Receiver and Static members to only speak to 1 other
> > > > instance, some still seems to find and add the other members
> > > > outside of the defined config to the wrong cluster.  I read that
> > > > mcast is still used when you have StaticMembers, could that be
> > > > causing this issue?
> > >
> > > Multicast is only used for membership, so if you are using static,
> > > there should be no multicast.
> > >
> > > Do you want to post your configuration(s)?
> > >
> > > - -chris
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> > > For additional commands, e-mail: users-help@tomcat.apache.org
> >
> >
> > Essentially I'm trying to have server1 and server2 only in cluster1 and
> > server3 and server4 in only cluster2, but for some reason, members are
> > getting added to clusters that they aren't configured for.
> >
> >
> >
> > server1 config:
> >
> > <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
> > channelSendOptions="8">
> >
> >                 <Manager
> > className="org.apache.catalina.ha.session.DeltaManager"
> > expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
> >
> >                 <Channel
> > className="org.apache.catalina.tribes.group.GroupChannel">
> >
> >                                 <Receiver
> > className="org.apache.catalina.tribes.transport.nio.NioReceiver"
> > address="auto" port="4000" autoBind="100" selectorTimeout="5000"
> > maxThreads="6"/>
> >
> >                                 <Sender
> > className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
> >
> >                                                 <Transport
> >
> className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
> >
> >                                 </Sender>
> >
> >                                 <Interceptor
> >
> >
> className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
> > encryptionKey="****Removed****" />
> >
> >                                 <Interceptor
> >
> >
> className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
> >
> >                                 <Interceptor
> >
> >
> className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
> >
> >                                 <Interceptor
> >
> >
> className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
> >
> >                                 <Interceptor
> >
> >
> className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
> >
> >                                                 <Member
> > className="org.apache.catalina.tribes.membership.StaticMember"
> > host="server2" port="4000" domain="cluster1"
> > uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> >
> >                                 </Interceptor>
> >
> >                 </Channel>
> >
> >                 <Valve
> > className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
> >
> >                 <Valve
> > className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
> >
> >                 <ClusterListener
> > className="org.apache.catalina.ha.session.ClusterSessionListener"/>
> >
> > </Cluster>
> >
> >
> >
> > server2 [everything the same except the <Member/> is]:
> >
> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
> > host="server1" port="4000" domain="cluster1"
> > uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> >
> >
> >
> > server3 [everything the same except the <Member/> is]:
> >
> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
> > host="server4" port="4000" domain="cluster2"
> > uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> >
> >
> >
> > server4 [everything the same except the <Member/> is]:
> >
> > <Member className="org.apache.catalina.tribes.membership.StaticMember"
> > host="server3" port="4000" domain="cluster2"
> > uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> >
>
>
> --
> Keiichi.Fujino
>

I set Cluster#channelStartOptions to 3, continuing to use the
StaticMembershipInterceptor
for now with tomcat 9.  I can confirm this setting did prevent
non-configured servers from getting added to clusters they were not
configured for, but it doesn't appear to persist the user session anymore.
I login and get a session on server1 and then stop it, expecting to fail
over to server2, but it's not picking up the session started on the other
member of the cluster.
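
For reference, that was just an extra attribute on the Cluster element from
my earlier config; as I understand it, 3 starts only the channel sender and
receiver and leaves the (multicast) membership service stopped:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8" channelStartOptions="3">
        <!-- Manager, Channel, valves etc. as in the config posted earlier -->
</Cluster>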

>

Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Keiichi,

On 1/15/19 04:10, Keiichi Fujino wrote:
> Hi
> 
> If you use StaticMembershipInterceptor, you must set the 
> Cluster#channelStartOptions to 3 to avoid starting
> membershipservice. If you are using Tomcat 9, you can also use
> StaticMembershipService instead of StaticMembershipInterceptor.

Is there any particular reason why the cluster components don't
perform sanity-checks about these kinds of things?

It seems like the StaticMembershipInterceptor could inspect those
options and issue warnings (or even refuse to start) if the
configuration does not make sense.

- -chris

> 2019年1月10日(木) 22:39 Tim K <ti...@gmail.com>:
> 
>> On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz < 
>> chris@christopherschultz.net wrote:
>> 
> Tim,
> 
> On 1/9/19 10:39, Tim K wrote:
>>>>> I'm trying to split 4 separate tomcat instances into 2
>>>>> clusters (2x2) to try and avoid the all-to-all traffic, but
>>>>> even when setting up the Receiver and Static members to
>>>>> only speak to 1 other instance, some still seems to find
>>>>> and add the other members outside of the defined config to
>>>>> the wrong cluster.  I read that mcast is still used when
>>>>> you have StaticMembers, could that be causing this issue?
> 
> Multicast is only used for membership, so if you are using static, 
> there should be no multicast.
> 
> Do you want to post your configuration(s)?
> 
> -chris
>>> 
>> 
>> 
>> Essentially I'm trying to have server1 and server2 only in
>> cluster1 and server3 and server4 in only cluster2, but for some
>> reason, members are getting added to clusters that they aren't
>> configured for.
>> 
>> 
>> 
>> server1 config:
>> 
>> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
>> channelSendOptions="8">
>> 
>> <Manager className="org.apache.catalina.ha.session.DeltaManager" 
>> expireSessionsOnShutdown="false"
>> notifyListenersOnReplication="true"/>
>> 
>> <Channel 
>> className="org.apache.catalina.tribes.group.GroupChannel">
>> 
>> <Receiver 
>> className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
>> address="auto" port="4000" autoBind="100" selectorTimeout="5000" 
>> maxThreads="6"/>
>> 
>> <Sender
>> className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>>
>> <Transport
>> className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>>
>> </Sender>
>>
>> <Interceptor
>> className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
>> encryptionKey="****Removed****" />
>>
>> <Interceptor
>> className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>>
>> <Interceptor
>> className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>>
>> <Interceptor
>> className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
>>
>> <Interceptor
>> className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
>>
>> <Member
>> className="org.apache.catalina.tribes.membership.StaticMember"
>> host="server2" port="4000" domain="cluster1"
>> uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>>
>> </Interceptor>
>>
>> </Channel>
>>
>> <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>> filter=""/>
>>
>> <Valve
>> className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
>>
>> <ClusterListener
>> className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>>
>> </Cluster>
>> 
>> 
>> 
>> server2 [everything the same except the <Member/> is]:
>> 
>> <Member
>> className="org.apache.catalina.tribes.membership.StaticMember" 
>> host="server1" port="4000" domain="cluster1" 
>> uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> 
>> 
>> 
>> server3 [everything the same except the <Member/> is]:
>> 
>> <Member
>> className="org.apache.catalina.tribes.membership.StaticMember" 
>> host="server4" port="4000" domain="cluster2" 
>> uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> 
>> 
>> 
>> server4 [everything the same except the <Member/> is]:
>> 
>> <Member
>> className="org.apache.catalina.tribes.membership.StaticMember" 
>> host="server3" port="4000" domain="cluster2" 
>> uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>> 
> 
> 



Re: StaticMembers within Multiple Clusters

Posted by Keiichi Fujino <kf...@apache.org>.
Hi

If you use the StaticMembershipInterceptor, you must set
Cluster#channelStartOptions to 3 to avoid starting the membership service.
If you are using Tomcat 9, you can also use the StaticMembershipService
instead of the StaticMembershipInterceptor.
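
In server.xml that alternative looks roughly like the sketch below: the
StaticMembershipService becomes the Channel's <Membership> element instead of
an interceptor. Host, port, domain and uniqueId here simply reuse the server1
values posted elsewhere in this thread; check the cluster-membership
documentation for the exact placement and full attribute list.

<Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <!-- Sketch only: replaces the StaticMembershipInterceptor; the
             Receiver, Sender and remaining interceptors stay as in the
             configs posted elsewhere in this thread. -->
        <Membership className="org.apache.catalina.tribes.membership.StaticMembershipService">
                <Member className="org.apache.catalina.tribes.membership.StaticMember"
                        host="server2" port="4000" domain="cluster1"
                        uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
        </Membership>
</Channel>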


On Thu, Jan 10, 2019 at 22:39, Tim K <ti...@gmail.com> wrote:

> On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz <
> chris@christopherschultz.net wrote:
>
> >
> > Tim,
> >
> > On 1/9/19 10:39, Tim K wrote:
> > > I'm trying to split 4 separate tomcat instances into 2 clusters
> > > (2x2) to try and avoid the all-to-all traffic, but even when
> > > setting up the Receiver and Static members to only speak to 1 other
> > > instance, some still seems to find and add the other members
> > > outside of the defined config to the wrong cluster.  I read that
> > > mcast is still used when you have StaticMembers, could that be
> > > causing this issue?
> >
> > Multicast is only used for membership, so if you are using static,
> > there should be no multicast.
> >
> > Do you want to post your configuration(s)?
> >
> > -chris
> >
>
>
> Essentially I'm trying to have server1 and server2 only in cluster1 and
> server3 and server4 in only cluster2, but for some reason, members are
> getting added to clusters that they aren't configured for.
>
>
>
> server1 config:
>
> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
> channelSendOptions="8">
>
>                 <Manager
> className="org.apache.catalina.ha.session.DeltaManager"
> expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>
>
>                 <Channel
> className="org.apache.catalina.tribes.group.GroupChannel">
>
>                                 <Receiver
> className="org.apache.catalina.tribes.transport.nio.NioReceiver"
> address="auto" port="4000" autoBind="100" selectorTimeout="5000"
> maxThreads="6"/>
>
>                                 <Sender
> className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>
>                                                 <Transport
> className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>
>                                 </Sender>
>
>                                 <Interceptor
>
> className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
> encryptionKey="****Removed****" />
>
>                                 <Interceptor
>
> className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>
>                                 <Interceptor
>
> className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>
>                                 <Interceptor
>
> className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
>
>                                 <Interceptor
>
> className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
>
>                                                 <Member
> className="org.apache.catalina.tribes.membership.StaticMember"
> host="server2" port="4000" domain="cluster1"
> uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>
>                                 </Interceptor>
>
>                 </Channel>
>
>                 <Valve
> className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
>
>                 <Valve
> className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
>
>                 <ClusterListener
> className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>
> </Cluster>
>
>
>
> server2 [everything the same except the <Member/> is]:
>
> <Member className="org.apache.catalina.tribes.membership.StaticMember"
> host="server1" port="4000" domain="cluster1"
> uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>
>
>
> server3 [everything the same except the <Member/> is]:
>
> <Member className="org.apache.catalina.tribes.membership.StaticMember"
> host="server4" port="4000" domain="cluster2"
> uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>
>
>
> server4 [everything the same except the <Member/> is]:
>
> <Member className="org.apache.catalina.tribes.membership.StaticMember"
> host="server3" port="4000" domain="cluster2"
> uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
>


-- 
Keiichi.Fujino

Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Tim,

On 1/10/19 08:30, Tim K wrote:
> On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz < 
> chris@christopherschultz.net wrote:
> 
> Tim,
> 
> On 1/9/19 10:39, Tim K wrote:
>>>> I'm trying to split 4 separate tomcat instances into 2
>>>> clusters (2x2) to try and avoid the all-to-all traffic, but
>>>> even when setting up the Receiver and Static members to only
>>>> speak to 1 other instance, some still seems to find and add
>>>> the other members outside of the defined config to the wrong
>>>> cluster.  I read that mcast is still used when you have
>>>> StaticMembers, could that be causing this issue?
> 
> Multicast is only used for membership, so if you are using static, 
> there should be no multicast.
> 
> Do you want to post your configuration(s)?
> 
> -chris
>> 
> 
> 
> Essentially I'm trying to have server1 and server2 only in cluster1
> and server3 and server4 in only cluster2, but for some reason,
> members are getting added to clusters that they aren't configured
> for.
> 
> 
> 
> server1 config:
> 
> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" 
> channelSendOptions="8">
> 
> <Manager className="org.apache.catalina.ha.session.DeltaManager" 
> expireSessionsOnShutdown="false"
> notifyListenersOnReplication="true"/>
> 
> <Channel 
> className="org.apache.catalina.tribes.group.GroupChannel">
> 
> <Receiver 
> className="org.apache.catalina.tribes.transport.nio.NioReceiver" 
> address="auto" port="4000" autoBind="100" selectorTimeout="5000" 
> maxThreads="6"/>
> 
> <Sender
> className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>
> <Transport
> className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>
> </Sender>
>
> <Interceptor
> className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
> encryptionKey="****Removed****" />
>
> <Interceptor
> className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>
>
> <Interceptor
> className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>
> <Interceptor
> className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
>
> <Interceptor
> className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
>
>  <Member 
> className="org.apache.catalina.tribes.membership.StaticMember" 
> host="server2" port="4000" domain="cluster1" 
> uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> 
> </Interceptor>
> 
> </Channel>
> 
> <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
> filter=""/>
> 
> <Valve 
> className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
> 
> <ClusterListener 
> className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>
>  </Cluster>
> 
> 
> 
> server2 [everything the same except the <Member/> is]:
> 
> <Member
> className="org.apache.catalina.tribes.membership.StaticMember" 
> host="server1" port="4000" domain="cluster1" 
> uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> 
> 
> 
> server3 [everything the same except the <Member/> is]:
> 
> <Member
> className="org.apache.catalina.tribes.membership.StaticMember" 
> host="server4" port="4000" domain="cluster2" 
> uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
> 
> 
> 
> server4 [everything the same except the <Member/> is]:
> 
> <Member
> className="org.apache.catalina.tribes.membership.StaticMember" 
> host="server3" port="4000" domain="cluster2" 
> uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>

I don't see any <Membership> element in your configuration. I think
you are missing this:

http://tomcat.apache.org/tomcat-9.0-doc/config/cluster-membership.html#Static_Membership_Attributes

The documentation for the static membership interceptor doesn't seem
to mention that the <Membership> element is required as well. There is
definitely an opportunity to improve some of the documentation in this
area.

(It's also theoretically possible for the cluster setup to perform a
sanity-check; if you have configured a StaticMembershipInterceptor,
you'd better have a StaticMembership <Membership> manager to go with
it. I'm not clear on why they are separate things that can even be
configured separately/improperly.)

-chris



Re: StaticMembers within Multiple Clusters

Posted by Tim K <ti...@gmail.com>.
On Wed, Jan 9, 2019, 2:16 PM Christopher Schultz <
chris@christopherschultz.net wrote:

>
> Tim,
>
> On 1/9/19 10:39, Tim K wrote:
> > I'm trying to split 4 separate tomcat instances into 2 clusters
> > (2x2) to try and avoid the all-to-all traffic, but even when
> > setting up the Receiver and Static members to only speak to 1 other
> > instance, some still seems to find and add the other members
> > outside of the defined config to the wrong cluster.  I read that
> > mcast is still used when you have StaticMembers, could that be
> > causing this issue?
>
> Multicast is only used for membership, so if you are using static,
> there should be no multicast.
>
> Do you want to post your configuration(s)?
>
> -chris
>


Essentially I'm trying to have server1 and server2 only in cluster1 and
server3 and server4 in only cluster2, but for some reason, members are
getting added to clusters that they aren't configured for.



server1 config:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">

                <Manager
className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false" notifyListenersOnReplication="true"/>

                <Channel
className="org.apache.catalina.tribes.group.GroupChannel">

                                <Receiver
className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto" port="4000" autoBind="100" selectorTimeout="5000"
maxThreads="6"/>

                                <Sender
className="org.apache.catalina.tribes.transport.ReplicationTransmitter">

                                                <Transport
className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>

                                </Sender>

                                <Interceptor
className="org.apache.catalina.tribes.group.interceptors.EncryptInterceptor"
encryptionKey="****Removed****" />

                                <Interceptor
className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor"/>

                                <Interceptor
className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>

                                <Interceptor
className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>

                                <Interceptor
className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">

                                                <Member
className="org.apache.catalina.tribes.membership.StaticMember"
host="server2" port="4000" domain="cluster1"
uniqueId="{1,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>

                                </Interceptor>

                </Channel>

                <Valve
className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>

                <Valve
className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>

                <ClusterListener
className="org.apache.catalina.ha.session.ClusterSessionListener"/>

</Cluster>



server2 [everything the same except the <Member/> is]:

<Member className="org.apache.catalina.tribes.membership.StaticMember"
host="server1" port="4000" domain="cluster1"
uniqueId="{0,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>



server3 [everything the same except the <Member/> is]:

<Member className="org.apache.catalina.tribes.membership.StaticMember"
host="server4" port="4000" domain="cluster2"
uniqueId="{4,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>



server4 [everything the same except the <Member/> is]:

<Member className="org.apache.catalina.tribes.membership.StaticMember"
host="server3" port="4000" domain="cluster2"
uniqueId="{3,0,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>

Re: StaticMembers within Multiple Clusters

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Tim,

On 1/9/19 10:39, Tim K wrote:
> I'm trying to split 4 separate tomcat instances into 2 clusters
> (2x2) to try and avoid the all-to-all traffic, but even when
> setting up the Receiver and Static members to only speak to 1 other
> instance, some still seems to find and add the other members
> outside of the defined config to the wrong cluster.  I read that
> mcast is still used when you have StaticMembers, could that be
> causing this issue?

Multicast is only used for membership, so if you are using static,
there should be no multicast.

Do you want to post your configuration(s)?

-chris
