Posted to users@tomcat.apache.org by Pid <p...@pidster.com> on 2006/06/21 18:32:58 UTC

Tomcat session replication/cluster

I'm seeing an issue on 5.5.17 with a 2 node cluster config.
When a context is reloaded, it sends the context node name incorrectly
to the cluster.
E.g. context is called "website1"

SEVERE: Context manager doesn't exist:website1website1

The config I'm using is exactly the same as the default from server.xml,
except the cluster is defined in Engine, rather than each Host.
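For clarity, the only structural change is the nesting, i.e. roughly this
shape (a trimmed sketch, not my full element):

<Engine name="Catalina" defaultHost="localhost">
  <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster" .../>
  <Host name="localhost" appBase="webapps" .../>
</Engine>

rather than the <Cluster> element sitting inside each <Host>.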




Filip Hanik - Dev Lists wrote:
> also, use Tomcat 5.5.17
> 
> Sean O'Reilly wrote:
>> Hi,
>>
>> I am trying to get in-memory session replication working and am testing
>> running 3 separate tomcat instances on the same server.
>>
>> I am using tomcat-5.5.15 and apache-2.0.54 with jk2.
>>
>> Whenever I run my test app, although it should be doing round-robin load
>> balancing, it doesn't switch to another instance of tomcat until the
>> eighth request, and it does not appear to have sent the session
>> information across, as the session ID changes.
>>
>> Here are my server.xml and workers2.properties files
>>
>> server.xml
>>
>> <Server port="8005" shutdown="SHUTDOWN">
>>
>>   <!-- Comment these entries out to disable JMX MBeans support used for
>>        the administration web application -->
>>   <Listener className="org.apache.catalina.core.AprLifecycleListener" />
>>   <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>   <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>   <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>
>>
>>   <!-- Global JNDI resources -->
>>   <GlobalNamingResources>
>>
>>     <!-- Test entry for demonstration purposes -->
>>     <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
>>
>>     <!-- Editable user database that can also be used by
>>          UserDatabaseRealm to authenticate users -->
>>     <Resource name="UserDatabase" auth="Container"
>>               type="org.apache.catalina.UserDatabase"
>>               description="User database that can be updated and saved"
>>               factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>               pathname="conf/tomcat-users.xml" />
>>
>>   </GlobalNamingResources>
>>
>>   <!-- A "Service" is a collection of one or more "Connectors" that
>>        share a single "Container" (and therefore the web applications
>>        visible within that Container).  Normally, that Container is an
>>        "Engine", but this is not required.
>>
>>        Note:  A "Service" is not itself a "Container", so you may not
>>        define subcomponents such as "Valves" or "Loggers" at this level.
>>    -->
>>
>>   <!-- Define the Tomcat Stand-Alone Service -->
>>   <Service name="Catalina">
>>
>>     <!-- A "Connector" represents an endpoint by which requests are
>>          received and responses are returned.  Each Connector passes
>>          requests on to the associated "Container" (normally an Engine)
>>          for processing.
>>
>>          By default, a non-SSL HTTP/1.1 Connector is established on
>>          port 8080.  You can also enable an SSL HTTP/1.1 Connector on
>>          port 8443 by following the instructions below and uncommenting
>>          the second Connector entry.  SSL support requires the following
>>          steps (see the SSL Config HOWTO in the Tomcat 5 documentation
>>          bundle for more detailed instructions):
>>          * If your JDK is version 1.3 or prior, download and install
>>            JSSE 1.0.2 or later, and put the JAR files into
>>            "$JAVA_HOME/jre/lib/ext".
>>          * Execute:
>>              %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA  (Windows)
>>              $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA   (Unix)
>>            with a password value of "changeit" for both the certificate
>>            and the keystore itself.
>>
>>          By default, DNS lookups are enabled when a web application
>>          calls request.getRemoteHost().  This can have an adverse impact
>>          on performance, so you can disable it by setting the
>>          "enableLookups" attribute to "false".  When DNS lookups are
>>          disabled, request.getRemoteHost() will return the String
>>          version of the IP address of the remote client.
>>     -->
>>
>>     <!-- Define a non-SSL HTTP/1.1 Connector on port 8080
>>     <Connector port="8080" maxHttpHeaderSize="8192"
>>                maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
>>                enableLookups="false" redirectPort="8443" acceptCount="100"
>>                connectionTimeout="20000" disableUploadTimeout="true" />
>>     -->
>>     <!-- Note : To disable connection timeouts, set connectionTimeout
>>          value to 0 -->
>>
>>     <!-- Note : To use gzip compression you could set the following
>>          properties :
>>
>>                compression="on"
>>                compressionMinSize="2048"
>>                noCompressionUserAgents="gozilla, traviata"
>>                compressableMimeType="text/html,text/xml"
>>     -->
>>
>>     <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>     <!--
>>     <Connector port="8443" maxHttpHeaderSize="8192"
>>                maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
>>                enableLookups="false" disableUploadTimeout="true"
>>                acceptCount="100" scheme="https" secure="true"
>>                clientAuth="false" sslProtocol="TLS" />
>>     -->
>>
>>     <!-- Define an AJP 1.3 Connector on port 8009 -->
>>     <Connector port="8009" enableLookups="false"
>>                redirectPort="8443" protocol="AJP/1.3" />
>>
>>     <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>     <!-- See proxy documentation for more information about using this. -->
>>     <!--
>>     <Connector port="8082" maxThreads="150"
>>                minSpareThreads="25" maxSpareThreads="75"
>>                enableLookups="false" acceptCount="100"
>>                connectionTimeout="20000"
>>                proxyPort="80" disableUploadTimeout="true" />
>>     -->
>>
>>     <!-- An Engine represents the entry point (within Catalina) that
>>          processes every request.  The Engine implementation for Tomcat
>>          stand alone analyzes the HTTP headers included with the request,
>>          and passes them on to the appropriate Host (virtual host). -->
>>
>>     <!-- You should set jvmRoute to support load-balancing via AJP ie : -->
>>     <Engine name="Standalone" defaultHost="localhost" jvmRoute="Tomcat5A">
>>
>>     <!-- Define the top level container in our container hierarchy
>>     <Engine name="Catalina" defaultHost="localhost"> -->
>>
>>       <!-- The request dumper valve dumps useful debugging information
>>            about the request headers and cookies that were received, and
>>            the response headers and cookies that were sent, for all
>>            requests received by this instance of Tomcat.  If you care
>>            only about requests to a particular virtual host, or a
>>            particular application, nest this element inside the
>>            corresponding <Host> or <Context> entry instead.
>>
>>            For a similar mechanism that is portable to all Servlet 2.4
>>            containers, check out the "RequestDumperFilter" Filter in the
>>            example application (the source for this filter may be found
>>            in "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>
>>            Request dumping is disabled by default.  Uncomment the
>>            following element to enable it. -->
>>       <!--
>>       <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
>>       -->
>>
>>       <!-- Because this Realm is here, an instance will be shared
>>            globally -->
>>
>>       <!-- This Realm uses the UserDatabase configured in the global
>>            JNDI resources under the key "UserDatabase".  Any edits
>>            that are performed against this UserDatabase are immediately
>>            available for use by the Realm.  -->
>>       <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>              resourceName="UserDatabase"/>
>>
>>       <!-- Comment out the old realm but leave here for now in case we
>>            need to go back quickly -->
>>       <!--
>>       <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>       -->
>>
>>       <!-- Replace the above Realm with one of the following to get a
>>            Realm stored in a database and accessed via JDBC -->
>>
>>       <!--
>>       <Realm className="org.apache.catalina.realm.JDBCRealm"
>>              driverName="org.gjt.mm.mysql.Driver"
>>              connectionURL="jdbc:mysql://localhost/authority"
>>              connectionName="test" connectionPassword="test"
>>              userTable="users" userNameCol="user_name"
>>              userCredCol="user_pass" userRoleTable="user_roles"
>>              roleNameCol="role_name" />
>>       -->
>>
>>       <!--
>>       <Realm className="org.apache.catalina.realm.JDBCRealm"
>>              driverName="oracle.jdbc.driver.OracleDriver"
>>              connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>              connectionName="scott" connectionPassword="tiger"
>>              userTable="users" userNameCol="user_name"
>>              userCredCol="user_pass" userRoleTable="user_roles"
>>              roleNameCol="role_name" />
>>       -->
>>
>>       <!--
>>       <Realm className="org.apache.catalina.realm.JDBCRealm"
>>              driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>              connectionURL="jdbc:odbc:CATALINA"
>>              userTable="users" userNameCol="user_name"
>>              userCredCol="user_pass" userRoleTable="user_roles"
>>              roleNameCol="role_name" />
>>       -->
>>
>>       <!-- Define the default virtual host
>>            Note: XML Schema validation will not work with Xerces 2.2.
>>        -->
>>       <Host name="localhost" appBase="webapps"
>>        unpackWARs="true" autoDeploy="true"
>>        xmlValidation="false" xmlNamespaceAware="false">
>>
>>         <!-- Defines a cluster for this node.
>>              By defining this element, every Manager will be changed,
>>              so when running a cluster, make sure that you only have
>>              webapps in there that need to be clustered, and remove
>>              the other ones.
>>              A cluster has the following parameters:
>>
>>              className = the fully qualified name of the cluster class
>>
>>              clusterName = a descriptive name for your cluster, can be
>>                            anything
>>
>>              mcastAddr = the multicast address, has to be the same for
>>                          all the nodes
>>
>>              mcastPort = the multicast port, has to be the same for all
>>                          the nodes
>>
>>              mcastBindAddr = bind the multicast socket to a specific
>>                              address
>>
>>              mcastTTL = the multicast TTL if you want to limit your
>>                         broadcast
>>
>>              mcastSoTimeout = the multicast read timeout
>>
>>              mcastFrequency = the number of milliseconds in between
>>                               sending an "I'm alive" heartbeat
>>
>>              mcastDropTime = the number of milliseconds before a node
>>                              is considered "dead" if no heartbeat is
>>                              received
>>
>>              tcpThreadCount = the number of threads to handle incoming
>>                               replication requests; optimal would be
>>                               the same amount of threads as nodes
>>
>>              tcpListenAddress = the listen address (bind address) for
>>                                 TCP cluster requests on this host, in
>>                                 case of multiple ethernet cards.
>>                                 "auto" means that the address becomes
>>                                 InetAddress.getLocalHost().getHostAddress()
>>
>>              tcpListenPort = the tcp listen port
>>
>>              tcpSelectorTimeout = the timeout (ms) for the
>>                                   Selector.select() method in case the
>>                                   OS has a wakeup bug in java.nio.
>>                                   Set to 0 for no timeout
>>
>>              printToScreen = true means that managers will also print
>>                              to std.out
>>
>>              expireSessionsOnShutdown = true means that sessions are
>>                                         expired when this node is shut
>>                                         down
>>
>>              useDirtyFlag = true means that we only replicate a session
>>                             after setAttribute or removeAttribute has
>>                             been called.
>>                             false means to replicate the session after
>>                             each request.
>>                             false means that replication would work for
>>                             the following piece of code (only for
>>                             SimpleTcpReplicationManager):
>>                             <%
>>                             HashMap map =
>>                             (HashMap)session.getAttribute("map");
>>                             map.put("key","value");
>>                             %>
>>
>>              replicationMode = can be either 'pooled', 'synchronous'
>>                                or 'asynchronous'.
>>                                * Pooled means that the replication
>>                                happens using several sockets in a
>>                                synchronous way, ie the data gets
>>                                replicated, then the request returns.
>>                                This is the same as the 'synchronous'
>>                                setting except it uses a pool of
>>                                sockets, hence it is multithreaded.
>>                                This is the fastest and safest
>>                                configuration. To use this, also
>>                                increase the nr of tcp threads that you
>>                                have dealing with replication.
>>                                * Synchronous means that the thread that
>>                                executes the request is also the thread
>>                                that replicates the data to the other
>>                                nodes, and will not return until all
>>                                nodes have received the information.
>>                                * Asynchronous means that there is a
>>                                specific 'sender' thread for each
>>                                cluster node, so the request thread will
>>                                queue the replication request into a
>>                                "smart" queue, and then return to the
>>                                client. The "smart" queue is a queue
>>                                where, when a session is added and the
>>                                same session already exists in the queue
>>                                from a previous request, that session
>>                                will be replaced in the queue instead of
>>                                replicating two requests. This almost
>>                                never happens, unless there is a large
>>                                network delay.
>>         -->
>>         <!--
>>             When configuring for clustering, you also add in a valve to
>>             catch all the requests coming in; at the end of the request,
>>             the session may or may not be replicated.
>>             A session is replicated if and only if all the conditions
>>             are met:
>>             1. useDirtyFlag is true or setAttribute or removeAttribute
>>                has been called AND
>>             2. a session exists (has been created)
>>             3. the request is not trapped by the "filter" attribute
>>
>>             The filter attribute is to filter out requests that could
>>             not modify the session, hence we don't replicate the
>>             session after the end of such a request.
>>             The filter is negative, ie anything you put in the filter
>>             you mean to filter out; no replication will be done on
>>             requests that match one of the filters.
>>             The filter attribute is delimited by ;, so you can't escape
>>             out ; even if you wanted to.
>>
>>             filter=".*\.gif;.*\.js;" means that we will not replicate
>>             the session after requests with the URI ending in .gif and
>>             .js are intercepted.
>>
>>             The deployer element can be used to deploy apps cluster
>>             wide. Currently the deployment only deploys/undeploys to
>>             working members in the cluster, so no WARs are copied upon
>>             startup of a broken node.
>>             The deployer watches a directory (watchDir) for WAR files
>>             when watchEnabled="true".
>>             When a new war file is added, the war gets deployed to the
>>             local instance, and then deployed to the other instances in
>>             the cluster.
>>             When a war file is deleted from the watchDir, the war is
>>             undeployed locally and cluster wide.
>>         -->
>>        
>>         <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>                  managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>                  expireSessionsOnShutdown="false"
>>                  useDirtyFlag="true"
>>                  notifyListenersOnReplication="true">
>>
>>             <Membership
>>                 className="org.apache.catalina.cluster.mcast.McastService"
>>                 mcastAddr="228.0.0.4"
>>                 mcastPort="45564"
>>                 mcastFrequency="500"
>>                 mcastDropTime="3000"/>
>>
>>             <Receiver
>>                 className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>                 tcpListenAddress="auto"
>>                 tcpListenPort="4001"
>>                 tcpSelectorTimeout="100"
>>                 tcpThreadCount="6"/>
>>
>>             <Sender
>>                 className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>                 replicationMode="pooled"
>>                 ackTimeout="15000"/>
>>
>>             <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>                    filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>
>>             <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>                       tempDir="/tmp/war-temp/"
>>                       deployDir="/tmp/war-deploy/"
>>                       watchDir="/tmp/war-listen/"
>>                       watchEnabled="false"/>
>>
>>             <ClusterListener
>>                 className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
>>         </Cluster>
>>
>>
>>         <!-- Normally, users must authenticate themselves to each web
>>              app individually.  Uncomment the following entry if you
>>              would like a user to be authenticated the first time they
>>              encounter a resource protected by a security constraint,
>>              and then have that user identity maintained across *all*
>>              web applications contained in this virtual host. -->
>>         <!--
>>         <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
>>         -->
>>
>>         <!-- Access log processes all requests for this virtual host.
>>              By default, log files are created in the "logs" directory
>>              relative to $CATALINA_HOME.  If you wish, you can specify
>>              a different directory with the "directory" attribute.
>>              Specify either a relative (to $CATALINA_HOME) or absolute
>>              path to the desired directory.
>>         -->
>>         <!--
>>         <Valve className="org.apache.catalina.valves.AccessLogValve"
>>                directory="logs" prefix="localhost_access_log."
>>                suffix=".txt" pattern="common" resolveHosts="false"/>
>>         -->
>>
>>         <!-- Access log processes all requests for this virtual host.
>>              By default, log files are created in the "logs" directory
>>              relative to $CATALINA_HOME.  If you wish, you can specify
>>              a different directory with the "directory" attribute.
>>              Specify either a relative (to $CATALINA_HOME) or absolute
>>              path to the desired directory.
>>              This access log implementation is optimized for maximum
>>              performance, but is hardcoded to support only the "common"
>>              and "combined" patterns.
>>         -->
>>         <!--
>>         <Valve className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>                directory="logs" prefix="localhost_access_log."
>>                suffix=".txt" pattern="common" resolveHosts="false"/>
>>         -->
>>
>>         <!-- Access log processes all requests for this virtual host.
>>              By default, log files are created in the "logs" directory
>>              relative to $CATALINA_HOME.  If you wish, you can specify
>>              a different directory with the "directory" attribute.
>>              Specify either a relative (to $CATALINA_HOME) or absolute
>>              path to the desired directory.
>>              This access log implementation is optimized for maximum
>>              performance, but is hardcoded to support only the "common"
>>              and "combined" patterns.
>>
>>              This valve uses an NIO direct Byte Buffer to asynchronously
>>              store the log.
>>         -->
>>         <!--
>>         <Valve className="org.apache.catalina.valves.ByteBufferAccessLogValve"
>>                directory="logs" prefix="localhost_access_log."
>>                suffix=".txt" pattern="common" resolveHosts="false"/>
>>         -->
>>
>>       </Host>
>>
>>     </Engine>
>>
>>   </Service>
>>
>> </Server>
>>
>>
>> workers2.properties
>>
>> [logger.apache2]
>> file="/etc/httpd/conf/logs/error.log"
>> level=INFO
>> debug=1
>>
>> # Config settings
>> [config]
>> file=/etc/httpd/conf/workers2.properties
>> debug=0
>>
>> # Shared memory file settings
>> [shm]
>> file=/etc/httpd/conf/jk2.shm
>> size=100000
>>
>> # Communication channel settings for "Tomcat5A"
>> [channel.socket:localhost:8009]
>> host=localhost
>> port=8009
>> tomcatId=Tomcat5A
>> group=balanced
>> lb_factor=1
>> route=Tomcat5A
>>
>>
>> # Declare a Tomcat5A worker
>> [ajp13:localhost:8009]
>> channel=channel.socket:Tomcat5A
>>
>>
>> # Communication channel settings for "Tomcat5B"
>> [channel.socket:localhost:8010]
>> host=localhost
>> port=8010
>> tomcatId=Tomcat5B
>> group=balanced
>> lb_factor=1
>> route=Tomcat5B
>>
>>
>> # Declare a Tomcat5B worker
>> [ajp13:localhost:8010]
>> channel=channel.socket:Tomcat5B
>>
>>
>> # Communication channel settings for "Tomcat5C"
>> [channel.socket:localhost:8011]
>> host=localhost
>> port=8011
>> tomcatId=Tomcat5C
>> group=balanced
>> lb_factor=1
>> route=Tomcat5C
>>
>>
>> # Declare a Tomcat5C worker
>> [ajp13:localhost:8011]
>> channel=channel.socket:Tomcat5C
>>
>> # Load balanced Worker
>> [lb:balanced]
>> worker=ajp13:localhost:8009
>> worker=ajp13:localhost:8010
>> worker=ajp13:localhost:8011
>> timeout=90
>> attempts=3
>> recovery=30
>> stickySession=0
>> noWorkerMsg=Server Busy please retry later.
>> noWorkerCodeMsg=503
>>
>> # URI mappings for the tomcat worker
>> # Map the "jsp-examples" web application context to the web server URI space
>> [uri:/jsp-examples/*]
>> info= Mapping for jsp-examples context for tomcat
>> context=/jsp-examples
>> group=balanced
>>
>> [shm]
>> file=/etc/httpd/conf/jk2.shm
>> size=1000000
>>
>> [uri:/servlets-examples/*]
>> context=/servlets-examples
>> group=balanced
>>
>> # Define a status worker
>> [status:]
>>
>> # Status URI mapping
>> [uri:/jkstatus/*]
>> group=status
>>
>>
>> Obviously the server.xml files on the other 2 instances of tomcat are
>> the same except the ports and jvmRoute have been changed.
>>
>>
>> Can anyone see where I am going wrong ?
>>
>> Thanks
>>
>>
>>
>>   
> 
> 



Re: Tomcat session replication/cluster

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Sean O'Reilly wrote:
> On Fri, 23 Jun 2006 10:00:36 -0500
> Filip Hanik - Dev Lists <de...@hanik.com> wrote:
>
>   
>> Sean O'Reilly wrote:
>>     
>>> On Fri, 23 Jun 2006 09:05:18 -0500
>>> Filip Hanik - Dev Lists <de...@hanik.com> wrote:
>>>
>>>   
>>>       
>>>>> Hi Guys,
>>>>>
>>>>> I appear to be finally getting somewhere with the in-memory state
>>>>> replication but am now getting the following error when starting
>>>>> up my tomcat instances.
>>>>>
>>>>> WARNING: Manager [/jsp-examples], requesting session state from
>>>>> org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
>>>>> alive=74436]. This operation will timeout if no session state has
>>>>> been received within 60 seconds. 23-Jun-2006 13:27:38
>>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>> waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
>>>>> state send at 23/06/06 13:26 received, timing out after 60,140 ms.
>>>>> 23-Jun-2006 13:27:38 org.apache.catalina.core.ApplicationContext
>>>>> log INFO: ContextListener: contextInitialized() 23-Jun-2006
>>>>> 13:27:38 org.apache.catalina.core.ApplicationContext log INFO:
>>>>> SessionListener: contextInitialized() 23-Jun-2006 13:27:38
>>>>> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
>>>>> on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain
>>>>> start INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006
>>>>> 13:27:38 org.apache.catalina.storeconfig.StoreLoader load INFO:
>>>>> Find registry server-registry.xml at classpath resource
>>>>> 23-Jun-2006 13:27:39 org.apache.catalina.startup.Catalina start
>>>>> INFO: Server startup in 67102 ms
>>>>>
>>>>> Can anyone point me in the right direction as to why the session
>>>>> state is not being replicated ?
>>>>>   
>>>>>       
>>>>>           
>>>> Two things to check:
>>>> 1. What does the other server log say, maybe there is an error
>>>> there, does the other server know of this server?
>>>> 2. your server.xml, you would need to provide us with a little bit
>>>> more info
>>>>
>>>> Filip
>>>>
>>> I might be being a bit thick here !!!
>>>
>>> I have 3 servers !
>>>
>>> One is running apache2, mod_jk2 and tomcat-5.5.17, the other two
>>> just have tomcat-5.5.17. Do i need to have apache and mod_jk2
>>> running on all servers ?
>>>
>>> I am sure it would be easier to use mod_proxy_balancer and
>>> mod_proxy_ajp but can't find any documentation anywhere.
>>>
>>> Thanks for the help so far guys.
>>>   
>>>       
>> Your problem is not related to Apache or mod_jk, it's strictly Tomcat:
>> session state transfer fails, and that's the path you need to pursue.
>> 1. Check all your tomcat logs
>> 2. Make sure that node discovery is working: you should have, in each
>> of your tomcat nodes, an info log statement saying it detected the
>> other two nodes.
>>
>> Filip
>>
>>
>>     
>
> Here is what is written to my logs
>
> 23-Jun-2006 16:26:38 org.apache.catalina.cluster.session.DeltaManager
> waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
> state send at 23/06/06 16:25 received, timing out after 60,121 ms.
> 23-Jun-2006 16:26:38 org.apache.catalina.core.ApplicationContext log
> INFO: ContextListener: contextInitialized() 23-Jun-2006 16:26:38
> org.apache.catalina.core.ApplicationContext log INFO: SessionListener:
> contextInitialized() 23-Jun-2006 16:26:39
> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
> on /0.0.0.0:8009 23-Jun-2006 16:26:39 org.apache.jk.server.JkMain start
> INFO: Jk running ID=0 time=0/230  config=null
> 23-Jun-2006 16:26:39 org.apache.catalina.storeconfig.StoreLoader load
> INFO: Find registry server-registry.xml at classpath resource
> 23-Jun-2006 16:26:39 org.apache.catalina.startup.Catalina start
> INFO: Server startup in 67074 ms
>   

I'm sure there is plenty more in the log earlier that you are kindly
omitting, and that is vital.

Filip



-- 


Filip Hanik



Re: Tomcat session replication/cluster

Posted by Pid <p...@pidster.com>.
How are you defining the cluster in your server.xml?

I did this recently, and had a bunch of small problems.
E.g. my server clocks weren't sync'd.

I found that enabling just the SimpleTcpCluster element (without all the
rest) helped me get up and running.
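E.g. something like this on its own (a minimal sketch - SimpleTcpCluster
fills in default Membership, Receiver and Sender elements for you, which
is what the "Add Default Cluster..." INFO lines in the startup log are):

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true"/>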


Sean O'Reilly wrote:
> On Fri, 23 Jun 2006 10:00:36 -0500
> Here is what is written to my logs
> 
> 23-Jun-2006 16:26:38 org.apache.catalina.cluster.session.DeltaManager
> waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
> state send at 23/06/06 16:25 received, timing out after 60,121 ms.
> 23-Jun-2006 16:26:38 org.apache.catalina.core.ApplicationContext log
> INFO: ContextListener: contextInitialized() 23-Jun-2006 16:26:38
> org.apache.catalina.core.ApplicationContext log INFO: SessionListener:
> contextInitialized() 23-Jun-2006 16:26:39
> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
> on /0.0.0.0:8009 23-Jun-2006 16:26:39 org.apache.jk.server.JkMain start
> INFO: Jk running ID=0 time=0/230  config=null
> 23-Jun-2006 16:26:39 org.apache.catalina.storeconfig.StoreLoader load
> INFO: Find registry server-registry.xml at classpath resource
> 23-Jun-2006 16:26:39 org.apache.catalina.startup.Catalina start
> INFO: Server startup in 67074 ms
> 



Re: Tomcat session replication/cluster

Posted by Sean O'Reilly <se...@secpay.com>.
On Fri, 23 Jun 2006 10:00:36 -0500
Filip Hanik - Dev Lists <de...@hanik.com> wrote:

> Sean O'Reilly wrote:
> > On Fri, 23 Jun 2006 09:05:18 -0500
> > Filip Hanik - Dev Lists <de...@hanik.com> wrote:
> >
> >   
> >>> Hi Guys,
> >>>
> >>> I appear to be finally getting somewhere with the in-memory state
> >>> replication but am now getting the following error when starting
> >>> up my tomcat instances.
> >>>
> >>> WARNING: Manager [/jsp-examples], requesting session state from
> >>> org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
> >>> alive=74436]. This operation will timeout if no session state has
> >>> been received within 60 seconds. 23-Jun-2006 13:27:38
> >>> org.apache.catalina.cluster.session.DeltaManager
> >>> waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
> >>> state send at 23/06/06 13:26 received, timing out after 60,140 ms.
> >>> 23-Jun-2006 13:27:38 org.apache.catalina.core.ApplicationContext
> >>> log INFO: ContextListener: contextInitialized() 23-Jun-2006
> >>> 13:27:38 org.apache.catalina.core.ApplicationContext log INFO:
> >>> SessionListener: contextInitialized() 23-Jun-2006 13:27:38
> >>> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
> >>> on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain
> >>> start INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006
> >>> 13:27:38 org.apache.catalina.storeconfig.StoreLoader load INFO:
> >>> Find registry server-registry.xml at classpath resource
> >>> 23-Jun-2006 13:27:39 org.apache.catalina.startup.Catalina start
> >>> INFO: Server startup in 67102 ms
> >>>
> >>> Can anyone point me in the right direction as to why the session
> >>> state is not being replicated ?
> >>>   
> >>>       
> >> Two things to check:
> >> 1. What does the other server log say, maybe there is an error
> >> there, does the other server know of this server?
> >> 2. your server.xml, you would need to provide us with a little bit
> >> more info
> >>
> >> Filip
> >>
> >
> > I might be being a bit thick here !!!
> >
> > I have 3 servers !
> >
> > One is running apache2, mod_jk2 and tomcat-5.5.17, the other two
> > just have tomcat-5.5.17. Do i need to have apache and mod_jk2
> > running on all servers ?
> >
> > I am sure it would be easier to use mod_proxy_balancer and
> > mod_proxy_ajp but can't find any documentation anywhere.
> >
> > Thanks for the help so far guys.
> >   
> Your problem is not related to Apache or mod_jk, it's strictly Tomcat:
> session state transfer fails, and that's the path you need to pursue.
> 1. Check all your tomcat logs
> 2. Make sure that node discovery is working: you should have, in each
> of your tomcat nodes, an info log statement saying it detected the
> other two nodes.
> 
> Filip
> 
> 

Here is what is written to my logs

23-Jun-2006 16:26:38 org.apache.catalina.cluster.session.DeltaManager
waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
state send at 23/06/06 16:25 received, timing out after 60,121 ms.
23-Jun-2006 16:26:38 org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized() 23-Jun-2006 16:26:38
org.apache.catalina.core.ApplicationContext log INFO: SessionListener:
contextInitialized() 23-Jun-2006 16:26:39
org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
on /0.0.0.0:8009 23-Jun-2006 16:26:39 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/230  config=null
23-Jun-2006 16:26:39 org.apache.catalina.storeconfig.StoreLoader load
INFO: Find registry server-registry.xml at classpath resource
23-Jun-2006 16:26:39 org.apache.catalina.startup.Catalina start
INFO: Server startup in 67074 ms

-- 
Sean O'Reilly
Systems Administrator
SECPay Ltd

http://www.secpay.com

s.oreilly@secpay.com

Mobile 07917 463906

DDI 01732 300212




Re: Tomcat session replication/cluster

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Sean O'Reilly wrote:
> On Fri, 23 Jun 2006 09:05:18 -0500
> Filip Hanik - Dev Lists <de...@hanik.com> wrote:
>
>   
>>> Hi Guys,
>>>
>>> I appear to be finally getting somewhere with the in-memory state
>>> replication but am now getting the following error when starting up
>>> my tomcat instances.
>>>
>>> WARNING: Manager [/jsp-examples], requesting session state from
>>> org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
>>> alive=74436]. This operation will timeout if no session state has
>>> been received within 60 seconds. 23-Jun-2006 13:27:38
>>> org.apache.catalina.cluster.session.DeltaManager
>>> waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
>>> state send at 23/06/06 13:26 received, timing out after 60,140 ms.
>>> 23-Jun-2006 13:27:38 org.apache.catalina.core.ApplicationContext
>>> log INFO: ContextListener: contextInitialized() 23-Jun-2006 13:27:38
>>> org.apache.catalina.core.ApplicationContext log INFO:
>>> SessionListener: contextInitialized() 23-Jun-2006 13:27:38
>>> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
>>> on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain
>>> start INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006
>>> 13:27:38 org.apache.catalina.storeconfig.StoreLoader load INFO:
>>> Find registry server-registry.xml at classpath resource 23-Jun-2006
>>> 13:27:39 org.apache.catalina.startup.Catalina start INFO: Server
>>> startup in 67102 ms
>>>
>>> Can anyone point me in the right direction as to why the session
>>> state is not being replicated ?
>>>   
>>>       
>> Two things to check:
>> 1. What does the other server log say, maybe there is an error there, 
>> does the other server know of this server?
>> 2. your server.xml, you would need to provide us with a little bit
>> more info
>>
>> Filip
>>
>
> I might be being a bit thick here !!!
>
> I have 3 servers !
>
> One is running apache2, mod_jk2 and tomcat-5.5.17, the other two just
> have tomcat-5.5.17. Do i need to have apache and mod_jk2 running on all
> servers ?
>
> I am sure it would be easier to use mod_proxy_balancer and
> mod_proxy_ajp but can't find any documentation anywhere.
>
> Thanks for the help so far guys.
>   
Your problem is not related to Apache or mod_jk, it's strictly Tomcat:
session state transfer fails, and that's the path you need to pursue.
1. Check all your tomcat logs
2. Make sure that node discovery is working: you should have, in each of
your tomcat nodes, an info log statement saying it detected the other two
nodes.
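The line looks something like this, and you should see one per remote node:

INFO: Replication member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
alive=...]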

Filip


-- 


Filip Hanik



Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Pid <p...@pidster.com>.
Could well be.
It depends on your setup...
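The trace below is the sender side failing to even open the replication
socket (DataSender.createSocket), so anything blocking the Receiver's
tcpListenPort (4001 in your config) between the nodes - a firewall
included - would produce exactly this.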


> ################################################################################################
> WARNING: Unable to asynchronously send session with
> id=[88798A041EC3F104045E5C22B47ADE77.jvm1-1151322148155] - message will
> be ignored.
> java.net.ConnectException: Connection refused
>         at java.net.PlainSocketImpl.socketConnect(Native Method)
>         at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
>         at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
>         at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
>         at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>         at java.net.Socket.connect(Socket.java:507)
>         at java.net.Socket.connect(Socket.java:457)
>         at java.net.Socket.<init>(Socket.java:365)
>         at java.net.Socket.<init>(Socket.java:207)
>         at org.apache.catalina.cluster.tcp.DataSender.createSocket(DataSender.java:704)
>         at org.apache.catalina.cluster.tcp.DataSender.openSocket(DataSender.java:679)
>         at org.apache.catalina.cluster.tcp.DataSender.pushMessage(DataSender.java:803)
>         at org.apache.catalina.cluster.tcp.FastAsyncSocketSender$FastQueueThread.pushQueuedMessages(FastAsyncSocketSender.java:476)
>         at org.apache.catalina.cluster.tcp.FastAsyncSocketSender$FastQueueThread.run(FastAsyncSocketSender.java:442)
> #########################################################################################################################
> 
> Could this be a firewall problem on one of the receiving servers
> 



Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Sean O'Reilly <se...@secpay.com>.
On Mon, 26 Jun 2006 11:46:43 +0100
Pid <p...@pidster.com> wrote:

> 
> 
> Sean O'Reilly wrote:
> 
> > Still having some problems with load balancing and state replication,
> > neither of which appear to be working. If I shutdown tomcat on the
> > main server I can still get to the application directory from one of
> > the other servers, but get a 503 error if I try to run any of the
> > applications ??
> 
> Where's the 503 coming from, Apache or Tomcat?
> 
> 
> 
> 

An exciting new error

################################################################################################
WARNING: Unable to asynchronously send session with
id=[88798A041EC3F104045E5C22B47ADE77.jvm1-1151322148155] - message will
be ignored.
java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:507)
        at java.net.Socket.connect(Socket.java:457)
        at java.net.Socket.<init>(Socket.java:365)
        at java.net.Socket.<init>(Socket.java:207)
        at org.apache.catalina.cluster.tcp.DataSender.createSocket(DataSender.java:704)
        at org.apache.catalina.cluster.tcp.DataSender.openSocket(DataSender.java:679)
        at org.apache.catalina.cluster.tcp.DataSender.pushMessage(DataSender.java:803)
        at org.apache.catalina.cluster.tcp.FastAsyncSocketSender$FastQueueThread.pushQueuedMessages(FastAsyncSocketSender.java:476)
        at org.apache.catalina.cluster.tcp.FastAsyncSocketSender$FastQueueThread.run(FastAsyncSocketSender.java:442)
#########################################################################################################################

Could this be a firewall problem on one of the receiving servers

-- 
Sean O'Reilly
Systems Administrator
SECPay Ltd

http://www.secpay.com

s.oreilly@secpay.com

Mobile 07917 463906

DDI 01732 300212




Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Pid <p...@pidster.com>.

Sean O'Reilly wrote:

> Still having some problems with load balancing and state replication,
> neither of which appear to be working. If I shutdown tomcat on the
> main server I can still get to the application directory from one of
> the other servers, but get a 503 error if I try to run any of the
> applications ??

Where's the 503 coming from, Apache or Tomcat?
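(A quick way to tell: the body of the error page usually gives it away -
Apache's stock 503 is the plain "Service Temporarily Unavailable" page,
while Tomcat renders its own "HTTP Status ..." report page.)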






Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Pid <p...@pidster.com>.
(I've manually configured the tcpListenAddress to be the node's main IP.
Obviously it's not 000.000.000.000.)
I've specified JvmRouteBinderValve, JvmRouteSessionIDBinderListener and
ClusterSessionListener.


<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         clusterName="ServerCluster01"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true"
         notifyListenersOnReplication="true">

    <Membership
        className="org.apache.catalina.cluster.mcast.McastService"
        mcastAddr="228.0.0.4"
        mcastPort="45564"
        mcastFrequency="500"
        mcastDropTime="3000"/>

    <Receiver
        className="org.apache.catalina.cluster.tcp.ReplicationListener"
        tcpListenAddress="000.000.000.000"
        tcpListenPort="4001"
        tcpSelectorTimeout="100"
        tcpThreadCount="6"/>

    <Sender
        className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
        replicationMode="pooled"
        autoConnect="true"
        ackTimeout="15000"
        waitForAck="true"/>

    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

    <Valve className="org.apache.catalina.cluster.session.JvmRouteBinderValve"
           enabled="true"/>

    <ClusterListener
        className="org.apache.catalina.cluster.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener
        className="org.apache.catalina.cluster.session.ClusterSessionListener" />

</Cluster>
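(The JvmRouteBinderValve is the failover piece: when a request with a
session arrives on a node other than the one named in the session id's
jvmRoute suffix, it rewrites that suffix so the load balancer sticks to
the new node from then on.)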



Sean O'Reilly wrote:
> On Fri, 23 Jun 2006 16:43:16 +0100
> Pid <p...@pidster.com> wrote:
> 
>>
>> Sean O'Reilly wrote:
>>> On Fri, 23 Jun 2006 09:05:18 -0500
>>> Filip Hanik - Dev Lists <de...@hanik.com> wrote:
>>> I am sure it would be easier to use mod_proxy_balancer and
>>> mod_proxy_ajp but can't find any documentation anywhere.
>> My servers didn't have their clocks synchronised, check that.
>> Also try using the most basic cluster config to start with and work up
>> to more complex variations.
>>
>> There's not much to configure for proxy_ajp / balancer.
>> We're running Apache2.2 + Tomcat 5.5.17 + with mod_proxy_ajp &c.
>> AJP needs no config, which is nice.
>> Balancer also needs very little, see mod_proxy for details.
>> The route=TCS1 parameter is the jvmRoute set in the tomcat Engine.
>>
>> ### put this in your Apache vhost/conf
>>
>> ReWriteEngine     on
>> ProxyPreserveHost On
>> ProxyRequests     Off
>> ProxyVia          Off
>>
>> <Proxy balancer://mycluster>
>>    BalancerMember ajp://tomcat1:8009 smax=10 loadfactor=10 route=TCS1
>>    BalancerMember ajp://tomcat2:8009 smax=10 loadfactor=10 route=TCS2
>> </Proxy>
>>
>> RewriteRule ^\/(.+)\.jsp(.+)? balancer://mycluster/$1.jsp$2 [P,L]
>>
>> ProxyPass /favicon.ico !
>> ProxyPass /robots.txt  !
>> ProxyPass /images/     !
>> ProxyPass /forms/ balancer://mycluster/forms/ \
>> maxattempts=1 lbmethod=bytraffic stickysession=JSESSIONID
>>
>>
>>
>>
>>> Thanks for the help so far guys.
>>>
>> ---------------------------------------------------------------------
>> To start a new topic, e-mail: users@tomcat.apache.org
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
> ok here are latest config files
> 
> server.xml (cluster configuration)
> 
> <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>                 name="cluster"
>                 debug="10"
>                 serviceclass="org.apache.catalina.cluster.mcast.McastService"
>                 mcastAddress="228.0.0.4"
>                 mcastPort="45564"
>                 mcastFrequency="500"
>                 mcastDroptime="3000"
>                 tcpThreadCount="6"
>                 tcpListenAddress="auto"
>                 tcpListenPort="4001"
>                 tcpSelectorTimeout="100"
>                 printToScreen="false"
>                 expireSessionsOnShutdown="false"
>                 useDirtyFlag="true"
>                 replicationMode="synchronous"/>
> 
> <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
>                    filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
> ###################################################################################################
> connector.conf
> 
> ReWriteEngine   On
> ProxyPreserveHost       On
> ProxyRequests   Off
> ProxyVia        Off
> 
> <Proxy balancer://secpay_cluster>
>         BalancerMember ajp://localhost:8009 smax=10 loadfactor=1 route=jvm1
>         BalancerMember ajp://192.168.4.3:8009 smax=10 loadfactor=1 route=jvm2
>         BalancerMember ajp://192.168.4.1:8009 smax=10 loadfactor=1 route=jvm3
> </Proxy>
> 
> RewriteRule ^\/(.+)\.jsp(.+)? balancer://secpay_cluster/$1.jsp$2 [P,L]
> 
> ProxyPass /favicon.ico !
> ProxyPass /robots.txt  !
> ProxyPass /images/     !
> ProxyPass /jsp-examples/* balancer://secpay_cluster/jsp-examples/ \
>         maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
> ProxyPassReverse /jsp-example/* balancer://secpay_cluster/jsp-examples/ \
>         maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
> ProxyPass /servlets-examples/* balancer://secpay_cluster/servlets-examples/ \
>         maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
> ProxyPassReverse /servlets-examples/* balancer://secpay_cluster/servlets-examples/ \
>         maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
> 
> ##############################################################################
> startup logs from one of the servers
> 
> 26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> createDefaultClusterListener
> INFO: Add Default ClusterListener at cluster localhost
> 26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> createDefaultClusterReceiver
> INFO: Add Default ClusterReceiver at cluster localhost
> 26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> createDefaultClusterSender
> INFO: Add Default ClusterSender at cluster localhost
> 26-Jun-2006 11:13:45
> org.apache.catalina.cluster.tcp.SocketReplicationListener
> createServerSocket
> INFO: Open Socket at [127.0.0.1:8015]
> 26-Jun-2006 11:13:45
> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
> INFO: Start ClusterSender at cluster
> Standalone:type=Cluster,host=localhost with name
> Standalone:type=ClusterSender,host=localhost
> 26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> createDefaultMembershipService
> INFO: Add Default Membership Service at cluster localhost
> 26-Jun-2006 11:13:45 org.apache.catalina.cluster.mcast.McastService
> start
> INFO: Sleeping for 4000 milliseconds to establish cluster membership
> 26-Jun-2006 11:13:46 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> memberAdded
> INFO: Replication member
> added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
> alive=154886]
> 26-Jun-2006 11:13:46
> org.apache.catalina.cluster.tcp.FastAsyncSocketSender checkThread
> INFO: Create sender [/192.168.4.121:8,015] queue thread to tcp
> background replication
> 26-Jun-2006 11:13:49 org.apache.catalina.cluster.mcast.McastService
> registerMBean
> INFO: membership mbean registered
> (Standalone:type=ClusterMembership,host=localhost)
> 26-Jun-2006 11:13:51 org.apache.catalina.core.ApplicationContext log
> INFO: org.apache.webapp.balancer.BalancerFilter: init(): ruleChain:
> [org.apache.webapp.balancer.RuleChain:
> [org.apache.webapp.balancer.rules.URLStringMatchRule: Target string: News / Redirect URL: http://www.cnn.com], [org.apache.webapp.balancer.rules.RequestParameterRule: Target param name: paramName / Target param value: paramValue / Redirect URL: http://www.yahoo.com], [org.apache.webapp.balancer.rules.AcceptEverythingRule: Redirect URL: http://jakarta.apache.org]]
> 26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
> start
> INFO: Register manager /jsp-examples to cluster element Host with name
> localhost
> 26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
> start
> INFO: Starting clustering manager at /jsp-examples
> 26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
> getAllClusterSessions
> WARNING: Manager [/jsp-examples], requesting session state from
> org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
> alive=159907]. This operation will timeout if no session state has been received within 60 seconds.
> 26-Jun-2006 11:14:51 org.apache.catalina.cluster.session.DeltaManager
> waitForSendAllSessions
> SEVERE: Manager [/jsp-examples]: No session state send at 26/06/06
> 11:13 received, timing out after 60,137 ms.
> 26-Jun-2006 11:14:51 org.apache.catalina.core.ApplicationContext log
> INFO: ContextListener: contextInitialized()
> 26-Jun-2006 11:14:51 org.apache.catalina.core.ApplicationContext log
> INFO: SessionListener: contextInitialized()
> 26-Jun-2006 11:14:52 org.apache.catalina.core.ApplicationContext log
> INFO: ContextListener: contextInitialized()
> 26-Jun-2006 11:14:52 org.apache.catalina.core.ApplicationContext log
> INFO: SessionListener: contextInitialized()
> 26-Jun-2006 11:14:52 org.apache.jk.common.ChannelSocket init
> INFO: JK: ajp13 listening on /0.0.0.0:8009
> 26-Jun-2006 11:14:52 org.apache.jk.server.JkMain start
> INFO: Jk running ID=0 time=0/144  config=null
> 26-Jun-2006 11:14:53 org.apache.catalina.storeconfig.StoreLoader load
> INFO: Find registry server-registry.xml at classpath resource
> 26-Jun-2006 11:14:53 org.apache.catalina.startup.Catalina start
> INFO: Server startup in 68534 ms
> 26-Jun-2006 11:19:57 org.apache.catalina.cluster.tcp.SimpleTcpCluster
> memberDisappeared
> INFO: Received member
> disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
> alive=495495]
> 26-Jun-2006 11:19:57 org.apache.catalina.cluster.util.FastQueue remove
> INFO: FastQueue.remove: Remove aborted although queue enabled
> 
> 
> Still having some problems with load balancing and state replication,
> neither of which appear to be working. If I shutdown tomcat on the
> main server I can still get to the application directory from one of
> the other servers, but get a 503 error if I try to run any of the
> applications ??
> 
> 
> 
> 



Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Sean O'Reilly <se...@secpay.com>.
On Fri, 23 Jun 2006 16:43:16 +0100
Pid <p...@pidster.com> wrote:

> 
> 
> Sean O'Reilly wrote:
> > On Fri, 23 Jun 2006 09:05:18 -0500
> > Filip Hanik - Dev Lists <de...@hanik.com> wrote:
> > I am sure it would be easier to use mod_proxy_balancer and
> > mod_proxy_ajp but can't find any documentation anywhere.
> 
> My servers didn't have their clocks synchronised, check that.
> Also try using the most basic cluster config to start with and work up
> to more complex variations.
> 
> There's not much to configure for proxy_ajp / balancer.
> We're running Apache2.2 + Tomcat 5.5.17 + with mod_proxy_ajp &c.
> AJP needs no config, which is nice.
> Balancer also needs very little, see mod_proxy for details.
> The route=TCS1 parameter is the jvmRoute set in the tomcat Engine.
> 
> ### put this in your Apache vhost/conf
> 
> ReWriteEngine     on
> ProxyPreserveHost On
> ProxyRequests     Off
> ProxyVia          Off
> 
> <Proxy balancer://mycluster>
>    BalancerMember ajp://tomcat1:8009 smax=10 loadfactor=10 route=TCS1
>    BalancerMember ajp://tomcat2:8009 smax=10 loadfactor=10 route=TCS2
> </Proxy>
> 
> RewriteRule ^\/(.+)\.jsp(.+)? balancer://mycluster/$1.jsp$2 [P,L]
> 
> ProxyPass /favicon.ico !
> ProxyPass /robots.txt  !
> ProxyPass /images/     !
> ProxyPass /forms/ balancer://mycluster/forms/ \
> maxattempts=1 lbmethod=bytraffic stickysession=JSESSIONID
> 
> 
> 
> 
> > Thanks for the help so far guys.
> > 
> 
ok here are latest config files

server.xml (cluster configuration)

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
                name="cluster"
                debug="10"
                serviceclass="org.apache.catalina.cluster.mcast.McastService"
                mcastAddress="228.0.0.4"
                mcastPort="45564"
                mcastFrequency="500"
                mcastDroptime="3000"
                tcpThreadCount="6"
                tcpListenAddress="auto"
                tcpListenPort="4001"
                tcpSelectorTimeout="100"
                printToScreen="false"
                expireSessionsOnShutdown="false"
                useDirtyFlag="true"
                replicationMode="synchronous"/>

<Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
                   filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
###################################################################################################
connector.conf

ReWriteEngine   On
ProxyPreserveHost       On
ProxyRequests   Off
ProxyVia        Off

<Proxy balancer://secpay_cluster>
        BalancerMember ajp://localhost:8009 smax=10 loadfactor=1 route=jvm1
        BalancerMember ajp://192.168.4.3:8009 smax=10 loadfactor=1 route=jvm2
        BalancerMember ajp://192.168.4.1:8009 smax=10 loadfactor=1 route=jvm3
</Proxy>

RewriteRule ^\/(.+)\.jsp(.+)? balancer://secpay_cluster/$1.jsp$2 [P,L]

ProxyPass /favicon.ico !
ProxyPass /robots.txt  !
ProxyPass /images/     !
ProxyPass /jsp-examples/* balancer://secpay_cluster/jsp-examples/ \
        maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
ProxyPassReverse /jsp-examples/* balancer://secpay_cluster/jsp-examples/ \
        maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
ProxyPass /servlets-examples/* balancer://secpay_cluster/servlets-examples/ \
        maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
ProxyPassReverse /servlets-examples/* balancer://secpay_cluster/servlets-examples/ \
        maxattempts=1 lbmethod=byrequests nofailover=off stickysession=JSESSIONID
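
(On how the sticky routing above is meant to work, as far as I
understand it: with jvmRoute set on each Tomcat Engine, Tomcat appends
"." plus the jvmRoute to the session id, and mod_proxy_balancer matches
that suffix against the route= values on the BalancerMembers, e.g.

    JSESSIONID=0AAB6C8DE415E2E5F307CF334BFCA725.jvm1

so if the cookie never carries a ".jvm1"-style suffix, the jvmRoute and
route= values aren't lining up.)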

##############################################################################
startup logs from one of the servers

26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
createDefaultClusterListener
INFO: Add Default ClusterListener at cluster localhost
26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
createDefaultClusterReceiver
INFO: Add Default ClusterReceiver at cluster localhost
26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
createDefaultClusterSender
INFO: Add Default ClusterSender at cluster localhost
26-Jun-2006 11:13:45
org.apache.catalina.cluster.tcp.SocketReplicationListener
createServerSocket
INFO: Open Socket at [127.0.0.1:8015]
26-Jun-2006 11:13:45
org.apache.catalina.cluster.tcp.ReplicationTransmitter start
INFO: Start ClusterSender at cluster
Standalone:type=Cluster,host=localhost with name
Standalone:type=ClusterSender,host=localhost
26-Jun-2006 11:13:45 org.apache.catalina.cluster.tcp.SimpleTcpCluster
createDefaultMembershipService
INFO: Add Default Membership Service at cluster localhost
26-Jun-2006 11:13:45 org.apache.catalina.cluster.mcast.McastService
start
INFO: Sleeping for 4000 milliseconds to establish cluster membership
26-Jun-2006 11:13:46 org.apache.catalina.cluster.tcp.SimpleTcpCluster
memberAdded
INFO: Replication member
added:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
alive=154886]
26-Jun-2006 11:13:46
org.apache.catalina.cluster.tcp.FastAsyncSocketSender checkThread
INFO: Create sender [/192.168.4.121:8,015] queue thread to tcp
background replication
26-Jun-2006 11:13:49 org.apache.catalina.cluster.mcast.McastService
registerMBean
INFO: membership mbean registered
(Standalone:type=ClusterMembership,host=localhost)
26-Jun-2006 11:13:51 org.apache.catalina.core.ApplicationContext log
INFO: org.apache.webapp.balancer.BalancerFilter: init(): ruleChain:
[org.apache.webapp.balancer.RuleChain:
[org.apache.webapp.balancer.rules.URLStringMatchRule: Target string: News / Redirect URL: http://www.cnn.com], [org.apache.webapp.balancer.rules.RequestParameterRule: Target param name: paramName / Target param value: paramValue / Redirect URL: http://www.yahoo.com], [org.apache.webapp.balancer.rules.AcceptEverythingRule: Redirect URL: http://jakarta.apache.org]]
26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
start
INFO: Register manager /jsp-examples to cluster element Host with name
localhost
26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
start
INFO: Starting clustering manager at /jsp-examples
26-Jun-2006 11:13:51 org.apache.catalina.cluster.session.DeltaManager
getAllClusterSessions
WARNING: Manager [/jsp-examples], requesting session state from
org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
alive=159907]. This operation will timeout if no session state has been received within 60 seconds.
26-Jun-2006 11:14:51 org.apache.catalina.cluster.session.DeltaManager
waitForSendAllSessions
SEVERE: Manager [/jsp-examples]: No session state send at 26/06/06
11:13 received, timing out after 60,137 ms.
26-Jun-2006 11:14:51 org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
26-Jun-2006 11:14:51 org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
26-Jun-2006 11:14:52 org.apache.catalina.core.ApplicationContext log
INFO: ContextListener: contextInitialized()
26-Jun-2006 11:14:52 org.apache.catalina.core.ApplicationContext log
INFO: SessionListener: contextInitialized()
26-Jun-2006 11:14:52 org.apache.jk.common.ChannelSocket init
INFO: JK: ajp13 listening on /0.0.0.0:8009
26-Jun-2006 11:14:52 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/144  config=null
26-Jun-2006 11:14:53 org.apache.catalina.storeconfig.StoreLoader load
INFO: Find registry server-registry.xml at classpath resource
26-Jun-2006 11:14:53 org.apache.catalina.startup.Catalina start
INFO: Server startup in 68534 ms
26-Jun-2006 11:19:57 org.apache.catalina.cluster.tcp.SimpleTcpCluster
memberDisappeared
INFO: Received member
disappeared:org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:8015,catalina,192.168.4.121,8015,
alive=495495]
26-Jun-2006 11:19:57 org.apache.catalina.cluster.util.FastQueue remove
INFO: FastQueue.remove: Remove aborted although queue enabled


Still having some problems with load balancing and state replication,
neither of which appears to be working. If I shut down Tomcat on the
main server I can still get to the application directory from one of
the other servers, but get a 503 error if I try to run any of the
applications?




-- 
Sean O'Reilly
Systems Administrator
SECPay Ltd

http://www.secpay.com

s.oreilly@secpay.com

Mobile 07917 463906

DDI 01732 300212




Re: Tomcat session replication/cluster (mod_proxy_ajp)

Posted by Pid <p...@pidster.com>.

Sean O'Reilly wrote:
> On Fri, 23 Jun 2006 09:05:18 -0500
> Filip Hanik - Dev Lists <de...@hanik.com> wrote:
> I am sure it would be easier to use mod_proxy_balancer and
> mod_proxy_ajp, but I can't find any documentation anywhere.

My servers didn't have their clocks synchronised, check that.
Also try using the most basic cluster config to start with and work up
to more complex variations.

There's not much to configure for proxy_ajp / balancer.
We're running Apache 2.2 + Tomcat 5.5.17 with mod_proxy_ajp &c.
AJP needs no config, which is nice.
Balancer also needs very little, see mod_proxy for details.
The route=TCS1 parameter is the jvmRoute set on the Tomcat Engine (see
the Engine snippet after the config below).

### put this in your Apache vhost/conf

RewriteEngine     on
ProxyPreserveHost On
ProxyRequests     Off
ProxyVia          Off

<Proxy balancer://mycluster>
   BalancerMember ajp://tomcat1:8009 smax=10 loadfactor=10 route=TCS1
   BalancerMember ajp://tomcat2:8009 smax=10 loadfactor=10 route=TCS2
</Proxy>

RewriteRule ^\/(.+)\.jsp(.+)? balancer://mycluster/$1.jsp$2 [P,L]

ProxyPass /favicon.ico !
ProxyPass /robots.txt  !
ProxyPass /images/     !
ProxyPass /forms/ balancer://mycluster/forms/ \
maxattempts=1 lbmethod=bytraffic stickysession=JSESSIONID
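
### and the matching route, set as jvmRoute on each Tomcat Engine
### (a sketch using the names above: TCS1 on tomcat1, TCS2 on tomcat2)

<Engine name="Catalina" defaultHost="localhost" jvmRoute="TCS1">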




> Thanks for the help so far guys.
> 



Re: Tomcat session replication/cluster

Posted by Sean O'Reilly <se...@secpay.com>.
On Fri, 23 Jun 2006 09:05:18 -0500
Filip Hanik - Dev Lists <de...@hanik.com> wrote:

> 
> > Hi Guys,
> >
> > I appear to be finally getting somewhere with the in-memory state
> > replication but am now getting the following error when starting up
> > my tomcat instances:
> >
> > WARNING: Manager [/jsp-examples], requesting session state from
> > org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
> > alive=74436]. This operation will timeout if no session state has
> > been received within 60 seconds. 23-Jun-2006 13:27:38
> > org.apache.catalina.cluster.session.DeltaManager
> > waitForSendAllSessions SEVERE: Manager [/jsp-examples]: No session
> > state send at 23/06/06 13:26 received, timing out after 60,140 ms.
> > 23-Jun-2006 13:27:38 org.apache.catalina.core.ApplicationContext
> > log INFO: ContextListener: contextInitialized() 23-Jun-2006 13:27:38
> > org.apache.catalina.core.ApplicationContext log INFO:
> > SessionListener: contextInitialized() 23-Jun-2006 13:27:38
> > org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
> > on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain
> > start INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006
> > 13:27:38 org.apache.catalina.storeconfig.StoreLoader load INFO:
> > Find registry server-registry.xml at classpath resource 23-Jun-2006
> > 13:27:39 org.apache.catalina.startup.Catalina start INFO: Server
> > startup in 67102 ms
> >
> > Can anyone point me in the right direction as to why the session
> > state is not being replicated?
> >   
> Two things to check:
> 1. What does the other server log say, maybe there is an error there, 
> does the other server know of this server?
> 2. your server.xml, you would need to provide us with a little bit
> more info
> 
> Filip
> 

I might be being a bit thick here!

I have 3 servers!

One is running apache2, mod_jk2 and tomcat-5.5.17; the other two just
have tomcat-5.5.17. Do I need to have apache and mod_jk2 running on all
servers?

I am sure it would be easier to use mod_proxy_balancer and
mod_proxy_ajp, but I can't find any documentation anywhere.

Thanks for the help so far guys.

-- 
Sean O'Reilly
Systems Administrator
SECPay Ltd

http://www.secpay.com

s.oreilly@secpay.com

Mobile 07917 463906

DDI 01732 300212




Re: Tomcat session replication/cluster

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
> Hi Guys,
>
> I appear to be finally getting somewhere with the in-memory state
> replication but am now getting the following error when starting up my
> tomcat instances:
>
> WARNING: Manager [/jsp-examples], requesting session state from
> org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
> alive=74436]. This operation will timeout if no session state has been
> received within 60 seconds. 23-Jun-2006 13:27:38
> org.apache.catalina.cluster.session.DeltaManager waitForSendAllSessions
> SEVERE: Manager [/jsp-examples]: No session state send at 23/06/06
> 13:26 received, timing out after 60,140 ms. 23-Jun-2006 13:27:38
> org.apache.catalina.core.ApplicationContext log INFO: ContextListener:
> contextInitialized() 23-Jun-2006 13:27:38
> org.apache.catalina.core.ApplicationContext log INFO: SessionListener:
> contextInitialized() 23-Jun-2006 13:27:38
> org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
> on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain start
> INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006 13:27:38
> org.apache.catalina.storeconfig.StoreLoader load INFO: Find registry
> server-registry.xml at classpath resource 23-Jun-2006 13:27:39
> org.apache.catalina.startup.Catalina start INFO: Server startup in
> 67102 ms
>
> Can anyone point me in the right direction as to why the session state
> is not being replicated?
>   
Two things to check:
1. What does the other server log say, maybe there is an error there, 
does the other server know of this server?
2. your server.xml, you would need to provide us with a little bit more info
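
A third thing, if you want to rule out the network: a minimal
standalone sketch (my own, not part of Tomcat) that joins the same
multicast group/port as the cluster config; run it on each box and see
whether the other nodes' heartbeats actually arrive:

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class McastCheck {
    public static void main(String[] args) throws Exception {
        // same group/port as mcastAddress="228.0.0.4" mcastPort="45564"
        MulticastSocket socket = new MulticastSocket(45564);
        socket.joinGroup(InetAddress.getByName("228.0.0.4"));
        byte[] buf = new byte[1024];
        while (true) {
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until a multicast packet arrives
            System.out.println("heartbeat from " + packet.getAddress());
        }
    }
}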

Filip



Re: Tomcat session replication/cluster

Posted by Sean O'Reilly <se...@secpay.com>.
On Thu, 22 Jun 2006 15:39:09 +0100
Pid <p...@pidster.com> wrote:

> In each case it's the ROOT context, so is it appending "" where
> there's no Context name/path: i.e.
> 
>  SEVERE: Context manager doesn't exist:host1+context1
> 
> where context1=""
> ?
> 
> It would still seem that the name parameter supplied to
> getManagerName is carrying over its previous value, and appending
> again, rather than resetting.  I'm not familiar enough with the code
> to see where it's occurring though.
> 
> 
> 
> Pid wrote:
> > OK,
> > 
> > I'm probably being dense here.
> > 
> > (There's only 1 context in each host, the ROOT context)
> > If I take the Host/Context offline in one node and restart it, the
> > logs on that node start showing the following:
> > 
> >  SEVERE: Context manager doesn't exist:host1
> > 
> > As the Context doesn't exist, which is the same message that appears
> > briefly in the logs during a restart, until that particular Host is
> > loaded (under normal circumstances).
> > 
> > This much I understand, and provides no problems for me.
> > 
> > 
> > With all Hosts available on each node of the cluster, I then update
> > the Context on one Host, (by adding a new jar, say).  The Context
> > has reloadable="true", so it does just that.
> > 
> > Once that context has updated, the other nodes start seeing:
> > 
> >  SEVERE: Context manager doesn't exist:host1host1
> > 
> > If I reload the context again, (without restarting the server), I
> > see this:
> > 
> >  SEVERE: Context manager doesn't exist:host1host1host1
> > 
> > I could go on, but I think you can see where this is going...
> > 
> > 
> > 
> > Peter Rossbach wrote:
> >> Hmm,
> >>
> >> look at o.a.c.cluster.tcp.SimpleTcpCluster
> >>
> >> L 626ff
> >>     private String getManagerName(String name, Manager manager) {
> >>         String clusterName = name ;
> >>         if(getContainer() instanceof Engine) {
> >>             Container context = manager.getContainer() ;
> >>             if(context != null && context instanceof Context) {
> >>                 Container host = ((Context)context).getParent();
> >>                 if(host != null && host instanceof Host)
> >>                     clusterName = host.getName()  + name ;
> >>             }
> >>         }
> >>         return clusterName;
> >>     }
> >>
> >>
> >> You can see we append "hostname + context" when the cluster sits
> >> at the Engine level.
> >>
> >> Peter
> >>
> >>
> >>
> >> Am 22.06.2006 um 10:32 schrieb Pid:
> >>
> >>>
> >>> Filip Hanik - Dev Lists wrote:
> >>>> if the cluster is put in the engine element, the context names
> >>>> are prefixed with the engine name, since you can have multiple
> >>>> contexts with the same name in different host
> >>>> when reloading a context, you'll get these errors cause the
> >>>> context is not available during the reload
> >>>> this will be fixed with the new Apache Tribes module
> >>>> Filip
> >>> I understand that the context is not available during reload.
> >>> After reload has completed, the error persists.
> >>>
> >>> My Engine name is Catalina, it looks like the cluster isn't
> >>> sending the engine name, but the context name, appended to itself.
> >>>
> >>> You're implying that it should send Catalina+website1, but it's
> >>> sending website1+website1 instead.
> >>>
> >>> After startup:
> >>> Node1 sees Node2 send "website2"
> >>> Node2 sees Node1 send "website1"
> >>>
> >>> After context on Node1 is finished reloading:
> >>> Node1 sees Node2 send "website2"
> >>> Node2 sees Node1 send "website1website1"
> >>>
> >>> I think that the context name is being appended to itself.
> >>>
> >>>
> >>>> Pid wrote:
> >>>>> I'm seeing an issue on 5.5.17 with a 2 node cluster config.
> >>>>> When a context is reloaded, it sends the context node name
> >>>>> incorrectly to the cluster.
> >>>>> E.g. context is called "website1"
> >>>>>
> >>>>> SEVERE: Context manager doesn't exist:website1website1
> >>>>>
> >>>>> The config I'm using is exactly the same as the default from
> >>>>> server.xml,
> >>>>> except the cluster is defined in Engine, rather than each Host.
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>> Filip Hanik - Dev Lists wrote:
> >>>>>
> >>>>>> also, use Tomcat 5.5.17
> >>>>>>
> >>>>>> Sean O'Reilly wrote:
> >>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> I am trying to get in-memory session replication working and
> >>>>>>> am testing
> >>>>>>> running 3 separate tomcat instances on the same server.
> >>>>>>>
> >>>>>>> I am using tomcat-5.5.15 and apache-2.0.54 with jk2.
> >>>>>>>
> >>>>>>> Whenever I run my test app although it should be doing
> >>>>>>> round-robin load
> >>>>>>> balancing it doesn't switch to another instance of tomcat
> >>>>>>> until the eighth request and does not appear to have sent the
> >>>>>>> session information
> >>>>>>> across as the session ID changes.
> >>>>>>>
> >>>>>>> Here are my server.xml and workers2.properties files
> >>>>>>>
> >>>>>>> server.xml
> >>>>>>>
> >>>>>>> <Server port="8005" shutdown="SHUTDOWN">
> >>>>>>>
> >>>>>>>   <!-- Comment these entries out to disable JMX MBeans support
> >>>>>>> used for
> >>>>>>> the        administration web application -->
> >>>>>>>   <Listener
> >>>>>>> className="org.apache.catalina.core.AprLifecycleListener" />
> >>>>>>>   <Listener
> >>>>>>> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
> >>>>>>>   <Listener
> >>>>>>> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
> >>>>>>>
> >>>>>>> />
> >>>>>>>   <Listener
> >>>>>>> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>   <!-- Global JNDI resources -->
> >>>>>>>   <GlobalNamingResources>
> >>>>>>>
> >>>>>>>     <!-- Test entry for demonstration purposes -->
> >>>>>>>     <Environment name="simpleValue" type="java.lang.Integer"
> >>>>>>> value="30"/>
> >>>>>>>
> >>>>>>>     <!-- Editable user database that can also be used by
> >>>>>>>          UserDatabaseRealm to authenticate users -->
> >>>>>>>     <Resource name="UserDatabase" auth="Container"
> >>>>>>>               type="org.apache.catalina.UserDatabase"
> >>>>>>>        description="User database that can be updated and
> >>>>>>> saved"
> >>>>>>>
> >>>>>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
> >>>>>>>           pathname="conf/tomcat-users.xml" />
> >>>>>>>
> >>>>>>>   </GlobalNamingResources>
> >>>>>>>
> >>>>>>>   <!-- A "Service" is a collection of one or more
> >>>>>>> "Connectors" that share
> >>>>>>>        a single "Container" (and therefore the web
> >>>>>>> applications visible
> >>>>>>>        within that Container).  Normally, that Container is an
> >>>>>>> "Engine",
> >>>>>>>        but this is not required.
> >>>>>>>
> >>>>>>>        Note:  A "Service" is not itself a "Container", so you
> >>>>>>> may not define subcomponents such as "Valves" or "Loggers" at
> >>>>>>> this level.
> >>>>>>>    -->
> >>>>>>>
> >>>>>>>   <!-- Define the Tomcat Stand-Alone Service -->
> >>>>>>>   <Service name="Catalina">
> >>>>>>>
> >>>>>>>     <!-- A "Connector" represents an endpoint by which
> >>>>>>> requests are received
> >>>>>>>          and responses are returned.  Each Connector passes
> >>>>>>> requests on
> >>>>>>> to the
> >>>>>>>          associated "Container" (normally an Engine) for
> >>>>>>> processing.
> >>>>>>>
> >>>>>>>          By default, a non-SSL HTTP/1.1 Connector is
> >>>>>>> established on port 8080.
> >>>>>>>          You can also enable an SSL HTTP/1.1 Connector on port
> >>>>>>> 8443 by
> >>>>>>>          following the instructions below and uncommenting
> >>>>>>> the second Connector
> >>>>>>>          entry.  SSL support requires the following steps
> >>>>>>> (see the SSL
> >>>>>>> Config
> >>>>>>>          HOWTO in the Tomcat 5 documentation bundle for more
> >>>>>>> detailed instructions):
> >>>>>>>          * If your JDK version 1.3 or prior, download and
> >>>>>>> install JSSE
> >>>>>>> 1.0.2 or
> >>>>>>>            later, and put the JAR files into
> >>>>>>> "$JAVA_HOME/jre/lib/ext".
> >>>>>>>          * Execute:
> >>>>>>>              %JAVA_HOME%\bin\keytool -genkey -alias tomcat
> >>>>>>> -keyalg RSA
> >>>>>>> (Windows)
> >>>>>>>              $JAVA_HOME/bin/keytool -genkey -alias tomcat
> >>>>>>> -keyalg RSA (Unix)
> >>>>>>>            with a password value of "changeit" for both the
> >>>>>>> certificate
> >>>>>>> and
> >>>>>>>            the keystore itself.
> >>>>>>>
> >>>>>>>          By default, DNS lookups are enabled when a web
> >>>>>>> application calls
> >>>>>>>          request.getRemoteHost().  This can have an adverse
> >>>>>>> impact on performance, so you can disable it by setting the
> >>>>>>>          "enableLookups" attribute to "false".  When DNS
> >>>>>>> lookups are disabled,
> >>>>>>>          request.getRemoteHost() will return the String
> >>>>>>> version of the
> >>>>>>>          IP address of the remote client.
> >>>>>>>     -->
> >>>>>>>
> >>>>>>>     <!-- Define a non-SSL HTTP/1.1 Connector on port 8080
> >>>>>>>     <Connector port="8080" maxHttpHeaderSize="8192"
> >>>>>>>                maxThreads="150" minSpareThreads="25"
> >>>>>>> maxSpareThreads="75"
> >>>>>>>                enableLookups="false" redirectPort="8443"
> >>>>>>> acceptCount="100"
> >>>>>>>                connectionTimeout="20000"
> >>>>>>> disableUploadTimeout="true" />
> >>>>>>> -->
> >>>>>>>     <!-- Note : To disable connection timeouts, set
> >>>>>>> connectionTimeout value
> >>>>>>>      to 0 -->
> >>>>>>>         <!-- Note : To use gzip compression you could set the
> >>>>>>> following
> >>>>>>> properties :
> >>>>>>>                    compression="on"
> >>>>>>> compressionMinSize="2048"
> >>>>>>> noCompressionUserAgents="gozilla, traviata"
> >>>>>>> compressableMimeType="text/html,text/xml"
> >>>>>>>     -->
> >>>>>>>
> >>>>>>>     <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
> >>>>>>>     <!--
> >>>>>>>     <Connector port="8443" maxHttpHeaderSize="8192"
> >>>>>>>                maxThreads="150" minSpareThreads="25"
> >>>>>>> maxSpareThreads="75"
> >>>>>>>                enableLookups="false"
> >>>>>>> disableUploadTimeout="true" acceptCount="100" scheme="https"
> >>>>>>> secure="true" clientAuth="false" sslProtocol="TLS" />
> >>>>>>>     -->
> >>>>>>>
> >>>>>>>     <!-- Define an AJP 1.3 Connector on port 8009 -->
> >>>>>>>     <Connector port="8009"
> >>>>>>> enableLookups="false" redirectPort="8443"
> >>>>>>> protocol="AJP/1.3" />
> >>>>>>>
> >>>>>>>     <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
> >>>>>>>     <!-- See proxy documentation for more information about
> >>>>>>> using this.
> >>>>>>> -->
> >>>>>>>     <!--
> >>>>>>>     <Connector port="8082"                maxThreads="150"
> >>>>>>> minSpareThreads="25"
> >>>>>>> maxSpareThreads="75"
> >>>>>>>                enableLookups="false" acceptCount="100"
> >>>>>>> connectionTimeout="20000"
> >>>>>>>                proxyPort="80" disableUploadTimeout="true" />
> >>>>>>>     -->
> >>>>>>>
> >>>>>>>     <!-- An Engine represents the entry point (within
> >>>>>>> Catalina) that processes
> >>>>>>>          every request.  The Engine implementation for Tomcat
> >>>>>>> stand alone
> >>>>>>>          analyzes the HTTP headers included with the request,
> >>>>>>> and passes them
> >>>>>>>          on to the appropriate Host (virtual host). -->
> >>>>>>>
> >>>>>>>     <!-- You should set jvmRoute to support load-balancing
> >>>>>>> via AJP ie :
> >>>>>>> -->
> >>>>>>>     <Engine name="Standalone" defaultHost="localhost"
> >>>>>>> jvmRoute="Tomcat5A">                       <!-- Define the
> >>>>>>> top level container in our container
> >>>>>>> hierarchy
> >>>>>>>     <Engine name="Catalina" defaultHost="localhost"> -->
> >>>>>>>
> >>>>>>>       <!-- The request dumper valve dumps useful debugging
> >>>>>>> information
> >>>>>>> about
> >>>>>>>            the request headers and cookies that were
> >>>>>>> received, and the
> >>>>>>> response
> >>>>>>>            headers and cookies that were sent, for all
> >>>>>>> requests received by
> >>>>>>>            this instance of Tomcat.  If you care only about
> >>>>>>> requests to
> >>>>>>> a
> >>>>>>>            particular virtual host, or a particular
> >>>>>>> application, nest this
> >>>>>>>            element inside the corresponding <Host> or
> >>>>>>> <Context> entry instead.
> >>>>>>>
> >>>>>>>            For a similar mechanism that is portable to all
> >>>>>>> Servlet 2.4
> >>>>>>>            containers, check out the "RequestDumperFilter"
> >>>>>>> Filter in the
> >>>>>>>            example application (the source for this filter
> >>>>>>> may be found
> >>>>>>> in
> >>>>>>>           
> >>>>>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
> >>>>>>>
> >>>>>>>            Request dumping is disabled by default.  Uncomment
> >>>>>>> the following
> >>>>>>>            element to enable it. -->
> >>>>>>>       <!--
> >>>>>>>       <Valve
> >>>>>>> className="org.apache.catalina.valves.RequestDumperValve"/>
> >>>>>>>       -->
> >>>>>>>
> >>>>>>>       <!-- Because this Realm is here, an instance will be
> >>>>>>> shared globally -->
> >>>>>>>
> >>>>>>>       <!-- This Realm uses the UserDatabase configured in the
> >>>>>>> global JNDI
> >>>>>>>            resources under the key "UserDatabase".  Any edits
> >>>>>>>            that are performed against this UserDatabase are
> >>>>>>> immediately
> >>>>>>>            available for use by the Realm.  -->
> >>>>>>>       <Realm
> >>>>>>> className="org.apache.catalina.realm.UserDatabaseRealm"
> >>>>>>> resourceName="UserDatabase"/>
> >>>>>>>
> >>>>>>>       <!-- Comment out the old realm but leave here for now in
> >>>>>>> case we
> >>>>>>>            need to go back quickly -->
> >>>>>>>       <!--
> >>>>>>>       <Realm
> >>>>>>> className="org.apache.catalina.realm.MemoryRealm" /> -->
> >>>>>>>
> >>>>>>>       <!-- Replace the above Realm with one of the following
> >>>>>>> to get a Realm
> >>>>>>>            stored in a database and accessed via JDBC -->
> >>>>>>>
> >>>>>>>       <!--
> >>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
> >>>>>>>              driverName="org.gjt.mm.mysql.Driver"
> >>>>>>>           connectionURL="jdbc:mysql://localhost/authority"
> >>>>>>>          connectionName="test" connectionPassword="test"
> >>>>>>>               userTable="users" userNameCol="user_name"
> >>>>>>> userCredCol="user_pass"
> >>>>>>>           userRoleTable="user_roles"
> >>>>>>> roleNameCol="role_name" /> -->
> >>>>>>>
> >>>>>>>       <!--
> >>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
> >>>>>>>              driverName="oracle.jdbc.driver.OracleDriver"
> >>>>>>>           connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
> >>>>>>>          connectionName="scott" connectionPassword="tiger"
> >>>>>>>               userTable="users" userNameCol="user_name"
> >>>>>>> userCredCol="user_pass"
> >>>>>>>           userRoleTable="user_roles"
> >>>>>>> roleNameCol="role_name" /> -->
> >>>>>>>
> >>>>>>>       <!--
> >>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
> >>>>>>>              driverName="sun.jdbc.odbc.JdbcOdbcDriver"
> >>>>>>>           connectionURL="jdbc:odbc:CATALINA"
> >>>>>>>               userTable="users" userNameCol="user_name"
> >>>>>>> userCredCol="user_pass"
> >>>>>>>           userRoleTable="user_roles"
> >>>>>>> roleNameCol="role_name" /> -->
> >>>>>>>
> >>>>>>>       <!-- Define the default virtual host
> >>>>>>>            Note: XML Schema validation will not work with
> >>>>>>> Xerces 2.2. -->
> >>>>>>>       <Host name="localhost" appBase="webapps"
> >>>>>>>        unpackWARs="true" autoDeploy="true"
> >>>>>>>        xmlValidation="false" xmlNamespaceAware="false">
> >>>>>>>
> >>>>>>>         <!-- Defines a cluster for this node,
> >>>>>>>              By defining this element, means that every
> >>>>>>> manager will be
> >>>>>>> changed.
> >>>>>>>              So when running a cluster, only make sure that
> >>>>>>> you have webapps in there
> >>>>>>>              that need to be clustered and remove the other
> >>>>>>> ones. A cluster has the following parameters:
> >>>>>>>
> >>>>>>>              className = the fully qualified name of the
> >>>>>>> cluster class
> >>>>>>>
> >>>>>>>              clusterName = a descriptive name for your
> >>>>>>> cluster, can be
> >>>>>>> anything
> >>>>>>>
> >>>>>>>              mcastAddr = the multicast address, has to be the
> >>>>>>> same for
> >>>>>>> all the nodes
> >>>>>>>
> >>>>>>>              mcastPort = the multicast port, has to be the
> >>>>>>> same for all
> >>>>>>> the nodes
> >>>>>>>                           mcastBindAddr = bind the multicast
> >>>>>>> socket to
> >>>>>>> a specific
> >>>>>>> address
> >>>>>>>                           mcastTTL = the multicast TTL if you
> >>>>>>> want to limit your
> >>>>>>> broadcast
> >>>>>>>                           mcastSoTimeout = the multicast
> >>>>>>> readtimeout mcastFrequency = the number of milliseconds in
> >>>>>>> between sending a "I'm alive" heartbeat
> >>>>>>>
> >>>>>>>              mcastDropTime = the number a milliseconds before
> >>>>>>> a node is
> >>>>>>> considered "dead" if no heartbeat is received
> >>>>>>>
> >>>>>>>              tcpThreadCount = the number of threads to handle
> >>>>>>> incoming
> >>>>>>> replication requests, optimal would be the same amount of
> >>>>>>> threads as nodes
> >>>>>>>              tcpListenAddress = the listen address (bind
> >>>>>>> address) for TCP cluster request on this
> >>>>>>> host,                                 in case of multiple
> >>>>>>> ethernet cards. auto means that address becomes
> >>>>>>>
> >>>>>>> InetAddress.getLocalHost().getHostAddress()
> >>>>>>>
> >>>>>>>              tcpListenPort = the tcp listen port
> >>>>>>>
> >>>>>>>              tcpSelectorTimeout = the timeout (ms) for the
> >>>>>>> Selector.select() method in case the OS
> >>>>>>>                                   has a wakup bug in
> >>>>>>> java.nio. Set to 0
> >>>>>>> for no timeout
> >>>>>>>
> >>>>>>>              printToScreen = true means that managers will
> >>>>>>> also print to std.out
> >>>>>>>
> >>>>>>>              expireSessionsOnShutdown = true means that
> >>>>>>>              useDirtyFlag = true means that we only replicate
> >>>>>>> a session
> >>>>>>> after setAttribute,removeAttribute has been called.
> >>>>>>>                             false means to replicate the
> >>>>>>> session after
> >>>>>>> each request.
> >>>>>>>                             false means that replication would
> >>>>>>> work for
> >>>>>>> the following piece of code: (only for
> >>>>>>> SimpleTcpReplicationManager) <%
> >>>>>>>                             HashMap map =
> >>>>>>> (HashMap)session.getAttribute("map");
> >>>>>>>                             map.put("key","value");
> >>>>>>>                             %>
> >>>>>>>              replicationMode = can be either 'pooled',
> >>>>>>> 'synchronous' or
> >>>>>>> 'asynchronous'.
> >>>>>>>                                * Pooled means that the
> >>>>>>> replication happens using several sockets in a synchronous
> >>>>>>> way. Ie, the data gets replicated, then the request return.
> >>>>>>> This is the same as the 'synchronous' setting except it uses
> >>>>>>> a pool of sockets, hence it is multithreaded. This is the
> >>>>>>> fastest and safest configuration. To use this, also increase
> >>>>>>> the nr of tcp threads that you have dealing with replication.
> >>>>>>>                                * Synchronous means that the
> >>>>>>> thread that
> >>>>>>> executes the request, is also the
> >>>>>>>                                thread the replicates the data
> >>>>>>> to the other nodes, and will not return until all
> >>>>>>>                                nodes have received the
> >>>>>>> information.
> >>>>>>>                                * Asynchronous means that
> >>>>>>> there is a specific 'sender' thread for each cluster node,
> >>>>>>>                                so the request thread will
> >>>>>>> queue the replication request into a "smart" queue,
> >>>>>>>                                and then return to the client.
> >>>>>>>                                The "smart" queue is a queue
> >>>>>>> where when
> >>>>>>> a session is added to the queue, and the same session
> >>>>>>>                                already exists in the queue
> >>>>>>> from a previous request, that session will be replaced
> >>>>>>>                                in the queue instead of
> >>>>>>> replicating two
> >>>>>>> requests. This almost never happens, unless there is a
> >>>>>>>                                large network delay.
> >>>>>>>         -->                     <!--
> >>>>>>>             When configuring for clustering, you also add in a
> >>>>>>> valve to
> >>>>>>> catch all the requests
> >>>>>>>             coming in, at the end of the request, the session
> >>>>>>> may or may not be replicated.
> >>>>>>>             A session is replicated if and only if all the
> >>>>>>> conditions are met:
> >>>>>>>             1. useDirtyFlag is true or setAttribute or
> >>>>>>> removeAttribute
> >>>>>>> has been called AND
> >>>>>>>             2. a session exists (has been created)
> >>>>>>>             3. the request is not trapped by the "filter"
> >>>>>>> attribute
> >>>>>>>
> >>>>>>>             The filter attribute is to filter out requests
> >>>>>>> that could not modify the session,
> >>>>>>>             hence we don't replicate the session after the
> >>>>>>> end of this
> >>>>>>> request.
> >>>>>>>             The filter is negative, ie, anything you put in
> >>>>>>> the filter,
> >>>>>>> you mean to filter out,
> >>>>>>>             ie, no replication will be done on requests that
> >>>>>>> match one
> >>>>>>> of the filters.
> >>>>>>>             The filter attribute is delimited by ;, so you
> >>>>>>> can't escape
> >>>>>>> out ; even if you wanted to.
> >>>>>>>
> >>>>>>>             filter=".*\.gif;.*\.js;" means that we will not
> >>>>>>> replicate the session after requests with the URI
> >>>>>>>             ending with .gif and .js are intercepted.
> >>>>>>>                         The deployer element can be used to
> >>>>>>> deploy apps cluster
> >>>>>>> wide.
> >>>>>>>             Currently the deployment only deploys/undeploys to
> >>>>>>> working
> >>>>>>> members in the cluster
> >>>>>>>             so no WARs are copied upons startup of a broken
> >>>>>>> node. The deployer watches a directory (watchDir) for WAR
> >>>>>>> files when watchEnabled="true"
> >>>>>>>             When a new war file is added the war gets
> >>>>>>> deployed to the local instance,
> >>>>>>>             and then deployed to the other instances in the
> >>>>>>> cluster. When a war file is deleted from the watchDir the war
> >>>>>>> is undeployed locally             and cluster wide
> >>>>>>>         -->
> >>>>>>>                <Cluster
> >>>>>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
> >>>>>>>
> >>>>>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
> >>>>>>>                  expireSessionsOnShutdown="false"
> >>>>>>>                  useDirtyFlag="true"
> >>>>>>>                  notifyListenersOnReplication="true">
> >>>>>>>
> >>>>>>>             <Membership
> >>>>>>> className="org.apache.catalina.cluster.mcast.McastService"
> >>>>>>>                 mcastAddr="228.0.0.4"
> >>>>>>>                 mcastPort="45564"
> >>>>>>>                 mcastFrequency="500"
> >>>>>>>                 mcastDropTime="3000"/>
> >>>>>>>
> >>>>>>>             <Receiver
> >>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
> >>>>>>>                 tcpListenAddress="auto"
> >>>>>>>                 tcpListenPort="4001"
> >>>>>>>                 tcpSelectorTimeout="100"
> >>>>>>>                 tcpThreadCount="6"/>
> >>>>>>>
> >>>>>>>             <Sender
> >>>>>>>
> >>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
> >>>>>>>                 replicationMode="pooled"
> >>>>>>>                 ackTimeout="15000"/>
> >>>>>>>
> >>>>>>>             <Valve
> >>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
> >>>>>>>
> >>>>>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>                                <Deployer
> >>>>>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
> >>>>>>>                       tempDir="/tmp/war-temp/"
> >>>>>>>                       deployDir="/tmp/war-deploy/"
> >>>>>>>                       watchDir="/tmp/war-listen/"
> >>>>>>>                       watchEnabled="false"/>
> >>>>>>>                                   <ClusterListener
> >>>>>>> className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
> >>>>>>>
> >>>>>>>
> >>>>>>>         </Cluster>
> >>>>>>>
> >>>>>>>
> >>>>>>>         <!-- Normally, users must authenticate themselves to
> >>>>>>> each web app
> >>>>>>>              individually.  Uncomment the following entry if
> >>>>>>> you would
> >>>>>>> like
> >>>>>>>              a user to be authenticated the first time they
> >>>>>>> encounter a
> >>>>>>>              resource protected by a security constraint, and
> >>>>>>> then have
> >>>>>>> that
> >>>>>>>              user identity maintained across *all* web
> >>>>>>> applications contained
> >>>>>>>              in this virtual host. -->
> >>>>>>>         <!--
> >>>>>>>         <Valve
> >>>>>>> className="org.apache.catalina.authenticator.SingleSignOn" />
> >>>>>>>         -->
> >>>>>>>
> >>>>>>>         <!-- Access log processes all requests for this
> >>>>>>> virtual host. By
> >>>>>>>              default, log files are created in the "logs"
> >>>>>>> directory relative to
> >>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
> >>>>>>> different
> >>>>>>>              directory with the "directory" attribute.
> >>>>>>> Specify either
> >>>>>>> a relative
> >>>>>>>              (to $CATALINA_HOME) or absolute path to the
> >>>>>>> desired directory.
> >>>>>>>         -->
> >>>>>>>         <!--
> >>>>>>>         <Valve
> >>>>>>> className="org.apache.catalina.valves.AccessLogValve"
> >>>>>>> directory="logs"  prefix="localhost_access_log." suffix=".txt"
> >>>>>>>                  pattern="common" resolveHosts="false"/>
> >>>>>>>         -->
> >>>>>>>
> >>>>>>>         <!-- Access log processes all requests for this
> >>>>>>> virtual host. By
> >>>>>>>              default, log files are created in the "logs"
> >>>>>>> directory relative to
> >>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
> >>>>>>> different
> >>>>>>>              directory with the "directory" attribute.
> >>>>>>> Specify either
> >>>>>>> a relative
> >>>>>>>              (to $CATALINA_HOME) or absolute path to the
> >>>>>>> desired directory.
> >>>>>>>              This access log implementation is optimized for
> >>>>>>> maximum performance,
> >>>>>>>              but is hardcoded to support only the "common" and
> >>>>>>> "combined" patterns.
> >>>>>>>         -->
> >>>>>>>         <!--
> >>>>>>>         <Valve
> >>>>>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
> >>>>>>>                  directory="logs"
> >>>>>>> prefix="localhost_access_log." suffix=".txt"
> >>>>>>>                  pattern="common" resolveHosts="false"/>
> >>>>>>>         -->
> >>>>>>>         <!-- Access log processes all requests for this
> >>>>>>> virtual host. By
> >>>>>>>              default, log files are created in the "logs"
> >>>>>>> directory relative to
> >>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
> >>>>>>> different
> >>>>>>>              directory with the "directory" attribute.
> >>>>>>> Specify either
> >>>>>>> a relative
> >>>>>>>              (to $CATALINA_HOME) or absolute path to the
> >>>>>>> desired directory.
> >>>>>>>              This access log implementation is optimized for
> >>>>>>> maximum performance,
> >>>>>>>              but is hardcoded to support only the "common" and
> >>>>>>> "combined" patterns.
> >>>>>>>
> >>>>>>>              This valve use NIO direct Byte Buffer to
> >>>>>>> asynchornously store the
> >>>>>>>              log.
> >>>>>>>         -->
> >>>>>>>         <!--
> >>>>>>>         <Valve
> >>>>>>> className="org.apache.catalina.valves.ByteBufferAccessLogValve"
> >>>>>>>                  directory="logs"
> >>>>>>> prefix="localhost_access_log." suffix=".txt"
> >>>>>>>                  pattern="common" resolveHosts="false"/>
> >>>>>>>         -->
> >>>>>>>
> >>>>>>>       </Host>
> >>>>>>>
> >>>>>>>     </Engine>
> >>>>>>>
> >>>>>>>   </Service>
> >>>>>>>
> >>>>>>> </Server>
> >>>>>>>
> >>>>>>>
> >>>>>>> workers2.properties
> >>>>>>>
> >>>>>>> [logger.apache2]
> >>>>>>> file="/etc/httpd/conf/logs/error.log"
> >>>>>>> level=INFO
> >>>>>>> debug=1
> >>>>>>>
> >>>>>>> # Config settings
> >>>>>>> [config]
> >>>>>>> file=/etc/httpd/conf/workers2.properties
> >>>>>>> debug=0
> >>>>>>>
> >>>>>>> # Shared memory file settings
> >>>>>>> [shm]
> >>>>>>> file=/etc/httpd/conf/jk2.shm
> >>>>>>> size=100000
> >>>>>>>
> >>>>>>> # Communcation channel settings for "Tomcat5A"
> >>>>>>> [channel.socket:localhost:8009]
> >>>>>>> host=localhost
> >>>>>>> port=8009
> >>>>>>> tomcatId=Tomcat5A
> >>>>>>> group=balanced
> >>>>>>> lb_factor=1
> >>>>>>> route=Tomcat5A
> >>>>>>>
> >>>>>>>
> >>>>>>> # Declare a Tomcat5A worker
> >>>>>>> [ajp13:localhost:8009]
> >>>>>>> channel=channel.socket:Tomcat5A
> >>>>>>>
> >>>>>>>
> >>>>>>> # Communcation channel settings for "Tomcat5B"
> >>>>>>> [channel.socket:localhost:8010]
> >>>>>>> host=localhost
> >>>>>>> port=8010
> >>>>>>> tomcatId=Tomcat5B
> >>>>>>> group=balanced
> >>>>>>> lb_factor=1
> >>>>>>> route=Tomcat5B
> >>>>>>>
> >>>>>>>
> >>>>>>> # Declare a Tomcat5B worker
> >>>>>>> [ajp13:localhost:8010]
> >>>>>>> channel=channel.socket:Tomcat5B
> >>>>>>>
> >>>>>>>
> >>>>>>> # Communcation channel settings for "Tomcat5C"
> >>>>>>> [channel.socket:localhost:8011]
> >>>>>>> host=localhost
> >>>>>>> port=8011
> >>>>>>> tomcatId=Tomcat5C
> >>>>>>> group=balanced
> >>>>>>> lb_factor=1
> >>>>>>> route=Tomcat5C
> >>>>>>>
> >>>>>>>
> >>>>>>> # Declare a Tomcat5C worker
> >>>>>>> [ajp13:localhost:8011]
> >>>>>>> channel=channel.socket:Tomcat5C
> >>>>>>>
> >>>>>>> # Load balanced Worker
> >>>>>>> [lb:balanced]
> >>>>>>> worker=ajp13:localhost:8009
> >>>>>>> worker=ajp13:localhost:8010
> >>>>>>> worker=ajp13:localhost:8011
> >>>>>>> timeout=90
> >>>>>>> attempts=3
> >>>>>>> recovery=30
> >>>>>>> stickySession=0
> >>>>>>> noWorkerMsg=Server Busy please retry later.
> >>>>>>> noWorkerCodeMsg=503
> >>>>>>>
> >>>>>>> # URI mappings for the tomcat worker
> >>>>>>> # Map the "jsp-examples" web application context to the web
> >>>>>>> server URI
> >>>>>>> space
> >>>>>>> [uri:/jsp-examples/*]
> >>>>>>> info= Mapping for jsp-examples context for tomcat
> >>>>>>> context=/jsp-examples
> >>>>>>> group=balanced
> >>>>>>>
> >>>>>>> [shm]
> >>>>>>> file=/etc/httpd/conf/jk2.shm
> >>>>>>> size=1000000
> >>>>>>>
> >>>>>>> [uri:/servlets-examples/*]
> >>>>>>> context=/servlets-examples
> >>>>>>> group=balanced
> >>>>>>>
> >>>>>>> # Define a status worker
> >>>>>>> [status:]
> >>>>>>>
> >>>>>>> # Status URI mapping
> >>>>>>> [uri:/jkstatus/*]
> >>>>>>> group=status
> >>>>>>>
> >>>>>>>
> >>>>>>> obviously the server.xml files on the other 2 instances of
> >>>>>>> tomcat are the same except the ports and jvmRoute have been
> >>>>>>> changed.
> >>>>>>>
> >>>>>>>
> >>>>>>> Can anyone see where I am going wrong?
> >>>>>>>
> >>>>>>> Thanks
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>

Hi Guys,

I appear to be finally getting somewhere with the in-memory state
replication but am now getting the following error when starting up my
tomcat instances:

WARNING: Manager [/jsp-examples], requesting session state from
org.apache.catalina.cluster.mcast.McastMember[tcp://192.168.4.121:4001,catalina,192.168.4.121,4001,
alive=74436]. This operation will timeout if no session state has been
received within 60 seconds. 23-Jun-2006 13:27:38
org.apache.catalina.cluster.session.DeltaManager waitForSendAllSessions
SEVERE: Manager [/jsp-examples]: No session state send at 23/06/06
13:26 received, timing out after 60,140 ms. 23-Jun-2006 13:27:38
org.apache.catalina.core.ApplicationContext log INFO: ContextListener:
contextInitialized() 23-Jun-2006 13:27:38
org.apache.catalina.core.ApplicationContext log INFO: SessionListener:
contextInitialized() 23-Jun-2006 13:27:38
org.apache.jk.common.ChannelSocket init INFO: JK: ajp13 listening
on /0.0.0.0:8009 23-Jun-2006 13:27:38 org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=0/224  config=null 23-Jun-2006 13:27:38
org.apache.catalina.storeconfig.StoreLoader load INFO: Find registry
server-registry.xml at classpath resource 23-Jun-2006 13:27:39
org.apache.catalina.startup.Catalina start INFO: Server startup in
67102 ms

Can anyone point me in the right direction as to why the session state
is not being replicated?

Cheers

-- 
Sean O'Reilly
Systems Administrator
SECPay Ltd

http://www.secpay.com

s.oreilly@secpay.com

Mobile 07917 463906

DDI 01732 300212




Re: Tomcat session replication/cluster

Posted by Pid <p...@pidster.com>.
In each case it's the ROOT context, so is it appending "" where there's
no Context name/path: i.e.

 SEVERE: Context manager doesn't exist:host1+context1

where context1=""
?

It would still seem that the name parameter supplied to
getManagerName is carrying over its previous value, and appending
again, rather than resetting.  I'm not familiar enough with the code to
see where it's occurring though.
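
To illustrate the suspicion (a toy sketch with hypothetical names, not
the actual Tomcat code path): if the prefixed result is ever stored
back and passed in again as "name" on the next reload, it grows every
time:

public class ManagerNameDemo {
    // same shape as the SimpleTcpCluster.getManagerName quoted below
    static String getManagerName(String name) {
        return "host1" + name;   // clusterName = host.getName() + name
    }

    public static void main(String[] args) {
        String name = "";        // ROOT context, so the context path is ""
        for (int reload = 0; reload < 3; reload++) {
            name = getManagerName(name);  // suspected write-back on reload
            System.out.println(name);
        }
        // prints: host1, host1host1, host1host1host1 -- the log pattern
    }
}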



Pid wrote:
> OK,
> 
> I'm probably being dense here.
> 
> (There's only 1 context in each host, the ROOT context)
> If I take the Host/Context offline in one node and restart it, the logs
> on that node start showing the following:
> 
>  SEVERE: Context manager doesn't exist:host1
> 
> As the Context doesn't exist, which is the same message that appears
> briefly in the logs during a restart, until that particular Host is
> loaded (under normal circumstances).
> 
> This much I understand, and provides no problems for me.
> 
> 
> With all Hosts available on each node of the cluster, I then update the
> Context on one Host, (by adding a new jar, say).  The Context has
> reloadable="true", so it does just that.
> 
> Once that context has updated, the other nodes start seeing:
> 
>  SEVERE: Context manager doesn't exist:host1host1
> 
> If I reload the context again, (without restarting the server), I see this:
> 
>  SEVERE: Context manager doesn't exist:host1host1host1
> 
> I could go on, but I think you can see where this is going...
> 
> 
> 
> Peter Rossbach wrote:
>> Hmm,
>>
>> look at o.a.c.cluster.tcp.SimpleTcpCluster
>>
>> L 626ff
>>     private String getManagerName(String name, Manager manager) {
>>         String clusterName = name ;
>>         if(getContainer() instanceof Engine) {
>>             Container context = manager.getContainer() ;
>>             if(context != null && context instanceof Context) {
>>                 Container host = ((Context)context).getParent();
>>                 if(host != null && host instanceof Host)
>>                     clusterName = host.getName()  + name ;
>>             }
>>         }
>>         return clusterName;
>>     }
>>
>>
You can see we append "hostname + context" when the cluster sits at the
Engine level (e.g. for Host "host1" and the ROOT context path "", the
manager name is "host1").
>>
>> Peter
>>
>>
>>
>> Am 22.06.2006 um 10:32 schrieb Pid:
>>
>>>
>>> Filip Hanik - Dev Lists wrote:
>>>> if the cluster is put in the engine element, the context names are
>>>> prefixed with the engine name, since you can have multiple contexts with
>>>> the same name in different host
>>>> when reloading a context, you'll get these errors cause the context is
>>>> not available during the reload
>>>> this will be fixed with the new Apache Tribes module
>>>> Filip
>>> I understand that the context is not available during reload. After
>>> reload has completed, the error persists.
>>>
>>> My Engine name is Catalina, it looks like the cluster isn't sending the
>>> engine name, but the context name, appended to itself.
>>>
>>> You're implying that it should send Catalina+website1, but it's sending
>>> website1+website1 instead.
>>>
>>> After startup:
>>> Node1 sees Node2 send "website2"
>>> Node2 sees Node1 send "website1"
>>>
>>> After context on Node1 is finished reloading:
>>> Node1 sees Node2 send "website2"
>>> Node2 sees Node1 send "website1website1"
>>>
>>> I think that the context name is being appended to itself.
>>>
>>>
>>>> Pid wrote:
>>>>> I'm seeing an issue on 5.5.17 with a 2 node cluster config.
>>>>> When a context is reloaded, it sends the context node name incorrectly
>>>>> to the cluster.
>>>>> E.g. context is called "website1"
>>>>>
>>>>> SEVERE: Context manager doesn't exist:website1website1
>>>>>
>>>>> The config I'm using is exactly the same as the default from
>>>>> server.xml,
>>>>> except the cluster is defined in Engine, rather than each Host.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Filip Hanik - Dev Lists wrote:
>>>>>
>>>>>> also, use Tomcat 5.5.17
>>>>>>
>>>>>> Sean O'Reilly wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am trying to get in-memory session replication working and am
>>>>>>> testing
>>>>>>> running 3 separate tomcat instances on the same server.
>>>>>>>
>>>>>>> I am using tomcat-5.5.15 and apache-2.0.54 with jk2.
>>>>>>>
>>>>>>> Whenever I run my test app although it should be doing round-robin
>>>>>>> load
>>>>>>> balancing it doesn't switch to another instance of tomcat until the
>>>>>>> eighth request and does not appear to have sent the session
>>>>>>> information
>>>>>>> across as the session ID changes.
>>>>>>>
>>>>>>> Here are my server.xml and workers2.properties files
>>>>>>>
>>>>>>> server.xml
>>>>>>>
>>>>>>> <Server port="8005" shutdown="SHUTDOWN">
>>>>>>>
>>>>>>>   <!-- Comment these entries out to disable JMX MBeans support
>>>>>>> used for
>>>>>>> the        administration web application -->
>>>>>>>   <Listener
>>>>>>> className="org.apache.catalina.core.AprLifecycleListener" />
>>>>>>>   <Listener
>>>>>>> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>>>>>>   <Listener
>>>>>>> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
>>>>>>>
>>>>>>> />
>>>>>>>   <Listener
>>>>>>> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>   <!-- Global JNDI resources -->
>>>>>>>   <GlobalNamingResources>
>>>>>>>
>>>>>>>     <!-- Test entry for demonstration purposes -->
>>>>>>>     <Environment name="simpleValue" type="java.lang.Integer"
>>>>>>> value="30"/>
>>>>>>>
>>>>>>>     <!-- Editable user database that can also be used by
>>>>>>>          UserDatabaseRealm to authenticate users -->
>>>>>>>     <Resource name="UserDatabase" auth="Container"
>>>>>>>               type="org.apache.catalina.UserDatabase"
>>>>>>>        description="User database that can be updated and saved"
>>>>>>>
>>>>>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>>>>>           pathname="conf/tomcat-users.xml" />
>>>>>>>
>>>>>>>   </GlobalNamingResources>
>>>>>>>
>>>>>>>   <!-- A "Service" is a collection of one or more "Connectors" that
>>>>>>> share
>>>>>>>        a single "Container" (and therefore the web applications
>>>>>>> visible
>>>>>>>        within that Container).  Normally, that Container is an
>>>>>>> "Engine",
>>>>>>>        but this is not required.
>>>>>>>
>>>>>>>        Note:  A "Service" is not itself a "Container", so you may not
>>>>>>>        define subcomponents such as "Valves" or "Loggers" at this
>>>>>>> level.
>>>>>>>    -->
>>>>>>>
>>>>>>>   <!-- Define the Tomcat Stand-Alone Service -->
>>>>>>>   <Service name="Catalina">
>>>>>>>
>>>>>>>     <!-- A "Connector" represents an endpoint by which requests are
>>>>>>> received
>>>>>>>          and responses are returned.  Each Connector passes
>>>>>>> requests on
>>>>>>> to the
>>>>>>>          associated "Container" (normally an Engine) for processing.
>>>>>>>
>>>>>>>          By default, a non-SSL HTTP/1.1 Connector is established on
>>>>>>> port 8080.
>>>>>>>          You can also enable an SSL HTTP/1.1 Connector on port
>>>>>>> 8443 by
>>>>>>>          following the instructions below and uncommenting the second
>>>>>>> Connector
>>>>>>>          entry.  SSL support requires the following steps (see the
>>>>>>> SSL
>>>>>>> Config
>>>>>>>          HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>>>>>          instructions):
>>>>>>>          * If your JDK version 1.3 or prior, download and install
>>>>>>> JSSE
>>>>>>> 1.0.2 or
>>>>>>>            later, and put the JAR files into
>>>>>>> "$JAVA_HOME/jre/lib/ext".
>>>>>>>          * Execute:
>>>>>>>              %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>>>>>> RSA
>>>>>>> (Windows)
>>>>>>>              $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA
>>>>>>> (Unix)
>>>>>>>            with a password value of "changeit" for both the
>>>>>>> certificate
>>>>>>> and
>>>>>>>            the keystore itself.
>>>>>>>
>>>>>>>          By default, DNS lookups are enabled when a web application
>>>>>>> calls
>>>>>>>          request.getRemoteHost().  This can have an adverse impact on
>>>>>>>          performance, so you can disable it by setting the
>>>>>>>          "enableLookups" attribute to "false".  When DNS lookups are
>>>>>>> disabled,
>>>>>>>          request.getRemoteHost() will return the String version of
>>>>>>> the
>>>>>>>          IP address of the remote client.
>>>>>>>     -->
>>>>>>>
>>>>>>>     <!-- Define a non-SSL HTTP/1.1 Connector on port 8080
>>>>>>>     <Connector port="8080" maxHttpHeaderSize="8192"
>>>>>>>                maxThreads="150" minSpareThreads="25"
>>>>>>> maxSpareThreads="75"
>>>>>>>                enableLookups="false" redirectPort="8443"
>>>>>>> acceptCount="100"
>>>>>>>                connectionTimeout="20000"
>>>>>>> disableUploadTimeout="true" />
>>>>>>> -->
>>>>>>>     <!-- Note : To disable connection timeouts, set connectionTimeout
>>>>>>> value
>>>>>>>      to 0 -->
>>>>>>>         <!-- Note : To use gzip compression you could set the
>>>>>>> following
>>>>>>> properties :
>>>>>>>                    compression="on"
>>>>>>> compressionMinSize="2048"
>>>>>>> noCompressionUserAgents="gozilla, traviata"
>>>>>>> compressableMimeType="text/html,text/xml"
>>>>>>>     -->
>>>>>>>
>>>>>>>     <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>>>>>     <!--
>>>>>>>     <Connector port="8443" maxHttpHeaderSize="8192"
>>>>>>>                maxThreads="150" minSpareThreads="25"
>>>>>>> maxSpareThreads="75"
>>>>>>>                enableLookups="false" disableUploadTimeout="true"
>>>>>>>                acceptCount="100" scheme="https" secure="true"
>>>>>>>                clientAuth="false" sslProtocol="TLS" />
>>>>>>>     -->
>>>>>>>
>>>>>>>     <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>>>>>     <Connector port="8009"                enableLookups="false"
>>>>>>> redirectPort="8443"
>>>>>>> protocol="AJP/1.3" />
>>>>>>>
>>>>>>>     <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>>>>>     <!-- See proxy documentation for more information about using
>>>>>>> this.
>>>>>>> -->
>>>>>>>     <!--
>>>>>>>     <Connector port="8082"                maxThreads="150"
>>>>>>> minSpareThreads="25"
>>>>>>> maxSpareThreads="75"
>>>>>>>                enableLookups="false" acceptCount="100"
>>>>>>> connectionTimeout="20000"
>>>>>>>                proxyPort="80" disableUploadTimeout="true" />
>>>>>>>     -->
>>>>>>>
>>>>>>>     <!-- An Engine represents the entry point (within Catalina) that
>>>>>>> processes
>>>>>>>          every request.  The Engine implementation for Tomcat stand
>>>>>>> alone
>>>>>>>          analyzes the HTTP headers included with the request, and
>>>>>>> passes them
>>>>>>>          on to the appropriate Host (virtual host). -->
>>>>>>>
>>>>>>>     <!-- You should set jvmRoute to support load-balancing via AJP
>>>>>>> ie :
>>>>>>> -->
>>>>>>>     <Engine name="Standalone" defaultHost="localhost"
>>>>>>> jvmRoute="Tomcat5A">                       <!-- Define the top level
>>>>>>> container in our container
>>>>>>> hierarchy
>>>>>>>     <Engine name="Catalina" defaultHost="localhost"> -->
>>>>>>>
>>>>>>>       <!-- The request dumper valve dumps useful debugging
>>>>>>> information
>>>>>>> about
>>>>>>>            the request headers and cookies that were received, and
>>>>>>> the
>>>>>>> response
>>>>>>>            headers and cookies that were sent, for all requests
>>>>>>> received by
>>>>>>>            this instance of Tomcat.  If you care only about
>>>>>>> requests to
>>>>>>> a
>>>>>>>            particular virtual host, or a particular application, nest
>>>>>>> this
>>>>>>>            element inside the corresponding <Host> or <Context> entry
>>>>>>> instead.
>>>>>>>
>>>>>>>            For a similar mechanism that is portable to all Servlet
>>>>>>> 2.4
>>>>>>>            containers, check out the "RequestDumperFilter" Filter in
>>>>>>> the
>>>>>>>            example application (the source for this filter may be
>>>>>>> found
>>>>>>> in
>>>>>>>           
>>>>>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>>>>>>
>>>>>>>            Request dumping is disabled by default.  Uncomment the
>>>>>>> following
>>>>>>>            element to enable it. -->
>>>>>>>       <!--
>>>>>>>       <Valve
>>>>>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>>>>>       -->
>>>>>>>
>>>>>>>       <!-- Because this Realm is here, an instance will be shared
>>>>>>> globally -->
>>>>>>>
>>>>>>>       <!-- This Realm uses the UserDatabase configured in the global
>>>>>>> JNDI
>>>>>>>            resources under the key "UserDatabase".  Any edits
>>>>>>>            that are performed against this UserDatabase are
>>>>>>> immediately
>>>>>>>            available for use by the Realm.  -->
>>>>>>>       <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>>>>>              resourceName="UserDatabase"/>
>>>>>>>
>>>>>>>       <!-- Comment out the old realm but leave here for now in
>>>>>>> case we
>>>>>>>            need to go back quickly -->
>>>>>>>       <!--
>>>>>>>       <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>>>>>       -->
>>>>>>>
>>>>>>>       <!-- Replace the above Realm with one of the following to get a
>>>>>>> Realm
>>>>>>>            stored in a database and accessed via JDBC -->
>>>>>>>
>>>>>>>       <!--
>>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
>>>>>>>              driverName="org.gjt.mm.mysql.Driver"
>>>>>>>           connectionURL="jdbc:mysql://localhost/authority"
>>>>>>>          connectionName="test" connectionPassword="test"
>>>>>>>               userTable="users" userNameCol="user_name"
>>>>>>> userCredCol="user_pass"
>>>>>>>           userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>>>       -->
>>>>>>>
>>>>>>>       <!--
>>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
>>>>>>>              driverName="oracle.jdbc.driver.OracleDriver"
>>>>>>>           connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>>>>>          connectionName="scott" connectionPassword="tiger"
>>>>>>>               userTable="users" userNameCol="user_name"
>>>>>>> userCredCol="user_pass"
>>>>>>>           userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>>>       -->
>>>>>>>
>>>>>>>       <!--
>>>>>>>       <Realm  className="org.apache.catalina.realm.JDBCRealm"
>>>>>>>              driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>>>>>           connectionURL="jdbc:odbc:CATALINA"
>>>>>>>               userTable="users" userNameCol="user_name"
>>>>>>> userCredCol="user_pass"
>>>>>>>           userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>>>       -->
>>>>>>>
>>>>>>>       <!-- Define the default virtual host
>>>>>>>            Note: XML Schema validation will not work with Xerces 2.2.
>>>>>>>        -->
>>>>>>>       <Host name="localhost" appBase="webapps"
>>>>>>>        unpackWARs="true" autoDeploy="true"
>>>>>>>        xmlValidation="false" xmlNamespaceAware="false">
>>>>>>>
>>>>>>>         <!-- Defines a cluster for this node,
>>>>>>>              By defining this element, means that every manager
>>>>>>> will be
>>>>>>> changed.
>>>>>>>              So when running a cluster, only make sure that you have
>>>>>>> webapps in there
>>>>>>>              that need to be clustered and remove the other ones.
>>>>>>>              A cluster has the following parameters:
>>>>>>>
>>>>>>>              className = the fully qualified name of the cluster
>>>>>>> class
>>>>>>>
>>>>>>>              clusterName = a descriptive name for your cluster,
>>>>>>> can be
>>>>>>> anything
>>>>>>>
>>>>>>>              mcastAddr = the multicast address, has to be the same
>>>>>>> for
>>>>>>> all the nodes
>>>>>>>
>>>>>>>              mcastPort = the multicast port, has to be the same
>>>>>>> for all
>>>>>>> the nodes
>>>>>>>                           mcastBindAddr = bind the multicast
>>>>>>> socket to
>>>>>>> a specific
>>>>>>> address
>>>>>>>                           mcastTTL = the multicast TTL if you want to
>>>>>>> limit your
>>>>>>> broadcast
>>>>>>>                           mcastSoTimeout = the multicast readtimeout
>>>>>>>              mcastFrequency = the number of milliseconds in between
>>>>>>> sending a "I'm alive" heartbeat
>>>>>>>
>>>>>>>              mcastDropTime = the number a milliseconds before a
>>>>>>> node is
>>>>>>> considered "dead" if no heartbeat is received
>>>>>>>
>>>>>>>              tcpThreadCount = the number of threads to handle
>>>>>>> incoming
>>>>>>> replication requests, optimal would be the same amount of threads as
>>>>>>> nodes
>>>>>>>              tcpListenAddress = the listen address (bind address) for
>>>>>>> TCP cluster request on this host,                                 in
>>>>>>> case of multiple ethernet cards.
>>>>>>>                                 auto means that address becomes
>>>>>>>
>>>>>>> InetAddress.getLocalHost().getHostAddress()
>>>>>>>
>>>>>>>              tcpListenPort = the tcp listen port
>>>>>>>
>>>>>>>              tcpSelectorTimeout = the timeout (ms) for the
>>>>>>> Selector.select() method in case the OS
>>>>>>>                                   has a wakup bug in java.nio. Set
>>>>>>> to 0
>>>>>>> for no timeout
>>>>>>>
>>>>>>>              printToScreen = true means that managers will also print
>>>>>>> to std.out
>>>>>>>
>>>>>>>              expireSessionsOnShutdown = true means that
>>>>>>>              useDirtyFlag = true means that we only replicate a
>>>>>>> session
>>>>>>> after setAttribute,removeAttribute has been called.
>>>>>>>                             false means to replicate the session
>>>>>>> after
>>>>>>> each request.
>>>>>>>                             false means that replication would
>>>>>>> work for
>>>>>>> the following piece of code: (only for SimpleTcpReplicationManager)
>>>>>>>                             <%
>>>>>>>                             HashMap map =
>>>>>>> (HashMap)session.getAttribute("map");
>>>>>>>                             map.put("key","value");
>>>>>>>                             %>
>>>>>>>              replicationMode = can be either 'pooled',
>>>>>>> 'synchronous' or
>>>>>>> 'asynchronous'.
>>>>>>>                                * Pooled means that the replication
>>>>>>> happens using several sockets in a synchronous way. Ie, the data gets
>>>>>>> replicated, then the request return. This is the same as the
>>>>>>> 'synchronous' setting except it uses a pool of sockets, hence it is
>>>>>>> multithreaded. This is the fastest and safest configuration. To use
>>>>>>> this, also increase the nr of tcp threads that you have dealing with
>>>>>>> replication.
>>>>>>>                                * Synchronous means that the thread
>>>>>>> that
>>>>>>> executes the request, is also the
>>>>>>>                                thread the replicates the data to the
>>>>>>> other nodes, and will not return until all
>>>>>>>                                nodes have received the information.
>>>>>>>                                * Asynchronous means that there is a
>>>>>>> specific 'sender' thread for each cluster node,
>>>>>>>                                so the request thread will queue the
>>>>>>> replication request into a "smart" queue,
>>>>>>>                                and then return to the client.
>>>>>>>                                The "smart" queue is a queue where
>>>>>>> when
>>>>>>> a session is added to the queue, and the same session
>>>>>>>                                already exists in the queue from a
>>>>>>> previous request, that session will be replaced
>>>>>>>                                in the queue instead of replicating
>>>>>>> two
>>>>>>> requests. This almost never happens, unless there is a
>>>>>>>                                large network delay.
>>>>>>>         -->                     <!--
>>>>>>>             When configuring for clustering, you also add in a
>>>>>>> valve to
>>>>>>> catch all the requests
>>>>>>>             coming in, at the end of the request, the session may or
>>>>>>> may not be replicated.
>>>>>>>             A session is replicated if and only if all the conditions
>>>>>>> are met:
>>>>>>>             1. useDirtyFlag is true or setAttribute or
>>>>>>> removeAttribute
>>>>>>> has been called AND
>>>>>>>             2. a session exists (has been created)
>>>>>>>             3. the request is not trapped by the "filter" attribute
>>>>>>>
>>>>>>>             The filter attribute is to filter out requests that could
>>>>>>> not modify the session,
>>>>>>>             hence we don't replicate the session after the end of
>>>>>>> this
>>>>>>> request.
>>>>>>>             The filter is negative, ie, anything you put in the
>>>>>>> filter,
>>>>>>> you mean to filter out,
>>>>>>>             ie, no replication will be done on requests that match
>>>>>>> one
>>>>>>> of the filters.
>>>>>>>             The filter attribute is delimited by ;, so you can't
>>>>>>> escape
>>>>>>> out ; even if you wanted to.
>>>>>>>
>>>>>>>             filter=".*\.gif;.*\.js;" means that we will not replicate
>>>>>>> the session after requests with the URI
>>>>>>>             ending with .gif and .js are intercepted.
>>>>>>>                         The deployer element can be used to deploy
>>>>>>> apps cluster
>>>>>>> wide.
>>>>>>>             Currently the deployment only deploys/undeploys to
>>>>>>> working
>>>>>>> members in the cluster
>>>>>>>             so no WARs are copied upons startup of a broken node.
>>>>>>>             The deployer watches a directory (watchDir) for WAR files
>>>>>>> when watchEnabled="true"
>>>>>>>             When a new war file is added the war gets deployed to the
>>>>>>> local instance,
>>>>>>>             and then deployed to the other instances in the cluster.
>>>>>>>             When a war file is deleted from the watchDir the war is
>>>>>>> undeployed locally             and cluster wide
>>>>>>>         -->
>>>>>>>                <Cluster
>>>>>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>>>>>
>>>>>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>>>>>                  expireSessionsOnShutdown="false"
>>>>>>>                  useDirtyFlag="true"
>>>>>>>                  notifyListenersOnReplication="true">
>>>>>>>
>>>>>>>             <Membership
>>>>>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>>>>>                 mcastAddr="228.0.0.4"
>>>>>>>                 mcastPort="45564"
>>>>>>>                 mcastFrequency="500"
>>>>>>>                 mcastDropTime="3000"/>
>>>>>>>
>>>>>>>             <Receiver
>>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>>>>>                 tcpListenAddress="auto"
>>>>>>>                 tcpListenPort="4001"
>>>>>>>                 tcpSelectorTimeout="100"
>>>>>>>                 tcpThreadCount="6"/>
>>>>>>>
>>>>>>>             <Sender
>>>>>>>
>>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>>>>>                 replicationMode="pooled"
>>>>>>>                 ackTimeout="15000"/>
>>>>>>>
>>>>>>>             <Valve
>>>>>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>>>>>
>>>>>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>                                <Deployer
>>>>>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>>>>>                       tempDir="/tmp/war-temp/"
>>>>>>>                       deployDir="/tmp/war-deploy/"
>>>>>>>                       watchDir="/tmp/war-listen/"
>>>>>>>                       watchEnabled="false"/>
>>>>>>>                                   <ClusterListener
>>>>>>> className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
>>>>>>>
>>>>>>>
>>>>>>>         </Cluster>
>>>>>>>
>>>>>>>
>>>>>>>         <!-- Normally, users must authenticate themselves to each web
>>>>>>> app
>>>>>>>              individually.  Uncomment the following entry if you
>>>>>>> would
>>>>>>> like
>>>>>>>              a user to be authenticated the first time they
>>>>>>> encounter a
>>>>>>>              resource protected by a security constraint, and then
>>>>>>> have
>>>>>>> that
>>>>>>>              user identity maintained across *all* web applications
>>>>>>> contained
>>>>>>>              in this virtual host. -->
>>>>>>>         <!--
>>>>>>>         <Valve
>>>>>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>>>>>         -->
>>>>>>>
>>>>>>>         <!-- Access log processes all requests for this virtual host.
>>>>>>> By
>>>>>>>              default, log files are created in the "logs" directory
>>>>>>> relative to
>>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
>>>>>>> different
>>>>>>>              directory with the "directory" attribute.  Specify
>>>>>>> either
>>>>>>> a relative
>>>>>>>              (to $CATALINA_HOME) or absolute path to the desired
>>>>>>> directory.
>>>>>>>         -->
>>>>>>>         <!--
>>>>>>>         <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>>>>>                  directory="logs"  prefix="localhost_access_log."
>>>>>>> suffix=".txt"
>>>>>>>                  pattern="common" resolveHosts="false"/>
>>>>>>>         -->
>>>>>>>
>>>>>>>         <!-- Access log processes all requests for this virtual host.
>>>>>>> By
>>>>>>>              default, log files are created in the "logs" directory
>>>>>>> relative to
>>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
>>>>>>> different
>>>>>>>              directory with the "directory" attribute.  Specify
>>>>>>> either
>>>>>>> a relative
>>>>>>>              (to $CATALINA_HOME) or absolute path to the desired
>>>>>>> directory.
>>>>>>>              This access log implementation is optimized for maximum
>>>>>>> performance,
>>>>>>>              but is hardcoded to support only the "common" and
>>>>>>> "combined" patterns.
>>>>>>>         -->
>>>>>>>         <!--
>>>>>>>         <Valve
>>>>>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>>>>>                  directory="logs"  prefix="localhost_access_log."
>>>>>>> suffix=".txt"
>>>>>>>                  pattern="common" resolveHosts="false"/>
>>>>>>>         -->
>>>>>>>         <!-- Access log processes all requests for this virtual host.
>>>>>>> By
>>>>>>>              default, log files are created in the "logs" directory
>>>>>>> relative to
>>>>>>>              $CATALINA_HOME.  If you wish, you can specify a
>>>>>>> different
>>>>>>>              directory with the "directory" attribute.  Specify
>>>>>>> either
>>>>>>> a relative
>>>>>>>              (to $CATALINA_HOME) or absolute path to the desired
>>>>>>> directory.
>>>>>>>              This access log implementation is optimized for maximum
>>>>>>> performance,
>>>>>>>              but is hardcoded to support only the "common" and
>>>>>>> "combined" patterns.
>>>>>>>
>>>>>>>              This valve use NIO direct Byte Buffer to asynchornously
>>>>>>> store the
>>>>>>>              log.
>>>>>>>         -->
>>>>>>>         <!--
>>>>>>>         <Valve
>>>>>>> className="org.apache.catalina.valves.ByteBufferAccessLogValve"
>>>>>>>                  directory="logs"  prefix="localhost_access_log."
>>>>>>> suffix=".txt"
>>>>>>>                  pattern="common" resolveHosts="false"/>
>>>>>>>         -->
>>>>>>>
>>>>>>>       </Host>
>>>>>>>
>>>>>>>     </Engine>
>>>>>>>
>>>>>>>   </Service>
>>>>>>>
>>>>>>> </Server>
>>>>>>>
>>>>>>>
>>>>>>> workers2.properties
>>>>>>>
>>>>>>> [logger.apache2]
>>>>>>> file="/etc/httpd/conf/logs/error.log"
>>>>>>> level=INFO
>>>>>>> debug=1
>>>>>>>
>>>>>>> # Config settings
>>>>>>> [config]
>>>>>>> file=/etc/httpd/conf/workers2.properties
>>>>>>> debug=0
>>>>>>>
>>>>>>> # Shared memory file settings
>>>>>>> [shm]
>>>>>>> file=/etc/httpd/conf/jk2.shm
>>>>>>> size=100000
>>>>>>>
>>>>>>> # Communcation channel settings for "Tomcat5A"
>>>>>>> [channel.socket:localhost:8009]
>>>>>>> host=localhost
>>>>>>> port=8009
>>>>>>> tomcatId=Tomcat5A
>>>>>>> group=balanced
>>>>>>> lb_factor=1
>>>>>>> route=Tomcat5A
>>>>>>>
>>>>>>>
>>>>>>> # Declare a Tomcat5A worker
>>>>>>> [ajp13:localhost:8009]
>>>>>>> channel=channel.socket:Tomcat5A
>>>>>>>
>>>>>>>
>>>>>>> # Communcation channel settings for "Tomcat5B"
>>>>>>> [channel.socket:localhost:8010]
>>>>>>> host=localhost
>>>>>>> port=8010
>>>>>>> tomcatId=Tomcat5B
>>>>>>> group=balanced
>>>>>>> lb_factor=1
>>>>>>> route=Tomcat5B
>>>>>>>
>>>>>>>
>>>>>>> # Declare a Tomcat5B worker
>>>>>>> [ajp13:localhost:8010]
>>>>>>> channel=channel.socket:Tomcat5B
>>>>>>>
>>>>>>>
>>>>>>> # Communcation channel settings for "Tomcat5C"
>>>>>>> [channel.socket:localhost:8011]
>>>>>>> host=localhost
>>>>>>> port=8011
>>>>>>> tomcatId=Tomcat5C
>>>>>>> group=balanced
>>>>>>> lb_factor=1
>>>>>>> route=Tomcat5C
>>>>>>>
>>>>>>>
>>>>>>> # Declare a Tomcat5C worker
>>>>>>> [ajp13:localhost:8011]
>>>>>>> channel=channel.socket:Tomcat5C
>>>>>>>
>>>>>>> # Load balanced Worker
>>>>>>> [lb:balanced]
>>>>>>> worker=ajp13:localhost:8009
>>>>>>> worker=ajp13:localhost:8010
>>>>>>> worker=ajp13:localhost:8011
>>>>>>> timeout=90
>>>>>>> attempts=3
>>>>>>> recovery=30
>>>>>>> stickySession=0
>>>>>>> noWorkerMsg=Server Busy please retry later.
>>>>>>> noWorkerCodeMsg=503
>>>>>>>
>>>>>>> # URI mappings for the tomcat worker
>>>>>>> # Map the "jsp-examples" web application context to the web server
>>>>>>> URI
>>>>>>> space
>>>>>>> [uri:/jsp-examples/*]
>>>>>>> info= Mapping for jsp-examples context for tomcat
>>>>>>> context=/jsp-examples
>>>>>>> group=balanced
>>>>>>>
>>>>>>> [shm]
>>>>>>> file=/etc/httpd/conf/jk2.shm
>>>>>>> size=1000000
>>>>>>>
>>>>>>> [uri:/servlets-examples/*]
>>>>>>> context=/servlets-examples
>>>>>>> group=balanced
>>>>>>>
>>>>>>> # Define a status worker
>>>>>>> [status:]
>>>>>>>
>>>>>>> # Status URI mapping
>>>>>>> [uri:/jkstatus/*]
>>>>>>> group=status
>>>>>>>
>>>>>>>
>>>>>>> obviously the server.xml files on the other 2 instances of tomcat are
>>>>>>> the same except the ports and jvmRoute have been changed.
>>>>>>>
>>>>>>>
>>>>>>> can anyone see where i am going wrong ?
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Tomcat session replication/cluster

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
Makes sense. Can we please ask you to open a bug for us at
http://issues.apache.org/bugzilla/

thanks
Filip
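
Below is a minimal, self-contained sketch of the behaviour being reported.
It is illustrative code only, not Tomcat source: it assumes the manager
keeps the name returned by getManagerName() (the SimpleTcpCluster excerpt
is quoted further down) and that the method runs again on every context
reload, with the ROOT context contributing an empty name.

    public class ManagerNameDemo {

        // Simplified stand-in for SimpleTcpCluster.getManagerName() when
        // the Cluster element sits at Engine level: the host name is
        // prepended unconditionally to the manager's current name.
        static String getManagerName(String hostName, String managerName) {
            return hostName + managerName;
        }

        public static void main(String[] args) {
            String name = "";  // assumed empty name for the ROOT context
            for (int reload = 0; reload <= 2; reload++) {
                name = getManagerName("host1", name);
                // Prints host1, then host1host1, then host1host1host1,
                // matching the "Context manager doesn't exist" names below.
                System.out.println(name);
            }
        }
    }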


Pid wrote:
> OK,
>
> I'm probably being dense here.
>
> (There's only 1 context in each host, the ROOT context)
> If I take the Host/Context offline in one node and restart it, the logs
> on that node start showing the following:
>
>  SEVERE: Context manager doesn't exist:host1
>
> That's because the Context doesn't exist; it's the same message that
> appears briefly in the logs during a restart, until that particular Host
> is loaded (under normal circumstances).
>
> This much I understand, and it causes no problems for me.
>
>
> With all Hosts available on each node of the cluster, I then update the
> Context on one Host (by adding a new jar, say).  The Context has
> reloadable="true", so it does just that.
>
> Once that context has reloaded, the other nodes start seeing:
>
>  SEVERE: Context manager doesn't exist:host1host1
>
> If I reload the context again (without restarting the server), I see this:
>
>  SEVERE: Context manager doesn't exist:host1host1host1
>
> I could go on, but I think you can see where this is going...
>
>
>
> Peter Rossbach wrote:
>   
>> Hmm,
>>
>> look at o.a.c.cluster.tcp.SimpleTcpCluster
>>
>> L 626ff
>>     private String getManagerName(String name, Manager manager) {
>>         String clusterName = name ;
>>         if(getContainer() instanceof Engine) {
>>             Container context = manager.getContainer() ;
>>             if(context != null && context instanceof Context) {
>>                 Container host = ((Context)context).getParent();
>>                 if(host != null && host instanceof Host)
>>                     clusterName = host.getName()  + name ;
>>             }
>>         }
>>         return clusterName;
>>     }
>>
>>
>> You can see that we append "hostname + context" when the cluster's
>> container is the Engine.
>>
>> Peter
>>
>>
>>
>> On 22.06.2006 at 10:32, Pid wrote:
>>
>>     
>>> Filip Hanik - Dev Lists wrote:
>>>       
>>>> If the cluster is put in the engine element, the context names are
>>>> prefixed with the engine name, since you can have multiple contexts
>>>> with the same name in different hosts.
>>>> When reloading a context, you'll get these errors because the context
>>>> is not available during the reload.
>>>> This will be fixed with the new Apache Tribes module.
>>>> Filip
>>>>         
>>> I understand that the context is not available during reload. After
>>> the reload has completed, though, the error persists.
>>>
>>> My Engine name is Catalina; it looks like the cluster isn't sending the
>>> engine name, but the context name appended to itself.
>>>
>>> You're implying that it should send Catalina+website1, but it's sending
>>> website1+website1 instead.
>>>
>>> After startup:
>>> Node1 sees Node2 send "website2"
>>> Node2 sees Node1 send "website1"
>>>
>>> After the context on Node1 has finished reloading:
>>> Node1 sees Node2 send "website2"
>>> Node2 sees Node1 send "website1website1"
>>>
>>> I think that the context name is being appended to itself.
>>>
>>>
>>>       
>>>> Pid wrote:
>>>>         
>>>>> I'm seeing an issue on 5.5.17 with a 2 node cluster config.
>>>>> When a context is reloaded, it sends the context node name incorrectly
>>>>> to the cluster.
>>>>> E.g. context is called "website1"
>>>>>
>>>>> SEVERE: Context manager doesn't exist:website1website1
>>>>>
>>>>> The config I'm using is exactly the same as the default from
>>>>> server.xml,
>>>>> except the cluster is defined in Engine, rather than each Host.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Filip Hanik - Dev Lists wrote:
>>>>>
>>>>>           
>>>>>> also, use Tomcat 5.5.17
>>>>>>
>>>>>> Sean O'Reilly wrote:
>>>>>>
>>>>>>> [snip: Sean's full message, with server.xml and
>>>>>>> workers2.properties, quoted in full earlier in the thread]


-- 
Filip Hanik

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Tomcat session replication/cluster

Posted by Pid <p...@pidster.com>.
OK,

I'm probably being dense here.

(There's only 1 context in each host, the ROOT context)
If I take the Host/Context offline in one node and restart it, the logs
on that node start showing the following:

 SEVERE: Context manager doesn't exist:host1

That's because the Context doesn't exist; it's the same message that
appears briefly in the logs during a restart, until that particular Host
is loaded (under normal circumstances).

This much I understand, and it causes no problems for me.


With all Hosts available on each node of the cluster, I then update the
Context on one Host (by adding a new jar, say).  The Context has
reloadable="true", so it does just that.

Once that context has reloaded, the other nodes start seeing:

 SEVERE: Context manager doesn't exist:host1host1

If I reload the context again (without restarting the server), I see this:

 SEVERE: Context manager doesn't exist:host1host1host1

I could go on, but I think you can see where this is going...



Peter Rossbach wrote:
> Hmm,
> 
> look at o.a.c.cluster.tcp.SimpleTcpCluster
> 
> L 626ff
>     private String getManagerName(String name, Manager manager) {
>         String clusterName = name ;
>         if(getContainer() instanceof Engine) {
>             Container context = manager.getContainer() ;
>             if(context != null && context instanceof Context) {
>                 Container host = ((Context)context).getParent();
>                 if(host != null && host instanceof Host)
>                     clusterName = host.getName()  + name ;
>             }
>         }
>         return clusterName;
>     }
> 
> 
> You see, when the Cluster sits on the Engine we use "hostname +
> contextname" as the cluster-wide manager name.
> 
> Peter
> 
> 
> 
> On 22.06.2006, at 10:32, Pid wrote:
> 
>>
>>
>> Filip Hanik - Dev Lists wrote:
>>> if the cluster is put in the engine element, the context names are
>>> prefixed with the engine name, since you can have multiple contexts with
>>> the same name in different host
>>> when reloading a context, you'll get these errors cause the context is
>>> not available during the reload
>>> this will be fixed with the new Apache Tribes module
>>> Filip
>>
>> I understand that the context is not available during reload. After
>> reload has completed, the error persists.
>>
>> My Engine name is Catalina, but it looks like the cluster isn't sending
>> the engine name; it's sending the context name appended to itself.
>>
>> You're implying that it should send Catalina+website1, but it's sending
>> website1+website1 instead.
>>
>> After startup:
>> Node1 sees Node2 send "website2"
>> Node2 sees Node1 send "website1"
>>
>> After context on Node1 is finished reloading:
>> Node1 sees Node2 send "website2"
>> Node2 sees Node1 send "website1website1"
>>
>> I think that the context name is being appended to itself.
>>
>>
>>> Pid wrote:
>>>> I'm seeing an issue on 5.5.17 with a 2 node cluster config.
>>>> When a context is reloaded, it sends the context node name incorrectly
>>>> to the cluster.
>>>> E.g. context is called "website1"
>>>>
>>>> SEVERE: Context manager doesn't exist:website1website1
>>>>
>>>> The config I'm using is exactly the same as the default from
>>>> server.xml,
>>>> except the cluster is defined in Engine, rather than each Host.
>>>> [Filip's reply and Sean's original server.xml / workers2.properties
>>>> snipped]


Re: Tomcat session replication/cluster

Posted by Peter Rossbach <pr...@objektpark.de>.
Hmm,

look at o.a.c.cluster.tcp.SimpleTcpCluster

L 626ff
     private String getManagerName(String name, Manager manager) {
         String clusterName = name ;
         if(getContainer() instanceof Engine) {
             Container context = manager.getContainer() ;
             if(context != null && context instanceof Context) {
                 Container host = ((Context)context).getParent();
                 if(host != null && host instanceof Host)
                     clusterName = host.getName()  + name ;
             }
         }
         return clusterName;
     }


You see, when the Cluster sits on the Engine we use "hostname +
contextname" as the cluster-wide manager name.
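
A guard along these lines (purely a sketch on my part, not the actual
Tomcat code) would stop the prefix stacking up if a manager is
re-registered with an already-prefixed name on reload:

     private String getManagerName(String name, Manager manager) {
         String clusterName = name;
         if (getContainer() instanceof Engine) {
             Container context = manager.getContainer();
             if (context instanceof Context) {
                 Container host = context.getParent();
                 // Hypothetical guard: only prepend the host name when the
                 // incoming name is not already prefixed with it, so a
                 // reload cannot stack the prefix a second time.
                 if (host instanceof Host && !name.startsWith(host.getName()))
                     clusterName = host.getName() + name;
             }
         }
         return clusterName;
     }

(Whether startsWith() is the right test depends on how the reload path
builds the name, of course.)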

Peter



On 22.06.2006, at 10:32, Pid wrote:

> [Pid's message and the earlier quoted thread, including the full
> server.xml / workers2.properties, snipped]


Re: Tomcat session replication/cluster

Posted by Pid <p...@pidster.com>.

Filip Hanik - Dev Lists wrote:
> If the cluster is put in the Engine element, the context names are
> prefixed with the engine name, since you can have multiple contexts with
> the same name in different Hosts.
> When reloading a context, you'll get these errors because the context is
> not available during the reload.
> This will be fixed with the new Apache Tribes module.
> Filip

I understand that the context is not available during reload. After
reload has completed, the error persists.

My Engine name is Catalina. It looks like the cluster isn't sending the
engine name, but the context name appended to itself.

You're implying that it should send Catalina+website1, but it's sending
website1+website1 instead.

After startup:
Node1 sees Node2 send "website2"
Node2 sees Node1 send "website1"

After the context on Node1 has finished reloading:
Node1 sees Node2 send "website2"
Node2 sees Node1 send "website1website1"

I think that the context name is being appended to itself.
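
If it helps anyone following along: the default server.xml puts the
Cluster element inside each Host rather than the Engine, and that
placement doesn't seem to trigger this. A trimmed sketch for comparison
(attribute values are only illustrative, taken from the default config
quoted earlier in the thread):

  <Engine name="Catalina" defaultHost="localhost" jvmRoute="Tomcat5A">
    <Host name="localhost" appBase="webapps"
          unpackWARs="true" autoDeploy="true">
      <!-- Cluster per Host, as in the default server.xml; with this
           placement the manager name presumably doesn't need the
           engine-name prefix -->
      <Cluster
         className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true"/>
    </Host>
  </Engine>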



---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Tomcat session replication/cluster

Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
If the cluster is put in the Engine element, the context names are
prefixed with the engine name, since you can have multiple contexts with
the same name in different Hosts.
When reloading a context, you'll get these errors because the context is
not available during the reload.
This will be fixed with the new Apache Tribes module.
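
To illustrate with a trimmed sketch (the host and context names here are
just examples): given

  <Engine name="Catalina" defaultHost="localhost">
    <Cluster
       className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
       managerClassName="org.apache.catalina.cluster.session.DeltaManager"/>
    <Host name="site-a" appBase="webapps-a"/>
    <Host name="site-b" appBase="webapps-b"/>
  </Engine>

both hosts could each deploy a context called "website1", so the
replicated manager name has to be qualified with more than the bare
context name to stay unique across the cluster.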
Filip

Pid wrote:
> I'm seeing an issue on 5.5.17 with a 2 node cluster config.
> When a context is reloaded, it sends the context node name incorrectly
> to the cluster.
> E.g. context is called "website1"
>
> SEVERE: Context manager doesn't exist:website1website1
>
> The config I'm using is exactly the same as the default from server.xml,
> except the cluster is defined in Engine, rather than each Host.


-- 


Filip Hanik

---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org