Posted to users@tomcat.apache.org by Mark Eggers <it...@yahoo.com> on 2010/08/08 01:11:13 UTC

Clustering within a single system

I apologize for the wall of text.

I'm working through the clustering and farm deployment documentation
for Tomcat 6 / Tomcat 7. To that end I've set up a 3-node cluster,
load balanced by an Apache web server, all on a single system.

The environment and configuration particulars are at the end of the mail 
message.

Problem summary:

The application is a quick sample that does the following:

1. Form-based login
2. Some personal information that can be customized (session variable)
3. A random pet generated from a pet factory and a properties file
   (session variable; see the sketch after this list)
4. An exit message on logout containing the chosen pet and the
   personalization info
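
Since the application is marked <distributable/>, anything the servlets
put into the session must be Serializable, or the cluster manager cannot
replicate it. A minimal sketch of the session-variable side of this
(hypothetical Pet class and values, not the actual RPets source):

// Sketch only: shows a Serializable session attribute being set, which is
// what makes the value eligible for replication to the other nodes.
import java.io.IOException;
import java.io.Serializable;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical session payload; implementing Serializable is the point.
class Pet implements Serializable {
    private final String name;
    Pet(String name) { this.name = name; }
    public String getName() { return name; }
}

public class Randomize extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        Pet pet = new Pet("gecko"); // stand-in for the pet factory lookup
        HttpSession session = req.getSession();
        // setAttribute marks the session as dirty for delta replication
        session.setAttribute("pet", pet);
        resp.getWriter().println("Your pet: " + pet.getName());
    }
}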

If I pre-deploy the distributable application to all three nodes, I
get reasonable log entries, and the application works as expected.

If I start up Tomcat first and then deploy the application to the
three nodes, I get error messages in the logs, and the application is
not available until the cluster's state-transfer timeout has expired
(60 seconds by default). However, once this timeout has passed,
clustering seems to work.
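
That 60-second window is the session manager's state-transfer timeout,
which is tunable on the cluster's Manager element. A minimal sketch using
the stock DeltaManager attribute (changing it only shortens or lengthens
the wait; it does not explain the failed state transfer):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <!-- stateTransferTimeout is in seconds; 60 is the default -->
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"
           stateTransferTimeout="120"/>
</Cluster>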

I observe similar problems when using the farm deployer. The application
does not seem to be available until it is deployed across the 3 nodes
and the cluster timeout has passed.

I have not tried these tests in a controlled fashion using Tomcat
built from Subversion.

So, how do I remove the errors from the logs?

. . . . just my two cents.

/mde/

Sequence of events which works cleanly:

1. Copy the RPets.war file to each of the three $CATALINA_BASE/webapps
   directories
2. Start up each Tomcat instance - wait 5 seconds between startups so
   each node's Receiver can auto-select a free cluster port (these
   could be configured explicitly; see the sketch after this list)
3. Test the application
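
The auto-selection comes from the Receiver's autoBind range (port 4000
upward by default, which is why the nodes end up on 4000-4002). A minimal
sketch of pinning each node to a known port instead, using the stock
NioReceiver inside the <Cluster> element; the port value would change per
node, and autoBind="0" turns the scanning off:

<Channel className="org.apache.catalina.tribes.group.GroupChannel">
  <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
            address="127.0.0.1"
            port="4001"
            autoBind="0"
            selectorTimeout="5000"
            maxThreads="6"/>
</Channel>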

Log samples:

1. Cluster membership adds normally (logs from one node)

INFO: Receiver Server Socket bound to:/127.0.0.1:4001
INFO: Replication member added:
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4000,
  {127, 0, 0, 1},4000,
  alive=6538,id={67 -53 75 3 -46 11 67 34 -67 -33 -126 -125 107 115 76 -17 },
  payload={}, command={}, domain={}, ]
INFO: Replication member added:
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4002,
  {127, 0, 0, 1},4002,
  alive=1013,id={62 -72 -97 73 10 97 74 -30 -109 -12 93 -125 114 -109 4 24 },
  payload={}, command={}, domain={}, ]

2. Session replication information received normally (logs from one
   node)

INFO: Deploying configuration descriptor RPets.xml
INFO: Register manager /RPets to cluster element Host with name localhost
INFO: Starting clustering manager at /RPets
WARNING: Manager [/RPets], requesting session state from
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4000,
  {127, 0, 0, 1},4000, alive=9540,
  id={67 -53 75 3 -46 11 67 34 -67 -33 -126 -125 107 115 76 -17 },
  payload={}, command={}, domain={}, ].
  This operation will timeout if no session state has been received
  within 60 seconds.
INFO: Manager [/RPets]; session state send at 8/7/10 11:30 AM received
  in 275 ms.

Operation:

The application works as expected. I can turn off the node that is
being accessed, and after a brief pause the web browser will continue
to display the results. All session variables are duplicated as
expected.

I do notice the following on shutdown:

SEVERE: The web application [/RPets] appears to have started a thread
  named [pool-1-thread-1] but has failed to stop it.
  This is very likely to create a memory leak.

This error is not present when running on a Tomcat instance without
clustering enabled (web application is marked distributable, but no
<Cluster> element is present).
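
If the thread belonged to the application itself, the usual fix would be
to stop it in the context listener's contextDestroyed(); the
pool-1-thread-1 name matches the JDK's default executor thread factory. A
minimal sketch, assuming a hypothetical ExecutorService owned by
PetFactoryListener. Since the warning only appears with the <Cluster>
element present, though, the thread may belong to the cluster code rather
than to the webapp, in which case no application change will remove it.

// Hypothetical sketch: an application-owned executor stopped on shutdown.
package rpets.utils;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class PetFactoryListener implements ServletContextListener {
    // Executors' default thread factory names threads pool-1-thread-1, ...
    private ExecutorService pool;

    public void contextInitialized(ServletContextEvent sce) {
        pool = Executors.newSingleThreadExecutor();
        // ... read pets.properties and build the pet factory here ...
    }

    public void contextDestroyed(ServletContextEvent sce) {
        if (pool != null) {
            pool.shutdownNow(); // interrupt workers so Tomcat can unload cleanly
        }
    }
}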

Sequence of events which causes the errors:

1. Start up each Tomcat instance - wait 5 seconds between startups so
   ports can be chosen for clustering (I could configure these)
2. Copy the RPets.war file to each of the three $CATALINA_BASE/webapps
   directories
3. Test the application

Log samples:

1. Cluster membership adds normally (logs from one node)

INFO: Replication member added:
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4000,
  {127, 0, 0, 1},4000,
  alive=6538,id={-33 84 107 57 -77 93 76 39 -85 92 -37 -74 71 26 38 -35 },
  payload={}, command={}, domain={}, ]
INFO: Replication member added:
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4002,
  {127, 0, 0, 1},4002,
  alive=1030,id={-121 38 109 -46 -48 66 77 -80 -70 -101 127 80 5 58 119 -70 },
  payload={}, command={}, domain={}, ]

2. After copying the war file to $CATALINA_BASE/webapps (copies made
   back to back, with no delay for deployment to finish on each node)

INFO: Deploying web application archive RPets.war
INFO: Register manager /RPets to cluster element Host with name localhost
INFO: Starting clustering manager at /RPets
WARNING: Manager [/RPets],
  requesting session state from
  org.apache.catalina.tribes.membership.MemberImpl[tcp://{127, 0, 0, 1}:4000,
  {127, 0, 0, 1},4000, alive=661430,
  id={-33 84 107 57 -77 93 76 39 -85 92 -37 -74 71 26 38 -35 },
  payload={}, command={}, domain={}, ].
  This operation will timeout if no session state has been received
  within 60 seconds.
SEVERE: Manager [/RPets]: No session state send at 8/7/10 12:41 PM received,
  timing out after 60,034 ms.
WARNING: Manager [/RPets]: Drop message SESSION-GET-ALL
  inside GET_ALL_SESSIONS sync phase start date 8/7/10 12:41 PM
  message date 12/31/69 4:00 PM

(What is the message date of 12/31/69 4:00 PM? That is the Unix epoch
rendered in US Pacific time, so the message apparently carried a
timestamp of zero.)

Operation:

The application works as expected once the cluster timeout is
reached. I can turn off the node that is being accessed, and after a
brief pause the web browser will continue to display the results. All
session variables are duplicated as expected.

The particulars:

1. OS - Fedora 13 2.6.33.6-147.2.4.fc13.i686
   a. firewall modified to permit multicast
      -A INPUT -d 224.0.0.0/4 -m state --state NEW -j ACCEPT
   b. multicast route added (see the sanity check after this list)
      ip route add to multicast 224.0.0.0/4 dev eth0
   c. host name added to 127.0.0.1 in /etc/hosts
2. Java - JRE/JDK 1.6.0_21 from Sun/Oracle
3. Tomcat - 6.0.29 from tomcat.apache.org
   a. Using 1 $CATALINA_HOME and 3 $CATALINA_BASE instances
   b. libtcnative built, installed, and recognized
4. Distributable web application (RPets - for Random Pets)
5. Apache 2.2.15 (packaged by Fedora)
6. mod_jk 1.2.30 built from source
7. NetBeans 6.8 development environment (hence the path="/RPets" in
   context.xml)
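
For reference, the stock cluster membership service multicasts on
228.0.0.4:45564, which falls inside the 224.0.0.0/4 range opened in the
firewall above. A quick sanity check that the route and group membership
took effect, using the same iproute2 tooling:

# confirm the multicast route exists and eth0 has joined multicast groups
ip route show to 224.0.0.0/4
ip maddr show dev eth0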

The configuration files:

1. Typical server.xml file without farm deployment. Each server.xml
   file has the following ports changed: shutdown, HTTP/1.1 and
   redirect, AJP and redirect.

<?xml version='1.0' encoding='utf-8'?>
<Server port="8015" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.core.AprLifecycleListener"
            SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
  <Listener
    className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <GlobalNamingResources>
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>
  <Service name="Catalina">
    <Connector port="8090" protocol="HTTP/1.1" 
               connectionTimeout="20000" 
               redirectPort="8453"/>
    <Connector port="8019" protocol="AJP/1.3" redirectPort="8453"/>
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="deimos">
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase"/>
      <Host name="localhost"  appBase="webapps"
            unpackWARs="true" autoDeploy="true"
            xmlValidation="false" xmlNamespaceAware="false">
        <!-- Cluster is declared here (Host level) because farm
             deployment doesn't work at the Engine level -->
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
        <Valve className="org.apache.catalina.valves.AccessLogValve"
           directory="logs" prefix="localhost_access_log."
           suffix=".txt" pattern="common" resolveHosts="false"/>
      </Host>
    </Engine>
  </Service>
</Server>

2. Farm deployment modification shown below:

    <!-- master node -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
      <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/home/myhome/Apache/bhosts/tc-06/deimos-host/temp-dir/"
            deployDir="/home/myhome/Apache/bhosts/tc-06/deimos-host/webapps/"
            watchDir="/home/myhome/Apache/bhosts/tc-06/deimos-host/watch-dir/"
            watchEnabled="true"/>
    </Cluster>

    <!-- slave nodes -->
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
      <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/home/myhome/Apache/bhosts/tc-06/mars-host/temp-dir/"
            deployDir="/home/myhome/Apache/bhosts/tc-06/mars-host/webapps/"
            watchDir="/home/myhome/Apache/bhosts/tc-06/mars-host/watch-dir/"
            watchEnabled="false"/>
    </Cluster>
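
With watchEnabled="true" on the master only, deploying the application
reduces to dropping the war into the master's watch directory; the
FarmWarDeployer then pushes it to the deployDir of each cluster member. A
usage sketch with the paths above:

# on the master (deimos) node only
cp RPets.war /home/myhome/Apache/bhosts/tc-06/deimos-host/watch-dir/
# the deployer notices the new war on its next scan of watchDir and
# distributes it to the other nodes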

3. startup.sh, shutdown.sh, and setenv.sh files for Tomcat. The directory for
   $CATALINA_BASE is changed per node. The JMX port is changed per
   node.

#!/bin/bash
export CATALINA_BASE=/home/myhome/Apache/bhosts/tc-06/deimos-host
export CATALINA_HOME=/home/myhome/Apache/apache-tomcat-6.0.29
$CATALINA_HOME/bin/startup.sh

#!/bin/bash
export CATALINA_BASE=/home/myhome/Apache/bhosts/tc-06/deimos-host
export CATALINA_HOME=/home/myhome/Apache/apache-tomcat-6.0.29
$CATALINA_HOME/bin/shutdown.sh

#!/bin/bash
export CATALINA_OPTS="-Djava.library.path=/home/myhome/Apache/apache-tomcat-6.0.29/bin/libs \
 -Dcom.sun.management.jmxremote \
 -Dcom.sun.management.jmxremote.port=9004 \
 -Dcom.sun.management.jmxremote.ssl=false \
 -Dcom.sun.management.jmxremote.authenticate=false"
export JAVA_OPTS="-Dlog4j.home=$CATALINA_BASE/logs"

4. web.xml and context.xml for the test application

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                      http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <distributable/>
    <context-param>
        <description>Properties file for the PetFactory</description>
        <param-name>properties</param-name>
        <param-value>pets.properties</param-value>
    </context-param>
    <listener>
        <description>ServletContextListener</description>
        <listener-class>rpets.utils.PetFactoryListener</listener-class>
    </listener>
    <servlet>
        <servlet-name>Personalize</servlet-name>
        <servlet-class>rpets.controller.Personalize</servlet-class>
    </servlet>
    <servlet>
        <servlet-name>Leave</servlet-name>
        <servlet-class>rpets.controller.Leave</servlet-class>
    </servlet>
    <servlet>
        <servlet-name>Randomize</servlet-name>
        <servlet-class>rpets.controller.Randomize</servlet-class>
    </servlet>
    <servlet>
        <servlet-name>Finalize</servlet-name>
        <servlet-class>rpets.controller.Finalize</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>Personalize</servlet-name>
        <url-pattern>/protected/Personalize</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Leave</servlet-name>
        <url-pattern>/Leave</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Randomize</servlet-name>
        <url-pattern>/protected/Randomize</url-pattern>
    </servlet-mapping>
    <servlet-mapping>
        <servlet-name>Finalize</servlet-name>
        <url-pattern>/protected/Finalize</url-pattern>
    </servlet-mapping>
    <session-config>
        <session-timeout>30</session-timeout>
    </session-config>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <security-constraint>
        <display-name>User</display-name>
        <web-resource-collection>
            <web-resource-name>RandomPets</web-resource-name>
            <description>generate a random pet</description>
            <url-pattern>/protected/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <description>basic user of the web site</description>
            <role-name>user</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>FORM</auth-method>
        <realm-name>Random Pets</realm-name>
        <form-login-config>
            <form-login-page>/login.html</form-login-page>
            <form-error-page>/errorlogin.html</form-error-page>
        </form-login-config>
    </login-config>
    <security-role>
        <description>basic user of web site</description>
        <role-name>user</role-name>
    </security-role>
</web-app>

<?xml version="1.0" encoding="UTF-8"?>
<Context antiJARLocking="true" path="/RPets">
    <Resource
        name="jdbc/auth"
        description="Pet authentication"
        type="javax.sql.DataSource"
        auth="Container"
        driverClassName="com.mysql.jdbc.Driver"
        maxActive="10" maxIdle="3"
        maxWait="10000"
        password="******"
        url="jdbc:mysql://localhost/petauth"
        validationQuery="SELECT 1"
        username="******"/>
    <Realm className="org.apache.catalina.realm.DataSourceRealm"
           userTable="users"
           userNameCol="username"
           userCredCol="password"
           userRoleTable="roles"
           roleNameCol="rolename"
           localDataSource="true"
           dataSourceName="jdbc/auth"/>
</Context>

5. Apache configuration files:

# workers.properties
worker.list=jk-status,jk-manager,lb,deimos,mars,phobos

worker.jk-status.type=status
worker.jk-status.read_only=true
worker.jk-manager.type=status

worker.template.type=ajp13
worker.template.host=[my-ip-address]
worker.template.socket_connect_timeout=5000
worker.template.socket_keepalive=true
worker.template.ping_mode=A
worker.template.ping_timeout=10000
worker.template.connection_pool_minsize=0
worker.template.connection_pool_timeout=600
worker.template.reply_timeout=300000
worker.template.recovery_options=3

worker.deimos.reference=worker.template
worker.deimos.port=8019

worker.mars.reference=worker.template
worker.mars.port=8029

worker.phobos.reference=worker.template
worker.phobos.port=8039

worker.lb.type=lb
worker.lb.error_escalation_time=0
worker.lb.max_reply_timeouts=10
worker.lb.balance_workers=deimos,mars,phobos

# Local uriworkermap.properties file
/examples=lb
/examples/*=lb
/docs=lb
/docs/*=lb
/RPets=lb
/RPets/*=lb

# This will be included in the main httpd.conf file
LoadModule jk_module modules/mod_jk.so
<IfModule jk_module>
    JkWorkersFile conf.d/workers.properties
    JkLogFile logs/mod_jk.log
    JkLogLevel info
    JkShmFile /var/run/httpd/mod_jk.shm
    JkOptions +RejectUnsafeURI
    JkWatchdogInterval 60
    <Location /jk-status>
        JkMount jk-status
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
    <Location /jk-manager>
        JkMount jk-manager
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>
    JkMountFile conf.d/uriworkermap.properties
</IfModule>
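
With this in place, the balancer and individual worker states can be
inspected from the machine itself (the Location blocks above restrict
access to 127.0.0.1):

# shows the lb worker plus the deimos, mars and phobos members
curl http://localhost/jk-status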
