Posted to users@tomcat.apache.org by Dilan Kelanibandara <di...@beyondm.net> on 2006/06/18 06:22:43 UTC
tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Hello ,
I am getting an OutOfMemoryError continuously when starting up the two cluster
nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4). It had been working
fine for about three weeks. This error occurred once before, and when I
restarted Tomcat, it worked.
Following is the part of catalina.out relevant to that error for node 1.
============================================================================
INFO: Start ClusterSender at cluster Catalina:type=Cluster,host=localhost
with name Catalina:type=ClusterSender,host=localhost
Jun 17, 2006 8:44:15 PM org.apache.catalina.cluster.mcast.McastService start
INFO: Sleeping for 2000 milliseconds to establish cluster membership
Exception in thread "Cluster-MembershipReceiver" java.lang.OutOfMemoryError: Java heap space
Jun 17, 2006 8:44:17 PM org.apache.catalina.cluster.mcast.McastService
registerMBean
INFO: membership mbean registered
(Catalina:type=ClusterMembership,host=localhost)
Jun 17, 2006 8:44:17 PM org.apache.catalina.cluster.deploy.FarmWarDeployer
start
INFO: Cluster FarmWarDeployer started.
Jun 17, 2006 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
start
INFO: Register manager /StockTradingServer to cluster element Host with name localhost
Jun 17, 2006 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager start
INFO: Starting clustering manager at /StockTradingServer
Jun 17, 2006 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager getAllClusterSessions
INFO: Manager [/StockTradingServer]: skipping state transfer. No members active in cluster group.
============================================================================
Node 2's startup log is as follows:
============================================================================
INFO: Cluster is about to start
Jun 17, 2006 8:53:00 PM
org.apache.catalina.cluster.tcp.ReplicationTransmitter start
INFO: Start ClusterSender at cluster Catalina:type=Cluster,host=localhost
with name Catalina:type=ClusterSender,host=localhost
Jun 17, 2006 8:53:00 PM org.apache.catalina.cluster.mcast.McastService start
INFO: Sleeping for 2000 milliseconds to establish cluster membership
Exception in thread "Cluster-MembershipReceiver" java.lang.OutOfMemoryError: Java heap space
Jun 17, 2006 8:53:02 PM org.apache.catalina.cluster.mcast.McastService
registerMBean
INFO: membership mbean registered
(Catalina:type=ClusterMembership,host=localhost)
Jun 17, 2006 8:53:02 PM org.apache.catalina.cluster.deploy.FarmWarDeployer
start
INFO: Cluster FarmWarDeployer started.
Jun 17, 2006 8:53:04 PM org.apache.catalina.cluster.session.DeltaManager
start
INFO: Register manager /StockTradingServer to cluster element Host with name localhost
Jun 17, 2006 8:53:04 PM org.apache.catalina.cluster.session.DeltaManager start
============================================================================
Anyway, my cluster was working fine for three weeks and then started to give
this error on startup of both nodes.
I have an IBM HTTP Server with the JK connector for load balancing, and that
load comes to my Tomcat cluster.
Following is the server.xml file used by both servers.
============================================================================
<!-- Example Server Configuration File -->
<!-- Note that component elements are nested corresponding to their
parent-child relationships with each other -->
<!-- A "Server" is a singleton element that represents the entire JVM,
which may contain one or more "Service" instances. The Server
listens for a shutdown command on the indicated port.
Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" or "Loggers" at this level.
-->
<Server port="8005" shutdown="SHUTDOWN">
<!-- Comment these entries out to disable JMX MBeans support used for the
     administration web application -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
<!-- Global JNDI resources -->
<GlobalNamingResources>
<!-- Test entry for demonstration purposes -->
<Environment name="simpleValue" type="java.lang.Integer" value="30"/>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users -->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" (and therefore the web applications visible
within that Container). Normally, that Container is an "Engine",
but this is not required.
Note: A "Service" is not itself a "Container", so you may not
define subcomponents such as "Valves" or "Loggers" at this level.
-->
<!-- Define the Tomcat Stand-Alone Service -->
<Service name="Catalina">
<!-- A "Connector" represents an endpoint by which requests are received
     and responses are returned. Each Connector passes requests on to the
     associated "Container" (normally an Engine) for processing.

     By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
     You can also enable an SSL HTTP/1.1 Connector on port 8443 by
     following the instructions below and uncommenting the second
     Connector entry. SSL support requires the following steps (see the
     SSL Config HOWTO in the Tomcat 5 documentation bundle for more
     detailed instructions):
     * If your JDK is version 1.3 or prior, download and install
       JSSE 1.0.2 or later, and put the JAR files into
       "$JAVA_HOME/jre/lib/ext".
     * Execute:
         %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
         $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA  (Unix)
       with a password value of "changeit" for both the certificate and
       the keystore itself.

     By default, DNS lookups are enabled when a web application calls
     request.getRemoteHost(). This can have an adverse impact on
     performance, so you can disable it by setting the "enableLookups"
     attribute to "false". When DNS lookups are disabled,
     request.getRemoteHost() will return the String version of the
     IP address of the remote client.
-->
<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8088" maxHttpHeaderSize="8192"
maxThreads="300" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" redirectPort="8443" acceptCount="100"
connectionTimeout="20000" disableUploadTimeout="true" />
<!-- Note : To disable connection timeouts, set connectionTimeout value
to 0 -->
<!-- Note : To use gzip compression you could set the following
properties :
compression="on"
compressionMinSize="2048"
noCompressionUserAgents="gozilla, traviata"
compressableMimeType="text/html,text/xml"
-->
<!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
<!--
<Connector port="8443" maxHttpHeaderSize="8192"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" disableUploadTimeout="true"
acceptCount="100" scheme="https" secure="true"
clientAuth="false" sslProtocol="TLS" />
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009"
enableLookups="false" redirectPort="8443" protocol="AJP/1.3"
/>
<!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
<!-- See proxy documentation for more information about using this. -->
<!--
<Connector port="8082"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" acceptCount="100"
connectionTimeout="20000"
proxyPort="80" disableUploadTimeout="true" />
-->
<!-- An Engine represents the entry point (within Catalina) that
     processes every request. The Engine implementation for Tomcat
     stand-alone analyzes the HTTP headers included with the request,
     and passes them on to the appropriate Host (virtual host). -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
-->
<!-- Define the top level container in our container hierarchy -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
<!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
<!-- The request dumper valve dumps useful debugging information about
     the request headers and cookies that were received, and the response
     headers and cookies that were sent, for all requests received by
     this instance of Tomcat. If you care only about requests to a
     particular virtual host, or a particular application, nest this
     element inside the corresponding <Host> or <Context> entry instead.

     For a similar mechanism that is portable to all Servlet 2.4
     containers, check out the "RequestDumperFilter" Filter in the
     example application (the source for this filter may be found in
     "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").

     Request dumping is disabled by default. Uncomment the following
     element to enable it. -->
<!--
<Valve className="org.apache.catalina.valves.RequestDumperValve"/>
-->
<!-- Because this Realm is here, an instance will be shared globally -->
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
<!-- Comment out the old realm but leave here for now in case we
need to go back quickly -->
<!--
<Realm className="org.apache.catalina.realm.MemoryRealm" />
-->
<!-- Replace the above Realm with one of the following to get a Realm
stored in a database and accessed via JDBC -->
<Realm className="org.apache.catalina.realm.JDBCRealm"
driverName="org.gjt.mm.mysql.Driver"
connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
connectionName="kutila" connectionPassword="kutila"
userTable="users" userNameCol="user_name"
userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
<!--
<Realm className="org.apache.catalina.realm.JDBCRealm"
driverName="oracle.jdbc.driver.OracleDriver"
connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
connectionName="scott" connectionPassword="tiger"
userTable="users" userNameCol="user_name"
userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
-->
<!--
<Realm className="org.apache.catalina.realm.JDBCRealm"
driverName="sun.jdbc.odbc.JdbcOdbcDriver"
connectionURL="jdbc:odbc:CATALINA"
userTable="users" userNameCol="user_name"
userCredCol="user_pass"
userRoleTable="user_roles" roleNameCol="role_name" />
-->
<!-- Define the default virtual host
Note: XML Schema validation will not work with Xerces 2.2.
-->
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true"
xmlValidation="false" xmlNamespaceAware="false">
<!-- Defines a cluster for this node.
     By defining this element, every manager will be changed.
     So when running a cluster, make sure that you only have webapps in
     there that need to be clustered, and remove the other ones.
     A cluster has the following parameters:
     className = the fully qualified name of the cluster class
     clusterName = a descriptive name for your cluster, can be anything
     mcastAddr = the multicast address, has to be the same for all the nodes
     mcastPort = the multicast port, has to be the same for all the nodes
     mcastBindAddress = bind the multicast socket to a specific address
     mcastTTL = the multicast TTL if you want to limit your broadcast
     mcastSoTimeout = the multicast read timeout
     mcastFrequency = the number of milliseconds in between sending an
                      "I'm alive" heartbeat
     mcastDropTime = the number of milliseconds before a node is
                     considered "dead" if no heartbeat is received
     tcpThreadCount = the number of threads to handle incoming replication
                      requests; optimal would be the same number of
                      threads as nodes
     tcpListenAddress = the listen address (bind address) for TCP cluster
                        requests on this host, in case of multiple
                        ethernet cards. "auto" means that the address
                        becomes InetAddress.getLocalHost().getHostAddress()
     tcpListenPort = the tcp listen port
     tcpSelectorTimeout = the timeout (ms) for the Selector.select()
                          method in case the OS has a wakeup bug in
                          java.nio. Set to 0 for no timeout.
     printToScreen = true means that managers will also print to std.out
     expireSessionsOnShutdown = true means that sessions are expired when
                                this node shuts down
     useDirtyFlag = true means that we only replicate a session after
                    setAttribute/removeAttribute has been called;
                    false means to replicate the session after each
                    request. false means that replication would work for
                    the following piece of code (only for
                    SimpleTcpReplicationManager):
                    <%
                      HashMap map = (HashMap)session.getAttribute("map");
                      map.put("key","value");
                    %>
     replicationMode = can be either 'pooled', 'synchronous' or
                       'asynchronous'.
                       * Pooled means that the replication happens using
                         several sockets in a synchronous way, i.e. the
                         data gets replicated, then the request returns.
                         This is the same as the 'synchronous' setting
                         except it uses a pool of sockets, hence it is
                         multithreaded. This is the fastest and safest
                         configuration. To use this, also increase the
                         number of tcp threads that you have dealing with
                         replication.
                       * Synchronous means that the thread that executes
                         the request is also the thread that replicates
                         the data to the other nodes, and will not return
                         until all nodes have received the information.
                       * Asynchronous means that there is a specific
                         'sender' thread for each cluster node, so the
                         request thread will queue the replication
                         request into a "smart" queue and then return to
                         the client. The "smart" queue is a queue where,
                         when a session is added and the same session
                         already exists in the queue from a previous
                         request, that session will be replaced in the
                         queue instead of replicating two requests. This
                         almost never happens, unless there is a large
                         network delay.
-->
<!--
     When configuring for clustering, you also add a valve to catch all
     the requests coming in; at the end of the request, the session may
     or may not be replicated.
     A session is replicated if and only if all of the following
     conditions are met:
     1. useDirtyFlag is true, or setAttribute or removeAttribute has
        been called, AND
     2. a session exists (has been created), AND
     3. the request is not trapped by the "filter" attribute.
     The filter attribute is there to filter out requests that could not
     modify the session, so we don't replicate the session after the end
     of such a request.
     The filter is negative, i.e. anything you put in the filter is
     meant to be filtered out; no replication will be done on requests
     that match one of the filters.
     The filter attribute is delimited by ';', so you can't escape out
     ';' even if you wanted to.
     filter=".*\.gif;.*\.js;" means that we will not replicate the
     session after requests with URIs ending in .gif or .js are
     intercepted.
     The deployer element can be used to deploy apps cluster-wide.
     Currently the deployment only deploys/undeploys to working members
     in the cluster, so no WARs are copied upon startup of a broken
     node.
     The deployer watches a directory (watchDir) for WAR files when
     watchEnabled="true".
     When a new WAR file is added, the WAR gets deployed to the local
     instance and then deployed to the other instances in the cluster.
     When a WAR file is deleted from the watchDir, the WAR is undeployed
     locally and cluster-wide.
-->
<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
managerClassName="org.apache.catalina.cluster.session.DeltaManager"
expireSessionsOnShutdown="true"
useDirtyFlag="true"
notifyListenersOnReplication="true">
<Membership
className="org.apache.catalina.cluster.mcast.McastService"
mcastAddr="228.0.0.4"
mcastPort="45564"
mcastFrequency="500"
mcastDropTime="3000"/>
<Receiver
className="org.apache.catalina.cluster.tcp.ReplicationListener"
tcpListenAddress="auto"
tcpListenPort="4001"
tcpSelectorTimeout="100"
tcpThreadCount="2"/>
<Sender
className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
replicationMode="pooled"
ackTimeout="15000"
waitForAck="true"/>
<Valve
className="org.apache.catalina.cluster.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
<Deployer
className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener
className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
</Cluster>
<!-- Normally, users must authenticate themselves to each web app
individually. Uncomment the following entry if you would like
a user to be authenticated the first time they encounter a
resource protected by a security constraint, and then have that
user identity maintained across *all* web applications contained
in this virtual host. -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all requests for this virtual host. By
     default, log files are created in the "logs" directory relative to
     $CATALINA_HOME. If you wish, you can specify a different directory
     with the "directory" attribute. Specify either a relative (to
     $CATALINA_HOME) or absolute path to the desired directory.
-->
<!--
<Valve className="org.apache.catalina.valves.AccessLogValve"
directory="logs" prefix="localhost_access_log."
suffix=".txt"
pattern="common" resolveHosts="false"/>
-->
<!-- Access log processes all requests for this virtual host. By
     default, log files are created in the "logs" directory relative to
     $CATALINA_HOME. If you wish, you can specify a different directory
     with the "directory" attribute. Specify either a relative (to
     $CATALINA_HOME) or absolute path to the desired directory.
     This access log implementation is optimized for maximum
     performance, but is hardcoded to support only the "common" and
     "combined" patterns.
-->
<!--
<Valve
className="org.apache.catalina.valves.FastCommonAccessLogValve"
directory="logs" prefix="localhost_access_log."
suffix=".txt"
pattern="common" resolveHosts="false"/>
-->
</Host>
</Engine>
</Service>
</Server>
===========================================================
I appreciate your prompt help on this, since this is a very critical
application that is live at the moment. Please email me for any
clarifications.
Thanks and best regards,
Dilan
RE: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Dilan Kelanibandara <di...@beyondm.net>.
Hi Peter,
I tried increasing tcpThreadCount to '6' and commented out the following
section for one cluster node:
> <Deployer
>   className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>   tempDir="/tmp/war-temp/"
>   deployDir="/tmp/war-deploy/"
>   watchDir="/tmp/war-listen/"
>   watchEnabled="false"/>
I then restarted that node, but the situation is still the same. Following is
the part of the catalina.out log relevant to that.
===================================================================
Jun 18, 2006 9:10:45 AM org.apache.catalina.cluster.tcp.SimpleTcpCluster
start
INFO: Cluster is about to start
Jun 18, 2006 9:10:45 AM
org.apache.catalina.cluster.tcp.ReplicationTransmitter start
INFO: Start ClusterSender at cluster Catalina:type=Cluster,host=localhost
with name Catalina:type=ClusterSender,host=localhost
Jun 18, 2006 9:10:45 AM org.apache.catalina.cluster.mcast.McastService start
INFO: Sleeping for 2000 milliseconds to establish cluster membership
Jun 18, 2006 9:10:45 AM org.apache.catalina.cluster.tcp.SimpleTcpCluster
memberAdded
INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp://172.16.1.52:4001,catalina,172.16.1.52,4001, alive=44275638]
Exception in thread "Cluster-MembershipReceiver" java.lang.OutOfMemoryError: Java heap space
Jun 18, 2006 9:10:47 AM org.apache.catalina.cluster.mcast.McastService
registerMBean
INFO: membership mbean registered
(Catalina:type=ClusterMembership,host=localhost)
Jun 18, 2006 9:10:49 AM org.apache.catalina.cluster.session.DeltaManager
start
INFO: Register manager /StockTradingServer to cluster element Host with name
localhost
Jun 18, 2006 9:10:49 AM org.apache.catalina.cluster.session.DeltaManager
start
INFO: Starting clustering manager at /StockTradingServer
Jun 18, 2006 9:10:49 AM org.apache.catalina.cluster.session.DeltaManager
getAllClusterSessions
WARNING: Manager [/StockTradingServer], requesting session state from org.apache.catalina.cluster.mcast.McastMember[tcp://172.16.1.52:4001,catalina,172.16.1.52,4001, alive=44275638]. This operation will timeout if no session state has been received within 60 seconds.
Jun 18, 2006 9:11:49 AM org.apache.catalina.cluster.session.DeltaManager
waitForSendAllSessions
SEVERE: Manager [/StockTradingServer]: No session state send at 6/18/06 9:10
AM received, timing out af
===================================================================
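For reference, the Receiver element with the increased thread count would look like this (a sketch derived from the server.xml posted earlier; only the tcpThreadCount value changes):

```xml
<Receiver
    className="org.apache.catalina.cluster.tcp.ReplicationListener"
    tcpListenAddress="auto"
    tcpListenPort="4001"
    tcpSelectorTimeout="100"
    tcpThreadCount="6"/>
```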
Thanks,
Dilan.
-----Original Message-----
From: Peter Rossbach [mailto:pr@objektpark.de]
Sent: Sunday, June 18, 2006 7:37 AM
To: Tomcat Users List
Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
Hi,
Which JVM memory parameters do you use?
In pooled mode, use more receiver workers: set tcpThreadCount="6".
Do you really need the deployer? The deployer generates a large cluster
message at every startup.
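On the memory-parameter question: Tomcat reads extra JVM options from the CATALINA_OPTS (or JAVA_OPTS) environment variable, and stock catalina.sh sources an optional setenv.sh if present. A minimal sketch of such a file follows; the file location and the 256 MB/512 MB heap values are illustrative assumptions, not settings taken from this thread:

```shell
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh when present.
# -Xms sets the initial heap size, -Xmx the maximum heap size.
# These values are illustrative assumptions; size the heap for your
# own application before use.
CATALINA_OPTS="-Xms256m -Xmx512m"
export CATALINA_OPTS
```

Running `catalina.sh run` after adding this file would start the JVM with the chosen heap bounds, which is the first thing to check when the cluster threads die with "Java heap space".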
Regards
Peter
Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>
>
> Hello ,
>
>
>
> I am getting OutOfMemoryError continuously when starting up two
> cluster
> nodes of tomcat5.5.17 (jdk1.5 on Advanced server 4). Any way it was
> working
> fine for 3 weeks time. This error occurs previously only one time
> and when
> restarted the tomcat, it worked.
>
>
>
>
>
> Following is a part of catalina.out relevent to that error for
> node 1.
>
> ============ ========================================================
>
>
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:44:15 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:44:17 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:44:17 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Starting clustering manager at /StockTradingServer Jun 17, 2006
> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>
> getAllClusterSessions
>
> INFO: Manager [/StockTradingServer]: skipping state transfer. No
> members
> active in cluster group.
>
>
>
> ======================================================================
> ======
> =====
>
> node2 startup log is as follows
>
> ======================================================================
> ======
> =====
>
> INFO: Cluster is about to start
>
> Jun 17, 2006 8:53:00 PM
>
> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:53:00 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:53:02 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:53:02 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
>
>
> ======================================================================
> ======
> =
>
> Any way my clustor was working fine for 3 weeks time and started to
> give
> this error in startup of both the nodes.
>
>
>
> I have an IBMHTTPServer with jk connector for load balancing and
> that load
> is comming my tomcat cluster.
>
>
>
> following is the server.xml file for both the servers.
>
>
>
> ======================================================================
> ======
> =
>
>
>
> <!-- Example Server Configuration File -->
>
> <!-- Note that component elements are nested corresponding to their
>
> parent-child relationships with each other -->
>
>
>
> <!-- A "Server" is a singleton element that represents the entire JVM,
>
> which may contain one or more "Service" instances. The Server
>
> listens for a shutdown command on the indicated port.
>
>
>
> Note: A "Server" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this level.
>
> -->
>
>
>
> <Server port="8005" shutdown="SHUTDOWN">
>
>
>
> <!-- Comment these entries out to disable JMX MBeans support used
> for the
>
> administration web application --> <Listener
> className="org.apache.catalina.core.AprLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener
> " />
> <Listener
> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListene
> r"/>
>
>
>
> <!-- Global JNDI resources -->
>
> <GlobalNamingResources>
>
>
>
> <!-- Test entry for demonstration purposes -->
>
> <Environment name="simpleValue" type="java.lang.Integer"
> value="30"/>
>
>
>
> <!-- Editable user database that can also be used by
>
> UserDatabaseRealm to authenticate users -->
>
> <Resource name="UserDatabase" auth="Container"
>
> type="org.apache.catalina.UserDatabase"
>
> description="User database that can be updated and saved"
>
>
> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>
> pathname="conf/tomcat-users.xml" />
>
>
>
> </GlobalNamingResources>
>
>
>
> <!-- A "Service" is a collection of one or more "Connectors" that
> share
>
> a single "Container" (and therefore the web applications visible
>
> within that Container). Normally, that Container is an
> "Engine",
>
> but this is not required.
>
>
>
> Note: A "Service" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this
> level.
>
> -->
>
>
>
> <!-- Define the Tomcat Stand-Alone Service --> <Service
> name="Catalina">
>
>
>
> <!-- A "Connector" represents an endpoint by which requests are
> received
>
> and responses are returned. Each Connector passes requests
> on to
> the
>
> associated "Container" (normally an Engine) for processing.
>
>
>
> By default, a non-SSL HTTP/1.1 Connector is established on
> port
> 8080.
>
> You can also enable an SSL HTTP/1.1 Connector on port 8443 by
>
> following the instructions below and uncommenting the second
> Connector
>
> entry. SSL support requires the following steps (see the
> SSL Config
> HOWTO in the Tomcat 5 documentation bundle for more detailed
>
> instructions):
>
> * If your JDK version 1.3 or prior, download and install
> JSSE 1.0.2
> or
>
> later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
>
> * Execute:
>
> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA
>
> (Windows)
>
> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
> RSA (Unix)
> with a password value of "changeit" for both the certificate and
>
> the keystore itself.
>
>
>
> By default, DNS lookups are enabled when a web application
> calls
>
> request.getRemoteHost(). This can have an adverse impact on
>
> performance, so you can disable it by setting the
>
> "enableLookups" attribute to "false". When DNS lookups are
> disabled,
>
> request.getRemoteHost() will return the String version of the
>
> IP address of the remote client.
>
> -->
>
>
>
> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>
> <Connector port="8088" maxHttpHeaderSize="8192"
>
> maxThreads="300" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" redirectPort="8443"
> acceptCount="100"
>
> connectionTimeout="20000" disableUploadTimeout="true" />
>
> <!-- Note : To disable connection timeouts, set
> connectionTimeout value
>
> to 0 -->
>
>
>
> <!-- Note : To use gzip compression you could set the following
> properties :
>
>
>
> compression="on"
>
> compressionMinSize="2048"
>
> noCompressionUserAgents="gozilla, traviata"
>
> compressableMimeType="text/html,text/xml"
>
> -->
>
>
>
> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>
> <!--
>
> <Connector port="8443" maxHttpHeaderSize="8192"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" disableUploadTimeout="true"
>
> acceptCount="100" scheme="https" secure="true"
>
> clientAuth="false" sslProtocol="TLS" />
>
> -->
>
>
>
> <!-- Define an AJP 1.3 Connector on port 8009 -->
>
> <Connector port="8009"
>
> enableLookups="false" redirectPort="8443"
> protocol="AJP/1.3"
>
> />
>
>
>
> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>
> <!-- See proxy documentation for more information about using
> this. -->
>
> <!--
>
> <Connector port="8082"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" acceptCount="100"
>
> connectionTimeout="20000"
>
> proxyPort="80" disableUploadTimeout="true" />
>
> -->
>
>
>
> <!-- An Engine represents the entry point (within Catalina) that
> processes
>
> every request. The Engine implementation for Tomcat stand
> alone
>
> analyzes the HTTP headers included with the request, and
> passes them
> on to the appropriate Host (virtual host). -->
>
>
>
> <!-- You should set jvmRoute to support load-balancing via AJP ie :
>
> <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
>
> -->
>
>
>
> <!-- Define the top level container in our container hierarchy -->
>
> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
>
> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>
>
>
> <!-- The request dumper valve dumps useful debugging
> information about
>
> the request headers and cookies that were received, and the
> response
>
> headers and cookies that were sent, for all requests
> received by
>
> this instance of Tomcat. If you care only about requests
> to a
>
> particular virtual host, or a particular application,
> nest this
>
> element inside the corresponding <Host> or <Context> entry
> instead.
>
>
>
> For a similar mechanism that is portable to all Servlet 2.4
>
> containers, check out the "RequestDumperFilter" Filter in
> the
>
> example application (the source for this filter may be
> found in
>
> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>
>
>
> Request dumping is disabled by default. Uncomment the
> following
>
> element to enable it. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.valves.RequestDumperValve"/>
>
> -->
>
>
>
> <!-- Because this Realm is here, an instance will be shared
> globally
> -->
>
>
>
> <!-- This Realm uses the UserDatabase configured in the global
> JNDI
>
> resources under the key "UserDatabase". Any edits
>
> that are performed against this UserDatabase are immediately
>
> available for use by the Realm. -->
>
> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>
> resourceName="UserDatabase"/>
>
>
>
> <!-- Comment out the old realm but leave here for now in case we
>
> need to go back quickly -->
>
> <!--
>
> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>
> -->
>
>
>
> <!-- Replace the above Realm with one of the following to get
> a Realm
>
> stored in a database and accessed via JDBC -->
>
>
>
>
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="org.gjt.mm.mysql.Driver"
>
> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>
> connectionName="kutila" connectionPassword="kutila"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
>
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="oracle.jdbc.driver.OracleDriver"
>
> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>
> connectionName="scott" connectionPassword="tiger"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>
> connectionURL="jdbc:odbc:CATALINA"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!-- Define the default virtual host
>
> Note: XML Schema validation will not work with Xerces 2.2.
>
> -->
>
> <Host name="localhost" appBase="webapps"
>
> unpackWARs="true" autoDeploy="true"
>
> xmlValidation="false" xmlNamespaceAware="false">
>
>
>
> <!-- Defines a cluster for this node.
>
> Defining this element means that every session manager will be
> changed. So when running a cluster, make sure that the only webapps
> in there are the ones that need to be clustered, and remove the
> other ones.
>
> A cluster has the following parameters:
>
>
>
> className = the fully qualified name of the cluster class
>
>
>
> clusterName = a descriptive name for your cluster, can be
> anything
>
>
>
> mcastAddr = the multicast address, has to be the same
> for all
> the nodes
>
>
>
> mcastPort = the multicast port, has to be the same for
> all the
> nodes
>
>
>
> mcastBindAddress = bind the multicast socket to a specific
> address
>
>
>
> mcastTTL = the multicast TTL if you want to limit your
> broadcast
>
> mcastSoTimeout = the multicast readtimeout
>
>
>
> mcastFrequency = the number of milliseconds in between
> sending a
> "I'm alive" heartbeat
>
>
>
> mcastDropTime = the number of milliseconds before a node is
> considered "dead" if no heartbeat is received
>
>
>
> tcpThreadCount = the number of threads to handle incoming
> replication requests, optimal would be the same amount of threads
> as nodes
>
>
>
> tcpListenAddress = the listen address (bind address)
> for TCP
> cluster request on this host,
>
> in case of multiple ethernet cards.
>
> auto means that address becomes
>
> InetAddress.getLocalHost
> ().getHostAddress()
>
>
>
> tcpListenPort = the tcp listen port
>
>
>
> tcpSelectorTimeout = the timeout (ms) for the
> Selector.select()
> method in case the OS
>
> has a wakeup bug in java.nio. Set
> to 0 for
> no timeout
>
>
>
> printToScreen = true means that managers will also
> print to
> std.out
>
>
>
> expireSessionsOnShutdown = true means that sessions are
> expired when this node is shut down
>
>
>
> useDirtyFlag = true means that we only replicate a
> session after
> setAttribute,removeAttribute has been called.
>
> false means to replicate the session
> after each
> request.
>
> false means that replication would work
> for the
> following piece of code: (only for SimpleTcpReplicationManager)
>
> <%
>
> HashMap map =
> (HashMap)session.getAttribute("map");
>
> map.put("key","value");
>
> %>
>
> replicationMode = can be either 'pooled', 'synchronous' or
> 'asynchronous'.
>
> * Pooled means that the replication
> happens
> using several sockets in a synchronous way. Ie, the data gets
> replicated,
> then the request return. This is the same as the 'synchronous' setting
> except it uses a pool of sockets, hence it is multithreaded. This
> is the
> fastest and safest configuration. To use this, also increase the nr
> of tcp
> threads that you have dealing with replication.
>
> * Synchronous means that the thread that
> executes the request, is also the
>
> thread that replicates the data to the
> other
> nodes, and will not return until all
>
> nodes have received the information.
>
> * Asynchronous means that there is a
> specific
> 'sender' thread for each cluster node,
>
> so the request thread will queue the
> replication request into a "smart" queue,
>
> and then return to the client.
>
> The "smart" queue is a queue where
> when a
> session is added to the queue, and the same session
>
> already exists in the queue from a
> previous
> request, that session will be replaced
>
> in the queue instead of replicating two
> requests. This almost never happens, unless there is a
>
> large network delay.
>
> -->
>
> <!--
>
> When configuring for clustering, you also add in a valve
> to catch
> all the requests
>
> coming in, at the end of the request, the session may or
> may not
> be replicated.
>
> A session is replicated if and only if all the
> conditions are
>
> met:
>
> 1. useDirtyFlag is true or setAttribute or
> removeAttribute has
> been called AND
>
> 2. a session exists (has been created)
>
> 3. the request is not trapped by the "filter" attribute
>
>
>
> The filter attribute is to filter out requests that
> could not
> modify the session,
>
> hence we don't replicate the session after the end of this
> request.
>
> The filter is negative, ie, anything you put in the
> filter, you
> mean to filter out,
>
> ie, no replication will be done on requests that match
> one of the
> filters.
>
> The filter attribute is delimited by ;, so you can't
> escape out ;
> even if you wanted to.
>
>
>
> filter=".*\.gif;.*\.js;" means that we will not
> replicate the
> session after requests with the URI
>
> ending with .gif and .js are intercepted.
>
>
>
> The deployer element can be used to deploy apps cluster
> wide.
>
> Currently the deployment only deploys/undeploys to working
> members in the cluster
>
> so no WARs are copied upon startup of a broken node.
>
> The deployer watches a directory (watchDir) for WAR
> files when
> watchEnabled="true"
>
> When a new war file is added the war gets deployed to
> the local
> instance,
>
> and then deployed to the other instances in the cluster.
>
> When a war file is deleted from the watchDir the war is
> undeployed locally
>
> and cluster wide
>
> -->
>
>
>
>
>
> <Cluster
> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>
>
> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>
> expireSessionsOnShutdown="true"
>
> useDirtyFlag="true"
>
> notifyListenersOnReplication="true">
>
>
>
> <Membership
>
>
> className="org.apache.catalina.cluster.mcast.McastService"
>
> mcastAddr="228.0.0.4"
>
> mcastPort="45564"
>
> mcastFrequency="500"
>
> mcastDropTime="3000"/>
>
>
>
> <Receiver
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>
> tcpListenAddress="auto"
>
> tcpListenPort="4001"
>
> tcpSelectorTimeout="100"
>
> tcpThreadCount="2"/>
>
>
>
> <Sender
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>
> replicationMode="pooled"
>
> ackTimeout="15000"
>
> waitForAck="true"/>
>
>
>
> <Valve
>
> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>
>
>
> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>
>
>
> <Deployer
>
> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>
> tempDir="/tmp/war-temp/"
>
> deployDir="/tmp/war-deploy/"
>
> watchDir="/tmp/war-listen/"
>
> watchEnabled="false"/>
>
>
>
> <ClusterListener
>
> className="org.apache.catalina.cluster.session.ClusterSessionListener"
> />
>
> </Cluster>
>
>
>
>
>
>
>
>
>
> <!-- Normally, users must authenticate themselves to each
> web app
>
> individually. Uncomment the following entry if you
> would like
>
> a user to be authenticated the first time they encounter a
>
> resource protected by a security constraint, and then
> have that
>
> user identity maintained across *all* web applications
> contained
> in this virtual host. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.authenticator.SingleSignOn" />
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> -->
>
> <!--
>
> <Valve className="org.apache.catalina.valves.AccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> This access log implementation is optimized for maximum
> performance,
>
> but is hardcoded to support only the "common" and
> "combined"
>
> patterns.
>
> -->
>
> <!--
>
> <Valve
>
> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> </Host>
>
>
>
> </Engine>
>
>
>
> </Service>
>
>
>
> </Server>
>
>
>
> ===========================================================
>
>
>
> I appreciate your prompt help on this, since this is a very critical
> live application at the moment. Please email me with any
> clarifications.
>
>
>
> Thanks and best regards,
>
> Dilan
>
>
>
>
>
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
---------------------------------------------------------------------
Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
Posted by Filip Hanik - Dev Lists <de...@hanik.com>.
The only risk would be if you are running two environments, maybe QA and
production: you don't want the cluster membership to cross over.
Another option is to just change the address and port in server.xml from
the defaults.
Filip
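
For example, Filip's second option might look like this in each
environment's server.xml. The address/port values below are purely
illustrative; any otherwise unused multicast address and port pair
would do, as long as all nodes of one environment share the same pair:

```xml
<!-- QA cluster: give it its own multicast group -->
<Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.5"
    mcastPort="45565"
    mcastFrequency="500"
    mcastDropTime="3000"/>

<!-- Production cluster: a different address/port so the two never mix -->
<Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.6"
    mcastPort="45566"
    mcastFrequency="500"
    mcastDropTime="3000"/>
```

Only nodes listening on the same multicast address and port will
discover each other, so QA and production membership can no longer
cross over.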
Peter Rossbach wrote:
> HI,
>
> I see no risk with the default membership config.
>
> Peter
>
>
>
> Am 18.06.2006 um 19:29 schrieb Dilan Kelanibandara:
>
>> Hi Peter,
>>
>> No. No service is up and running on 45564. I only commented out the
>> Membership element and restarted both servers. So far it is working
>> fine, but I am in doubt whether this will have any effect on my
>> server in future.
>>
>> Can you please explain the risk? Or is it OK to run the server with
>> this configuration?
>> Thanks and best regards,
>> Dilan
>>
>> -----Original Message-----
>> From: Peter Rossbach [mailto:pr@objektpark.de]
>> Sent: Sunday, June 18, 2006 8:14 PM
>> To: Tomcat Users List
>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>> OutOfMemoryError in
>> starting up on AS4
>>
>> OK!
>>
>> As you commented the Membership element out, the following default is used:
>>
>> McastService mService= new McastService();
>> mService.setMcastAddr("228.0.0.4");
>> mService.setMcastPort(8012);
>> mService.setMcastFrequency(1000);
>> mService.setMcastDropTime(30000);
>> transferProperty("service",mService);
>> setMembershipService(mService);
>> }
>>
>> Have you started another service on port 45564?
>>
>> Regards
>>
>>
>>
>> Am 18.06.2006 um 16:54 schrieb Dilan Kelanibandara:
>>
>>> Hi Peter,
>>>
>>> I was having the memory problem when the cluster manager tried to
>>> multicast the request at Tomcat startup.
>>> As a trial I commented out the Membership element of the cluster
>>> configuration in server.xml and restarted both Tomcats.
>>>
>>> This is the multicast element which I commented.
>>> ==============================
>>> <!--
>>> <Membership
>>>
>>> className="org.apache.catalina.cluster.mcast.McastService"
>>> mcastAddr="228.0.0.4"
>>> mcastPort="45564"
>>> mcastFrequency="500"
>>> mcastDropTime="3000"/>
>>> -->
>>>
>>> ==============================
>>>
>>> Then Tomcat started without an OutOfMemoryError, and the replication
>>> members are added to each other. I ran both servers with my
>>> application for some time; it is working fine and session
>>> replication is happening as usual. Can you let me know whether I can
>>> proceed with this setup, or whether commenting this out has any
>>> effect on session replication?
>>>
>>> Can you kindly let me know?
>>>
>>> Thanks and best regards,
>>> Dilan
>>>
>>>
>>> -----Original Message-----
>>> From: Peter Rossbach [mailto:pr@objektpark.de]
>>> Sent: Sunday, June 18, 2006 9:50 AM
>>> To: Tomcat Users List
>>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>>> OutOfMemoryError in
>>> starting up on AS4
>>>
>>> Use more JVM options to analyse the memory usage.
>>>
>>> Work with faster memory allocation:
>>>
>>> -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
>>> -Xverbosegc
>>>
>>> Or better use a Memory Profiler...
>>>
>>> But the membership service does not allocate much memory; a very strange effect :-(
>>>
>>> Peter
>>>
>>>
>>> Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
>>>
>>>> Hi Peter,
>>>> I am using the default JVM parameters that come with tomcat5.5.17.
>>>> The Tomcat server.xml file says tcpThreadCount is normally equal to
>>>> the number of nodes (i.e. 2 in this case); that is why I changed it
>>>> to 2.
>>>>
>>>> I tried increasing the JVM heap size parameters in Tomcat to
>>>> Min=1024m, Max=1024m; I also tried 512m, but on both occasions the
>>>> result is the same.
>>>> Thank you for your kind attention.
>>>> I want further clarifications.
>>>> Best regards,
>>>> Dilan
>>>> -----Original Message-----
>>>> From: Peter Rossbach [mailto:pr@objektpark.de]
>>>> Sent: Sunday, June 18, 2006 7:37 AM
>>>> To: Tomcat Users List
>>>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>>>> OutOfMemoryError in
>>>> starting up on AS4
>>>>
>>>> Hi,
>>>>
>>>> Which JVM memory parameters do you use?
>>>> In pooled mode, use more receiver workers: set tcpThreadCount="6"!
>>>> Do you really need the deployer? The deployer generates a large
>>>> cluster message at every startup.
>>>>
>>>> Regards
>>>> Peter
>>>>
>>>>
>>>> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>>>>
>>>>>
>>>>>
>>>>> Hello ,
>>>>>
>>>>>
>>>>>
>>>>> I am getting OutOfMemoryError continuously when starting up two
>>>>> cluster
>>>>> nodes of tomcat5.5.17 (jdk1.5 on Advanced Server 4). It was
>>>>> working fine for three weeks; this error occurred only once
>>>>> before, and when Tomcat was restarted, it worked.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Following is a part of catalina.out relevant to that error for
>>>>> node 1.
>>>>>
>>>>> ====================================================================
>>>>>
>>>>>
>>>>>
>>>>> INFO: Start ClusterSender at cluster
>>>>> Catalina:type=Cluster,host=localhost
>>>>>
>>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>>
>>>>> Jun 17, 2006 8:44:15 PM
>>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>>
>>>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>>>> Exception in thread "Cluster-MembershipReceiver"
>>>>> java.lang.OutOfMemoryError:
>>>>>
>>>>>
>>>>> Java heap space
>>>>>
>>>>> Jun 17, 2006 8:44:17 PM
>>>>> org.apache.catalina.cluster.mcast.McastService
>>>>>
>>>>> registerMBean
>>>>>
>>>>> INFO: membership mbean registered
>>>>>
>>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>>
>>>>> Jun 17, 2006 8:44:17 PM
>>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>>
>>>>> start
>>>>>
>>>>> INFO: Cluster FarmWarDeployer started.
>>>>>
>>>>> Jun 17, 2006 8:44:19 PM
>>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>>
>>>>> start
>>>>>
>>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>>> with name
>>>>> localhost Jun 17, 2006 8:44:19 PM
>>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>>
>>>>> start
>>>>>
>>>>> INFO: Starting clustering manager at /StockTradingServer Jun 17,
>>>>> 2006
>>>>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>>>>
>>>>> getAllClusterSessions
>>>>>
>>>>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>>>>> members
>>>>> active in cluster group.
>>>>>
>>>>>
>>>>>
>>>>> ====================================================================
>>>>>
>>>>> node2 startup log is as follows
>>>>>
>>>>> ====================================================================
>>>>>
>>>>> INFO: Cluster is about to start
>>>>>
>>>>> Jun 17, 2006 8:53:00 PM
>>>>>
>>>>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>>>>
>>>>> INFO: Start ClusterSender at cluster
>>>>> Catalina:type=Cluster,host=localhost
>>>>>
>>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>>
>>>>> Jun 17, 2006 8:53:00 PM
>>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>>
>>>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>>>> Exception in thread "Cluster-MembershipReceiver"
>>>>> java.lang.OutOfMemoryError:
>>>>>
>>>>>
>>>>> Java heap space
>>>>>
>>>>> Jun 17, 2006 8:53:02 PM
>>>>> org.apache.catalina.cluster.mcast.McastService
>>>>>
>>>>> registerMBean
>>>>>
>>>>> INFO: membership mbean registered
>>>>>
>>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>>
>>>>> Jun 17, 2006 8:53:02 PM
>>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>>
>>>>> start
>>>>>
>>>>> INFO: Cluster FarmWarDeployer started.
>>>>>
>>>>> Jun 17, 2006 8:53:04 PM
>>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>>
>>>>> start
>>>>>
>>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>>> with name
>>>>> localhost Jun 17, 2006 8:53:04 PM
>>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>>
>>>>> start
>>>>>
>>>>>
>>>>>
>>>>> ====================================================================
>>>>>
>>>>> Anyway, my cluster was working fine for three weeks and then
>>>>> started to give this error at startup on both nodes.
>>>>>
>>>>>
>>>>>
>>>>> I have an IBM HTTP Server with the jk connector for load
>>>>> balancing, and that load is coming to my Tomcat cluster.
>>>>>
>>>>>
>>>>>
>>>>> following is the server.xml file for both the servers.
>>>>>
>>>>>
>>>>>
>>>>> ====================================================================
>>>>>
>>>>>
>>>>>
>>>>> <!-- Example Server Configuration File -->
>>>>>
>>>>> <!-- Note that component elements are nested corresponding to their
>>>>>
>>>>> parent-child relationships with each other -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- A "Server" is a singleton element that represents the entire
>>>>> JVM,
>>>>>
>>>>> which may contain one or more "Service" instances. The Server
>>>>>
>>>>> listens for a shutdown command on the indicated port.
>>>>>
>>>>>
>>>>>
>>>>> Note: A "Server" is not itself a "Container", so you may not
>>>>>
>>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>>> level.
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <Server port="8005" shutdown="SHUTDOWN">
>>>>>
>>>>>
>>>>>
>>>>> <!-- Comment these entries out to disable JMX MBeans support used
>>>>> for the
>>>>>
>>>>> administration web application -->
>>>>> <Listener className="org.apache.catalina.core.AprLifecycleListener" />
>>>>> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>>>> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>>>> <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>>>
>>>>>
>>>>>
>>>>> <!-- Global JNDI resources -->
>>>>>
>>>>> <GlobalNamingResources>
>>>>>
>>>>>
>>>>>
>>>>> <!-- Test entry for demonstration purposes -->
>>>>>
>>>>> <Environment name="simpleValue" type="java.lang.Integer"
>>>>> value="30"/>
>>>>>
>>>>>
>>>>>
>>>>> <!-- Editable user database that can also be used by
>>>>>
>>>>> UserDatabaseRealm to authenticate users -->
>>>>>
>>>>> <Resource name="UserDatabase" auth="Container"
>>>>>
>>>>> type="org.apache.catalina.UserDatabase"
>>>>>
>>>>> description="User database that can be updated and saved"
>>>>>
>>>>>
>>>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>>>
>>>>> pathname="conf/tomcat-users.xml" />
>>>>>
>>>>>
>>>>>
>>>>> </GlobalNamingResources>
>>>>>
>>>>>
>>>>>
>>>>> <!-- A "Service" is a collection of one or more "Connectors" that
>>>>> share
>>>>>
>>>>> a single "Container" (and therefore the web applications
>>>>> visible
>>>>>
>>>>> within that Container). Normally, that Container is an
>>>>> "Engine",
>>>>>
>>>>> but this is not required.
>>>>>
>>>>>
>>>>>
>>>>> Note: A "Service" is not itself a "Container", so you may not
>>>>>
>>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>>> level.
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define the Tomcat Stand-Alone Service --> <Service
>>>>> name="Catalina">
>>>>>
>>>>>
>>>>>
>>>>> <!-- A "Connector" represents an endpoint by which requests are
>>>>> received
>>>>>
>>>>> and responses are returned. Each Connector passes requests
>>>>> on to
>>>>> the
>>>>>
>>>>> associated "Container" (normally an Engine) for processing.
>>>>>
>>>>>
>>>>>
>>>>> By default, a non-SSL HTTP/1.1 Connector is established on
>>>>> port
>>>>> 8080.
>>>>>
>>>>> You can also enable an SSL HTTP/1.1 Connector on port
>>>>> 8443 by
>>>>>
>>>>> following the instructions below and uncommenting the second
>>>>> Connector
>>>>>
>>>>> entry. SSL support requires the following steps (see the
>>>>> SSL Config
>>>>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>>>
>>>>> instructions):
>>>>>
>>>>> * If your JDK version is 1.3 or prior, download and install
>>>>> JSSE 1.0.2
>>>>> or
>>>>>
>>>>> later, and put the JAR files into "$JAVA_HOME/jre/lib/
>>>>> ext".
>>>>>
>>>>> * Execute:
>>>>>
>>>>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>>>> RSA
>>>>>
>>>>> (Windows)
>>>>>
>>>>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>>>>> RSA (Unix)
>>>>> with a password value of "changeit" for both the certificate and
>>>>>
>>>>> the keystore itself.
>>>>>
>>>>>
>>>>>
>>>>> By default, DNS lookups are enabled when a web application
>>>>> calls
>>>>>
>>>>> request.getRemoteHost(). This can have an adverse impact on
>>>>>
>>>>> performance, so you can disable it by setting the
>>>>>
>>>>> "enableLookups" attribute to "false". When DNS lookups are
>>>>> disabled,
>>>>>
>>>>> request.getRemoteHost() will return the String version of
>>>>> the
>>>>>
>>>>> IP address of the remote client.
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>>>>
>>>>> <Connector port="8088" maxHttpHeaderSize="8192"
>>>>>
>>>>> maxThreads="300" minSpareThreads="25"
>>>>> maxSpareThreads="75"
>>>>>
>>>>> enableLookups="false" redirectPort="8443"
>>>>> acceptCount="100"
>>>>>
>>>>> connectionTimeout="20000"
>>>>> disableUploadTimeout="true" />
>>>>>
>>>>> <!-- Note : To disable connection timeouts, set
>>>>> connectionTimeout value
>>>>>
>>>>> to 0 -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Note : To use gzip compression you could set the
>>>>> following
>>>>> properties :
>>>>>
>>>>>
>>>>>
>>>>> compression="on"
>>>>>
>>>>> compressionMinSize="2048"
>>>>>
>>>>> noCompressionUserAgents="gozilla,
>>>>> traviata"
>>>>>
>>>>> compressableMimeType="text/html,text/xml"
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Connector port="8443" maxHttpHeaderSize="8192"
>>>>>
>>>>> maxThreads="150" minSpareThreads="25"
>>>>> maxSpareThreads="75"
>>>>>
>>>>> enableLookups="false" disableUploadTimeout="true"
>>>>>
>>>>> acceptCount="100" scheme="https" secure="true"
>>>>>
>>>>> clientAuth="false" sslProtocol="TLS" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>>>
>>>>> <Connector port="8009"
>>>>>
>>>>> enableLookups="false" redirectPort="8443"
>>>>> protocol="AJP/1.3"
>>>>>
>>>>> />
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>>>
>>>>> <!-- See proxy documentation for more information about using
>>>>> this. -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Connector port="8082"
>>>>>
>>>>> maxThreads="150" minSpareThreads="25"
>>>>> maxSpareThreads="75"
>>>>>
>>>>> enableLookups="false" acceptCount="100"
>>>>>
>>>>> connectionTimeout="20000"
>>>>>
>>>>> proxyPort="80" disableUploadTimeout="true" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- An Engine represents the entry point (within Catalina) that
>>>>> processes
>>>>>
>>>>> every request. The Engine implementation for Tomcat stand
>>>>> alone
>>>>>
>>>>> analyzes the HTTP headers included with the request, and
>>>>> passes them
>>>>> on to the appropriate Host (virtual host). -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- You should set jvmRoute to support load-balancing via AJP
>>>>> ie :
>>>>>
>>>>> <Engine name="Standalone" defaultHost="localhost"
>>>>> jvmRoute="jvm1">
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define the top level container in our container hierarchy
>>>>> -->
>>>>>
>>>>> <Engine name="Catalina" defaultHost="localhost"
>>>>> jvmRoute="node01">
>>>>>
>>>>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- The request dumper valve dumps useful debugging
>>>>> information about
>>>>>
>>>>> the request headers and cookies that were received, and
>>>>> the
>>>>> response
>>>>>
>>>>> headers and cookies that were sent, for all requests
>>>>> received by
>>>>>
>>>>> this instance of Tomcat. If you care only about requests
>>>>> to a
>>>>>
>>>>> particular virtual host, or a particular application,
>>>>> nest this
>>>>>
>>>>> element inside the corresponding <Host> or <Context> entry
>>>>> instead.
>>>>>
>>>>>
>>>>>
>>>>> For a similar mechanism that is portable to all Servlet
>>>>> 2.4
>>>>>
>>>>> containers, check out the "RequestDumperFilter" Filter in
>>>>> the
>>>>>
>>>>> example application (the source for this filter may be
>>>>> found in
>>>>>
>>>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>>>>
>>>>>
>>>>>
>>>>> Request dumping is disabled by default. Uncomment the
>>>>> following
>>>>>
>>>>> element to enable it. -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Valve
>>>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Because this Realm is here, an instance will be shared
>>>>> globally
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- This Realm uses the UserDatabase configured in the global
>>>>> JNDI
>>>>>
>>>>> resources under the key "UserDatabase". Any edits
>>>>>
>>>>> that are performed against this UserDatabase are
>>>>> immediately
>>>>>
>>>>> available for use by the Realm. -->
>>>>>
>>>>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>>>
>>>>> resourceName="UserDatabase"/>
>>>>>
>>>>>
>>>>>
>>>>> <!-- Comment out the old realm but leave here for now in
>>>>> case we
>>>>>
>>>>> need to go back quickly -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Replace the above Realm with one of the following to get
>>>>> a Realm
>>>>>
>>>>> stored in a database and accessed via JDBC -->
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>>
>>>>> driverName="org.gjt.mm.mysql.Driver"
>>>>>
>>>>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>>>>
>>>>> connectionName="kutila" connectionPassword="kutila"
>>>>>
>>>>> userTable="users" userNameCol="user_name"
>>>>>
>>>>> userCredCol="user_pass"
>>>>>
>>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> <!--
>>>>>
>>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>>
>>>>> driverName="oracle.jdbc.driver.OracleDriver"
>>>>>
>>>>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>>>
>>>>> connectionName="scott" connectionPassword="tiger"
>>>>>
>>>>> userTable="users" userNameCol="user_name"
>>>>>
>>>>> userCredCol="user_pass"
>>>>>
>>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!--
>>>>>
>>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>>
>>>>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>>>
>>>>> connectionURL="jdbc:odbc:CATALINA"
>>>>>
>>>>> userTable="users" userNameCol="user_name"
>>>>>
>>>>> userCredCol="user_pass"
>>>>>
>>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Define the default virtual host
>>>>>
>>>>> Note: XML Schema validation will not work with Xerces 2.2.
>>>>>
>>>>> -->
>>>>>
>>>>> <Host name="localhost" appBase="webapps"
>>>>>
>>>>> unpackWARs="true" autoDeploy="true"
>>>>>
>>>>> xmlValidation="false" xmlNamespaceAware="false">
>>>>>
>>>>>
>>>>>
>>>>> <!-- Defines a cluster for this node,
>>>>>
>>>>> Defining this element means that every session manager will
>>>>> be changed. So when running a cluster, make sure that the only
>>>>> webapps in there are the ones that need to be clustered, and
>>>>> remove the other ones.
>>>>>
>>>>> A cluster has the following parameters:
>>>>>
>>>>>
>>>>>
>>>>> className = the fully qualified name of the cluster
>>>>> class
>>>>>
>>>>>
>>>>>
>>>>> clusterName = a descriptive name for your cluster,
>>>>> can be
>>>>> anything
>>>>>
>>>>>
>>>>>
>>>>> mcastAddr = the multicast address, has to be the same
>>>>> for all
>>>>> the nodes
>>>>>
>>>>>
>>>>>
>>>>> mcastPort = the multicast port, has to be the same for
>>>>> all the
>>>>> nodes
>>>>>
>>>>>
>>>>>
>>>>> mcastBindAddress = bind the multicast socket to a
>>>>> specific
>>>>> address
>>>>>
>>>>>
>>>>>
>>>>> mcastTTL = the multicast TTL if you want to limit your
>>>>> broadcast
>>>>>
>>>>> mcastSoTimeout = the multicast readtimeout
>>>>>
>>>>>
>>>>>
>>>>> mcastFrequency = the number of milliseconds in between
>>>>> sending a
>>>>> "I'm alive" heartbeat
>>>>>
>>>>>
>>>>>
>>>>> mcastDropTime = the number of milliseconds before a
>>>>> node is
>>>>> considered "dead" if no heartbeat is received
>>>>>
>>>>>
>>>>>
>>>>> tcpThreadCount = the number of threads to handle
>>>>> incoming
>>>>> replication requests, optimal would be the same amount of threads
>>>>> as nodes
>>>>>
>>>>>
>>>>>
>>>>> tcpListenAddress = the listen address (bind address)
>>>>> for TCP
>>>>> cluster request on this host,
>>>>>
>>>>> in case of multiple ethernet cards.
>>>>>
>>>>> auto means that address becomes
>>>>>
>>>>> InetAddress.getLocalHost
>>>>> ().getHostAddress()
>>>>>
>>>>>
>>>>>
>>>>> tcpListenPort = the tcp listen port
>>>>>
>>>>>
>>>>>
>>>>> tcpSelectorTimeout = the timeout (ms) for the
>>>>> Selector.select()
>>>>> method in case the OS
>>>>>
>>>>> has a wakeup bug in java.nio. Set
>>>>> to 0 for
>>>>> no timeout
>>>>>
>>>>>
>>>>>
>>>>> printToScreen = true means that managers will also
>>>>> print to
>>>>> std.out
>>>>>
>>>>>
>>>>>
>>>>> expireSessionsOnShutdown = true means that sessions are
>>>>> expired when this node is shut down
>>>>>
>>>>>
>>>>>
>>>>> useDirtyFlag = true means that we only replicate a
>>>>> session after
>>>>> setAttribute,removeAttribute has been called.
>>>>>
>>>>> false means to replicate the session
>>>>> after each
>>>>> request.
>>>>>
>>>>> false means that replication would work
>>>>> for the
>>>>> following piece of code: (only for SimpleTcpReplicationManager)
>>>>>
>>>>> <%
>>>>>
>>>>> HashMap map =
>>>>> (HashMap)session.getAttribute("map");
>>>>>
>>>>> map.put("key","value");
>>>>>
>>>>> %>
>>>>>
>>>>> replicationMode = can be either 'pooled',
>>>>> 'synchronous' or
>>>>> 'asynchronous'.
>>>>>
>>>>> * Pooled means that the replication
>>>>> happens using several sockets in a synchronous way, ie, the data
>>>>> gets replicated, then the request returns. This is the same as the
>>>>> 'synchronous' setting except it uses a pool of sockets, hence it is
>>>>> multithreaded. This is the fastest and safest configuration. To use
>>>>> this, also increase the number of TCP threads that you have dealing
>>>>> with replication.
>>>>>
>>>>> * Synchronous means that the thread
>>>>> that executes the request is also the thread that replicates the
>>>>> data to the other nodes, and will not return until all nodes have
>>>>> received the information.
>>>>>
>>>>> * Asynchronous means that there is a
>>>>> specific
>>>>> 'sender' thread for each cluster node,
>>>>>
>>>>> so the request thread will queue the
>>>>> replication request into a "smart" queue,
>>>>>
>>>>> and then return to the client.
>>>>>
>>>>> The "smart" queue is a queue where
>>>>> when a
>>>>> session is added to the queue, and the same session
>>>>>
>>>>> already exists in the queue from a
>>>>> previous
>>>>> request, that session will be replaced
>>>>>
>>>>> in the queue instead of replicating
>>>>> two
>>>>> requests. This almost never happens, unless there is a
>>>>>
>>>>> large network delay.
>>>>>
>>>>> -->
>>>>>
>>>>> <!--
>>>>>
>>>>> When configuring for clustering, you also add in a valve
>>>>> to catch all the incoming requests; at the end of the request, the
>>>>> session may or may not be replicated.
>>>>>
>>>>> A session is replicated if and only if all the
>>>>> conditions are
>>>>>
>>>>> met:
>>>>>
>>>>> 1. useDirtyFlag is true or setAttribute or
>>>>> removeAttribute has
>>>>> been called AND
>>>>>
>>>>> 2. a session exists (has been created)
>>>>>
>>>>> 3. the request is not trapped by the "filter" attribute
>>>>>
>>>>>
>>>>>
>>>>> The filter attribute is to filter out requests that
>>>>> could not
>>>>> modify the session,
>>>>>
>>>>> hence we don't replicate the session after the end of
>>>>> this
>>>>> request.
>>>>>
>>>>> The filter is negative, ie, anything you put in the
>>>>> filter, you
>>>>> mean to filter out,
>>>>>
>>>>> ie, no replication will be done on requests that match
>>>>> one of the
>>>>> filters.
>>>>>
>>>>> The filter attribute is delimited by ;, so you can't
>>>>> escape out ;
>>>>> even if you wanted to.
>>>>>
>>>>>
>>>>>
>>>>> filter=".*\.gif;.*\.js;" means that we will not
>>>>> replicate the
>>>>> session after requests with the URI
>>>>>
>>>>> ending with .gif and .js are intercepted.
>>>>>
>>>>>
>>>>>
>>>>> The deployer element can be used to deploy apps cluster
>>>>> wide.
>>>>>
>>>>> Currently the deployment only deploys/undeploys to
>>>>> working
>>>>> members in the cluster
>>>>>
>>>>> so no WARs are copied upon startup of a broken node.
>>>>>
>>>>> The deployer watches a directory (watchDir) for WAR
>>>>> files when
>>>>> watchEnabled="true"
>>>>>
>>>>> When a new war file is added the war gets deployed to
>>>>> the local
>>>>> instance,
>>>>>
>>>>> and then deployed to the other instances in the cluster.
>>>>>
>>>>> When a war file is deleted from the watchDir the war is
>>>>> undeployed locally
>>>>>
>>>>> and cluster wide
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> <Cluster
>>>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>>>
>>>>>
>>>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>>>
>>>>> expireSessionsOnShutdown="true"
>>>>>
>>>>> useDirtyFlag="true"
>>>>>
>>>>> notifyListenersOnReplication="true">
>>>>>
>>>>>
>>>>>
>>>>> <Membership
>>>>>
>>>>>
>>>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>>>
>>>>> mcastAddr="228.0.0.4"
>>>>>
>>>>> mcastPort="45564"
>>>>>
>>>>> mcastFrequency="500"
>>>>>
>>>>> mcastDropTime="3000"/>
>>>>>
>>>>>
>>>>>
>>>>> <Receiver
>>>>>
>>>>>
>>>>>
>>>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>>>
>>>>> tcpListenAddress="auto"
>>>>>
>>>>> tcpListenPort="4001"
>>>>>
>>>>> tcpSelectorTimeout="100"
>>>>>
>>>>> tcpThreadCount="2"/>
>>>>>
>>>>>
>>>>>
>>>>> <Sender
>>>>>
>>>>>
>>>>>
>>>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>>>
>>>>> replicationMode="pooled"
>>>>>
>>>>> ackTimeout="15000"
>>>>>
>>>>> waitForAck="true"/>
>>>>>
>>>>>
>>>>>
>>>>> <Valve
>>>>>
>>>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>>>
>>>>>
>>>>>
>>>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>>>
>>>>>
>>>>>
>>>>> <Deployer
>>>>>
>>>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>>>
>>>>> tempDir="/tmp/war-temp/"
>>>>>
>>>>> deployDir="/tmp/war-deploy/"
>>>>>
>>>>> watchDir="/tmp/war-listen/"
>>>>>
>>>>> watchEnabled="false"/>
>>>>>
>>>>>
>>>>>
>>>>> <ClusterListener
>>>>>
>>>>> className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
>>>>>
>>>>> </Cluster>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> <!-- Normally, users must authenticate themselves to each
>>>>> web app
>>>>>
>>>>> individually. Uncomment the following entry if you
>>>>> would like
>>>>>
>>>>> a user to be authenticated the first time they
>>>>> encounter a
>>>>>
>>>>> resource protected by a security constraint, and then
>>>>> have that
>>>>>
>>>>> user identity maintained across *all* web applications
>>>>> contained
>>>>> in this virtual host. -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Valve
>>>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Access log processes all requests for this virtual
>>>>> host. By
>>>>>
>>>>> default, log files are created in the "logs" directory
>>>>> relative
>>>>> to
>>>>>
>>>>> $CATALINA_HOME. If you wish, you can specify a
>>>>> different
>>>>>
>>>>> directory with the "directory" attribute. Specify
>>>>> either a
>>>>> relative
>>>>>
>>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>>> directory.
>>>>>
>>>>> -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>>>
>>>>> directory="logs" prefix="localhost_access_log."
>>>>>
>>>>> suffix=".txt"
>>>>>
>>>>> pattern="common" resolveHosts="false"/>
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> <!-- Access log processes all requests for this virtual
>>>>> host. By
>>>>>
>>>>> default, log files are created in the "logs" directory
>>>>> relative
>>>>> to
>>>>>
>>>>> $CATALINA_HOME. If you wish, you can specify a
>>>>> different
>>>>>
>>>>> directory with the "directory" attribute. Specify
>>>>> either a
>>>>> relative
>>>>>
>>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>>> directory.
>>>>>
>>>>> This access log implementation is optimized for maximum
>>>>> performance,
>>>>>
>>>>> but is hardcoded to support only the "common" and
>>>>> "combined"
>>>>>
>>>>> patterns.
>>>>>
>>>>> -->
>>>>>
>>>>> <!--
>>>>>
>>>>> <Valve
>>>>>
>>>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>>>
>>>>> directory="logs" prefix="localhost_access_log."
>>>>>
>>>>> suffix=".txt"
>>>>>
>>>>> pattern="common" resolveHosts="false"/>
>>>>>
>>>>> -->
>>>>>
>>>>>
>>>>>
>>>>> </Host>
>>>>>
>>>>>
>>>>>
>>>>> </Engine>
>>>>>
>>>>>
>>>>>
>>>>> </Service>
>>>>>
>>>>>
>>>>>
>>>>> </Server>
>>>>>
>>>>>
>>>>>
>>>>> ===========================================================
>>>>>
>>>>>
>>>>>
>>>>> I appreciate your prompt help on this, since this is a very critical
>>>>> application that is live at the moment. Please email me for any
>>>>> clarifications.
>>>>>
>>>>>
>>>>>
>>>>> Thanks and best regards,
>>>>>
>>>>> Dilan
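The "smart" queue described in the server.xml comment above (queueing the same session twice replaces the earlier entry instead of appending a second one) can be sketched in a few lines. This is a hypothetical simplification for illustration only, not the actual Tomcat class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the replace-on-duplicate semantics of the "smart" replication
// queue: at most one pending replication entry exists per session id, while
// insertion order of distinct sessions is preserved.
public class SmartQueueDemo {
    private final Map<String, byte[]> pending =
        new LinkedHashMap<String, byte[]>();

    // Add, or replace, the pending replication payload for a session id.
    public synchronized void offer(String sessionId, byte[] delta) {
        pending.put(sessionId, delta); // same key -> earlier entry replaced
    }

    public synchronized int size() {
        return pending.size();
    }

    public static void main(String[] args) {
        SmartQueueDemo q = new SmartQueueDemo();
        q.offer("JSESSIONID-1", new byte[]{1});
        q.offer("JSESSIONID-1", new byte[]{2}); // replaces, does not append
        q.offer("JSESSIONID-2", new byte[]{3});
        System.out.println(q.size()); // prints 2
    }
}
```

A LinkedHashMap keyed by session id gives exactly the behavior the comment describes: only the latest delta per session is replicated, which is why duplicate queueing "almost never happens" to cause extra traffic.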
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> ---------------------------------------------------------------------
>>>> To start a new topic, e-mail: users@tomcat.apache.org
>>>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>>>> For additional commands, e-mail: users-help@tomcat.apache.org
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
--
Filip Hanik
RE: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Dilan Kelanibandara <di...@beyondm.net>.
Hi Peter,
Thank you very much for your kind attention to my query. I will go ahead
with the default settings.
Best Regards,
Dilan.
-----Original Message-----
From: Peter Rossbach [mailto:pr@objektpark.de]
Sent: Sunday, June 18, 2006 9:47 PM
To: Tomcat Users List
Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
Hi,
I see no risk with the default membership config.
Peter
Am 18.06.2006 um 19:29 schrieb Dilan Kelanibandara:
> Hi Peter,
>
> No, no service is up and running on 45564. I only commented out the
> Membership element and restarted both servers. So far it is working
> fine. My doubt is whether this will have any effect on my server in
> the future.
>
> Can you please explain the risk, or is it OK to run the server with
> this configuration?
> Thanks and best regards,
> Dilan
>
> -----Original Message-----
> From: Peter Rossbach [mailto:pr@objektpark.de]
> Sent: Sunday, June 18, 2006 8:14 PM
> To: Tomcat Users List
> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
> OutOfMemoryError in
> starting up on AS4
>
> OK!
>
> As you comment the Membership service out, following default is used:
>
> McastService mService= new McastService();
> mService.setMcastAddr("228.0.0.4");
> mService.setMcastPort(8012);
> mService.setMcastFrequency(1000);
> mService.setMcastDropTime(30000);
> transferProperty("service",mService);
> setMembershipService(mService);
> }
>
> Have you started another service at 45564?
>
> Regards
>
>
>
> Am 18.06.2006 um 16:54 schrieb Dilan Kelanibandara:
>
>> Hi Peter,
>>
>> I was having the memory problem when the cluster manager tried to
>> multicast the request during Tomcat startup.
>> As a trial, I commented out the multicast (Membership) element of the
>> cluster configuration in server.xml and restarted both Tomcats.
>>
>> This is the multicast element which I commented.
>> ==============================
>> <!--
>> <Membership
>>
>> className="org.apache.catalina.cluster.mcast.McastService"
>> mcastAddr="228.0.0.4"
>> mcastPort="45564"
>> mcastFrequency="500"
>> mcastDropTime="3000"/>
>> -->
>>
>> ==============================
>>
>> Then Tomcat started without an OutOfMemoryError, and the replication
>> members are added to each other. I ran both servers with my
>> application for some time; it is working fine, and session
>> replication is happening as usual. Can you let me know whether I can
>> proceed with this setup, or does commenting the element out have any
>> effect on session replication?
>>
>> Can you kindly let me know?
>>
>> Thanks and best regards,
>> Dilan
>>
>>
>> -----Original Message-----
>> From: Peter Rossbach [mailto:pr@objektpark.de]
>> Sent: Sunday, June 18, 2006 9:50 AM
>> To: Tomcat Users List
>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>> OutOfMemoryError in
>> starting up on AS4
>>
>> Use more JVM options to analyse the memory usage.
>>
>> For faster memory allocation, work with:
>>
>> -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
>> -Xverbosegc
>>
>> Or better, use a memory profiler...
>>
>> But the membership service does not allocate much memory; very strange
>> effect :-(
>>
>> Peter
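Peter's suggestion to analyse the memory usage can be started without a full profiler. A minimal standalone sketch (not Tomcat code; the class name is made up) using the java.lang.management API to report heap consumption:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Report current heap usage of the running JVM. Dropped into the same JVM
// as Tomcat (e.g. via a diagnostic servlet), repeated samples show whether
// the heap is genuinely exhausted at startup or mis-sized.
public class MemoryStats {
    public static long usedHeapMb() {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        return heap.getUsed() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Heap used: " + usedHeapMb() + " MB");
    }
}
```

For serious analysis a real memory profiler is still the better tool, as Peter notes; this only gives a coarse first reading.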
>>
>>
>> Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
>>
>>> Hi Peter,
>>> I am using the default JVM parameters that come with Tomcat 5.5.17.
>>> The Tomcat server.xml file says tcpThreadCount is normally equal to
>>> the number of nodes (ie, 2 in this case); that is why I changed it
>>> to 2.
>>>
>>> I also tried increasing the JVM heap-size parameters in Tomcat to
>>> Min=1024m
>>> Max=1024m
>>> and tried 512m as well, but on both occasions the result is the
>>> same.
>>> Thank you for your kind attention.
>>> I would appreciate further clarification.
>>> Best regards,
>>> Dilan
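Since the same error appears with both 512m and 1024m heaps, one thing worth ruling out is the -Xms/-Xmx flags simply not reaching the JVM. A tiny standalone check (hypothetical helper, not part of Tomcat), launched with the same options passed to Tomcat, shows what the JVM actually received:

```java
// Print the maximum heap the JVM was granted; compare this with the -Xmx
// value you configured. A mismatch means the option is being set in the
// wrong place (e.g. not exported into CATALINA_OPTS).
public class HeapCheck {
    public static long maxHeapMb() {
        return Runtime.getRuntime().maxMemory() / (1024L * 1024L);
    }

    public static void main(String[] args) {
        System.out.println("Max heap reported by the JVM: " + maxHeapMb() + " MB");
    }
}
```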
>>> -----Original Message-----
>>> From: Peter Rossbach [mailto:pr@objektpark.de]
>>> Sent: Sunday, June 18, 2006 7:37 AM
>>> To: Tomcat Users List
>>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>>> OutOfMemoryError in
>>> starting up on AS4
>>>
>>> Hi,
>>>
>>> Which JVM memory parameters do you use?
>>> In pooled mode, use more receiver workers: set tcpThreadCount="6"!
>>> Do you really need the deployer? The deployer generates a large
>>> cluster message at every startup.
>>>
>>> Regards
>>> Peter
>>>
>>>
>>> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>>>
>>>>
>>>>
>>>> Hello ,
>>>>
>>>>
>>>>
>>>> I am getting OutOfMemoryError continuously when starting up two
>>>> cluster nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4). It had
>>>> been working fine for three weeks. This error occurred only once
>>>> before, and restarting Tomcat fixed it.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Following is a part of catalina.out relevant to that error for
>>>> node 1.
>>>>
>>>> ====================================================================
>>>>
>>>>
>>>>
>>>> INFO: Start ClusterSender at cluster
>>>> Catalina:type=Cluster,host=localhost
>>>>
>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>
>>>> Jun 17, 2006 8:44:15 PM
>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>
>>>> INFO: Sleeping for 2000 milliseconds to establish cluster
>>>> membership
>>>> Exception in thread "Cluster-MembershipReceiver"
>>>> java.lang.OutOfMemoryError:
>>>>
>>>>
>>>> Java heap space
>>>>
>>>> Jun 17, 2006 8:44:17 PM
>>>> org.apache.catalina.cluster.mcast.McastService
>>>>
>>>> registerMBean
>>>>
>>>> INFO: membership mbean registered
>>>>
>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>
>>>> Jun 17, 2006 8:44:17 PM
>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>
>>>> start
>>>>
>>>> INFO: Cluster FarmWarDeployer started.
>>>>
>>>> Jun 17, 2006 8:44:19 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>> with name
>>>> localhost Jun 17, 2006 8:44:19 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Starting clustering manager at /StockTradingServer Jun 17,
>>>> 2006
>>>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> getAllClusterSessions
>>>>
>>>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>>>> members
>>>> active in cluster group.
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>> node2 startup log is as follows
>>>>
>>>> ====================================================================
>>>>
>>>> INFO: Cluster is about to start
>>>>
>>>> Jun 17, 2006 8:53:00 PM
>>>>
>>>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>>>
>>>> INFO: Start ClusterSender at cluster
>>>> Catalina:type=Cluster,host=localhost
>>>>
>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>
>>>> Jun 17, 2006 8:53:00 PM
>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>
>>>> INFO: Sleeping for 2000 milliseconds to establish cluster
>>>> membership
>>>> Exception in thread "Cluster-MembershipReceiver"
>>>> java.lang.OutOfMemoryError:
>>>>
>>>>
>>>> Java heap space
>>>>
>>>> Jun 17, 2006 8:53:02 PM
>>>> org.apache.catalina.cluster.mcast.McastService
>>>>
>>>> registerMBean
>>>>
>>>> INFO: membership mbean registered
>>>>
>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>
>>>> Jun 17, 2006 8:53:02 PM
>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>
>>>> start
>>>>
>>>> INFO: Cluster FarmWarDeployer started.
>>>>
>>>> Jun 17, 2006 8:53:04 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>> with name
>>>> localhost Jun 17, 2006 8:53:04 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>> Anyway, my cluster was working fine for three weeks and then
>>>> started to give this error on startup of both nodes.
>>>>
>>>>
>>>>
>>>> I have an IBM HTTP Server with the jk connector for load balancing,
>>>> and that load is coming to my Tomcat cluster.
>>>>
>>>>
>>>>
>>>> Following is the server.xml file for both servers.
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>>
>>>>
>>>> <!-- Example Server Configuration File -->
>>>>
>>>> <!-- Note that component elements are nested corresponding to their
>>>>
>>>> parent-child relationships with each other -->
>>>>
>>>>
>>>>
>>>> <!-- A "Server" is a singleton element that represents the entire
>>>> JVM,
>>>>
>>>> which may contain one or more "Service" instances. The Server
>>>>
>>>> listens for a shutdown command on the indicated port.
>>>>
>>>>
>>>>
>>>> Note: A "Server" is not itself a "Container", so you may not
>>>>
>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>> level.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <Server port="8005" shutdown="SHUTDOWN">
>>>>
>>>>
>>>>
>>>> <!-- Comment these entries out to disable JMX MBeans support used
>>>> for the
>>>>
>>>> administration web application -->
>>>> <Listener className="org.apache.catalina.core.AprLifecycleListener" />
>>>> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>>> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>>> <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>>
>>>>
>>>>
>>>> <!-- Global JNDI resources -->
>>>>
>>>> <GlobalNamingResources>
>>>>
>>>>
>>>>
>>>> <!-- Test entry for demonstration purposes -->
>>>>
>>>> <Environment name="simpleValue" type="java.lang.Integer"
>>>> value="30"/>
>>>>
>>>>
>>>>
>>>> <!-- Editable user database that can also be used by
>>>>
>>>> UserDatabaseRealm to authenticate users -->
>>>>
>>>> <Resource name="UserDatabase" auth="Container"
>>>>
>>>> type="org.apache.catalina.UserDatabase"
>>>>
>>>> description="User database that can be updated and saved"
>>>>
>>>>
>>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>>
>>>> pathname="conf/tomcat-users.xml" />
>>>>
>>>>
>>>>
>>>> </GlobalNamingResources>
>>>>
>>>>
>>>>
>>>> <!-- A "Service" is a collection of one or more "Connectors" that
>>>> share
>>>>
>>>> a single "Container" (and therefore the web applications
>>>> visible
>>>>
>>>> within that Container). Normally, that Container is an
>>>> "Engine",
>>>>
>>>> but this is not required.
>>>>
>>>>
>>>>
>>>> Note: A "Service" is not itself a "Container", so you may
>>>> not
>>>>
>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>> level.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the Tomcat Stand-Alone Service -->
>>>> <Service name="Catalina">
>>>>
>>>>
>>>>
>>>> <!-- A "Connector" represents an endpoint by which requests are
>>>> received
>>>>
>>>> and responses are returned. Each Connector passes requests
>>>> on to
>>>> the
>>>>
>>>> associated "Container" (normally an Engine) for processing.
>>>>
>>>>
>>>>
>>>> By default, a non-SSL HTTP/1.1 Connector is established on
>>>> port
>>>> 8080.
>>>>
>>>> You can also enable an SSL HTTP/1.1 Connector on port
>>>> 8443 by
>>>>
>>>> following the instructions below and uncommenting the
>>>> second
>>>> Connector
>>>>
>>>> entry. SSL support requires the following steps (see the
>>>> SSL Config
>>>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>>
>>>> instructions):
>>>>
>>>> * If your JDK version is 1.3 or prior, download and install
>>>> JSSE 1.0.2
>>>> or
>>>>
>>>> later, and put the JAR files into "$JAVA_HOME/jre/lib/
>>>> ext".
>>>>
>>>> * Execute:
>>>>
>>>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>>> RSA
>>>>
>>>> (Windows)
>>>>
>>>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>>>> RSA (Unix)
>>>> with a password value of "changeit" for both the certificate and
>>>>
>>>> the keystore itself.
>>>>
>>>>
>>>>
>>>> By default, DNS lookups are enabled when a web application
>>>> calls
>>>>
>>>> request.getRemoteHost(). This can have an adverse
>>>> impact on
>>>>
>>>> performance, so you can disable it by setting the
>>>>
>>>> "enableLookups" attribute to "false". When DNS lookups are
>>>> disabled,
>>>>
>>>> request.getRemoteHost() will return the String version of
>>>> the
>>>>
>>>> IP address of the remote client.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>>>
>>>> <Connector port="8088" maxHttpHeaderSize="8192"
>>>>
>>>> maxThreads="300" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" redirectPort="8443"
>>>> acceptCount="100"
>>>>
>>>> connectionTimeout="20000"
>>>> disableUploadTimeout="true" />
>>>>
>>>> <!-- Note : To disable connection timeouts, set
>>>> connectionTimeout value
>>>>
>>>> to 0 -->
>>>>
>>>>
>>>>
>>>> <!-- Note : To use gzip compression you could set the
>>>> following
>>>> properties :
>>>>
>>>>
>>>>
>>>> compression="on"
>>>>
>>>> compressionMinSize="2048"
>>>>
>>>> noCompressionUserAgents="gozilla,
>>>> traviata"
>>>>
>>>> compressableMimeType="text/html,text/xml"
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>>
>>>> <!--
>>>>
>>>> <Connector port="8443" maxHttpHeaderSize="8192"
>>>>
>>>> maxThreads="150" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" disableUploadTimeout="true"
>>>>
>>>> acceptCount="100" scheme="https" secure="true"
>>>>
>>>> clientAuth="false" sslProtocol="TLS" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>>
>>>> <Connector port="8009"
>>>>
>>>> enableLookups="false" redirectPort="8443"
>>>> protocol="AJP/1.3"
>>>>
>>>> />
>>>>
>>>>
>>>>
>>>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>>
>>>> <!-- See proxy documentation for more information about using
>>>> this. -->
>>>>
>>>> <!--
>>>>
>>>> <Connector port="8082"
>>>>
>>>> maxThreads="150" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" acceptCount="100"
>>>>
>>>> connectionTimeout="20000"
>>>>
>>>> proxyPort="80" disableUploadTimeout="true" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- An Engine represents the entry point (within Catalina) that
>>>> processes
>>>>
>>>> every request. The Engine implementation for Tomcat stand
>>>> alone
>>>>
>>>> analyzes the HTTP headers included with the request, and
>>>> passes them
>>>> on to the appropriate Host (virtual host). -->
>>>>
>>>>
>>>>
>>>> <!-- You should set jvmRoute to support load-balancing via AJP
>>>> ie :
>>>>
>>>> <Engine name="Standalone" defaultHost="localhost"
>>>> jvmRoute="jvm1">
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the top level container in our container hierarchy
>>>> -->
>>>>
>>>> <Engine name="Catalina" defaultHost="localhost"
>>>> jvmRoute="node01">
>>>>
>>>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>>>
>>>>
>>>>
>>>> <!-- The request dumper valve dumps useful debugging
>>>> information about
>>>>
>>>> the request headers and cookies that were received, and
>>>> the
>>>> response
>>>>
>>>> headers and cookies that were sent, for all requests
>>>> received by
>>>>
>>>> this instance of Tomcat. If you care only about requests
>>>> to a
>>>>
>>>> particular virtual host, or a particular application,
>>>> nest this
>>>>
>>>> element inside the corresponding <Host> or <Context>
>>>> entry
>>>> instead.
>>>>
>>>>
>>>>
>>>> For a similar mechanism that is portable to all Servlet
>>>> 2.4
>>>>
>>>> containers, check out the "RequestDumperFilter" Filter in
>>>> the
>>>>
>>>> example application (the source for this filter may be
>>>> found in
>>>>
>>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/
>>>> filters").
>>>>
>>>>
>>>>
>>>> Request dumping is disabled by default. Uncomment the
>>>> following
>>>>
>>>> element to enable it. -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Because this Realm is here, an instance will be shared
>>>> globally
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- This Realm uses the UserDatabase configured in the global
>>>> JNDI
>>>>
>>>> resources under the key "UserDatabase". Any edits
>>>>
>>>> that are performed against this UserDatabase are
>>>> immediately
>>>>
>>>> available for use by the Realm. -->
>>>>
>>>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>>
>>>> resourceName="UserDatabase"/>
>>>>
>>>>
>>>>
>>>> <!-- Comment out the old realm but leave here for now in
>>>> case we
>>>>
>>>> need to go back quickly -->
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Replace the above Realm with one of the following to get
>>>> a Realm
>>>>
>>>> stored in a database and accessed via JDBC -->
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="org.gjt.mm.mysql.Driver"
>>>>
>>>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>>>
>>>> connectionName="kutila" connectionPassword="kutila"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="oracle.jdbc.driver.OracleDriver"
>>>>
>>>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>>
>>>> connectionName="scott" connectionPassword="tiger"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>>
>>>> connectionURL="jdbc:odbc:CATALINA"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the default virtual host
>>>>
>>>> Note: XML Schema validation will not work with Xerces
>>>> 2.2.
>>>>
>>>> -->
>>>>
>>>> <Host name="localhost" appBase="webapps"
>>>>
>>>> unpackWARs="true" autoDeploy="true"
>>>>
>>>> xmlValidation="false" xmlNamespaceAware="false">
>>>>
>>>>
>>>>
>>>> <!-- Defines a cluster for this node,
>>>>
>>>> By defining this element, means that every manager
>>>> will be
>>>> changed.
>>>>
>>>> So when running a cluster, make sure that you only have
>>>> webapps in there that need to be clustered, and remove the other
>>>> ones.
>>>>
>>>> A cluster has the following parameters:
>>>>
>>>>
>>>>
>>>> className = the fully qualified name of the cluster
>>>> class
>>>>
>>>>
>>>>
>>>> clusterName = a descriptive name for your cluster,
>>>> can be
>>>> anything
>>>>
>>>>
>>>>
>>>> mcastAddr = the multicast address, has to be the same
>>>> for all
>>>> the nodes
>>>>
>>>>
>>>>
>>>> mcastPort = the multicast port, has to be the same for
>>>> all the
>>>> nodes
>>>>
>>>>
>>>>
>>>> mcastBindAddress = bind the multicast socket to a
>>>> specific
>>>> address
>>>>
>>>>
>>>>
>>>> mcastTTL = the multicast TTL if you want to limit your
>>>> broadcast
>>>>
>>>> mcastSoTimeout = the multicast readtimeout
>>>>
>>>>
>>>>
>>>> mcastFrequency = the number of milliseconds in between
>>>> sending a
>>>> "I'm alive" heartbeat
>>>>
>>>>
>>>>
>>>> mcastDropTime = the number of milliseconds before a
>>>> node is
>>>> considered "dead" if no heartbeat is received
>>>>
>>>>
>>>>
>>>> tcpThreadCount = the number of threads to handle
>>>> incoming
>>>> replication requests, optimal would be the same amount of threads
>>>> as nodes
>>>>
>>>>
>>>>
>>>> tcpListenAddress = the listen address (bind address)
>>>> for TCP
>>>> cluster request on this host,
>>>>
>>>> in case of multiple ethernet cards.
>>>>
>>>> auto means that address becomes
>>>>
>>>> InetAddress.getLocalHost().getHostAddress()
>>>>
>>>>
>>>>
>>>> tcpListenPort = the tcp listen port
>>>>
>>>>
>>>>
>>>> tcpSelectorTimeout = the timeout (ms) for the
>>>> Selector.select()
>>>> method in case the OS
>>>>
>>>> has a wakeup bug in java.nio. Set
>>>> to 0 for
>>>> no timeout
>>>>
>>>>
>>>>
>>>> printToScreen = true means that managers will also
>>>> print to
>>>> std.out
>>>>
>>>>
>>>>
>>>> expireSessionsOnShutdown = true means that sessions are
>>>> expired when this node shuts down, and the expiry is
>>>> propagated to the other nodes
>>>>
>>>>
>>>>
>>>> useDirtyFlag = true means that we only replicate a
>>>> session after
>>>> setAttribute,removeAttribute has been called.
>>>>
>>>> false means to replicate the session
>>>> after each
>>>> request.
>>>>
>>>> false means that replication would work
>>>> for the
>>>> following piece of code: (only for SimpleTcpReplicationManager)
>>>>
>>>> <%
>>>>
>>>> HashMap map =
>>>> (HashMap)session.getAttribute("map");
>>>>
>>>> map.put("key","value");
>>>>
>>>> %>
>>>>
>>>> replicationMode = can be either 'pooled',
>>>> 'synchronous' or
>>>> 'asynchronous'.
>>>>
>>>> * Pooled means that the replication
>>>> happens using several sockets in a synchronous way, ie, the data
>>>> gets replicated, then the request returns. This is the same as the
>>>> 'synchronous' setting except it uses a pool of sockets, hence it is
>>>> multithreaded. This is the fastest and safest configuration. To use
>>>> this, also increase the number of TCP threads that you have dealing
>>>> with replication.
>>>>
>>>> * Synchronous means that the thread
>>>> that executes the request is also the thread that replicates the
>>>> data to the other nodes, and will not return until all nodes have
>>>> received the information.
>>>>
>>>> * Asynchronous means that there is a
>>>> specific
>>>> 'sender' thread for each cluster node,
>>>>
>>>> so the request thread will queue the
>>>> replication request into a "smart" queue,
>>>>
>>>> and then return to the client.
>>>>
>>>> The "smart" queue is a queue where
>>>> when a
>>>> session is added to the queue, and the same session
>>>>
>>>> already exists in the queue from a
>>>> previous
>>>> request, that session will be replaced
>>>>
>>>> in the queue instead of replicating
>>>> two
>>>> requests. This almost never happens, unless there is a
>>>>
>>>> large network delay.
>>>>
>>>> -->
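>>>> The "smart" queue described in the comment above can be sketched as a
>>>> coalescing map keyed by session id. The following is a hypothetical
>>>> illustration only; SmartQueue and its method names are invented here
>>>> and are not the actual Tomcat classes.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Coalescing queue sketch: enqueueing a session that is already pending
// replaces the stale delta instead of replicating the same session twice.
class SmartQueue {
    private final Map<String, byte[]> pending = new LinkedHashMap<>();

    // Request thread: enqueue (or replace) a session's replication delta.
    synchronized void add(String sessionId, byte[] delta) {
        pending.put(sessionId, delta); // same key -> old entry is replaced
    }

    // Sender thread: take the oldest pending entry, or null if empty.
    synchronized Map.Entry<String, byte[]> poll() {
        Iterator<Map.Entry<String, byte[]>> it = pending.entrySet().iterator();
        if (!it.hasNext()) return null;
        Map.Entry<String, byte[]> e = it.next();
        Map.Entry<String, byte[]> copy = Map.entry(e.getKey(), e.getValue());
        it.remove();
        return copy;
    }

    synchronized int size() { return pending.size(); }
}
```

>>>> A LinkedHashMap keeps insertion order while letting a re-queued session
>>>> overwrite its stale entry in place, which is exactly the replacement
>>>> behaviour the comment describes for asynchronous mode.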
>>>>
>>>> <!--
>>>>
>>>> When configuring for clustering, you also add in a valve
>>>> to catch
>>>> all the requests
>>>>
>>>> coming in, at the end of the request, the session may or
>>>> may not
>>>> be replicated.
>>>>
>>>> A session is replicated if and only if all the
>>>> conditions are
>>>>
>>>> met:
>>>>
>>>> 1. useDirtyFlag is true or setAttribute or
>>>> removeAttribute has
>>>> been called AND
>>>>
>>>> 2. a session exists (has been created)
>>>>
>>>> 3. the request is not trapped by the "filter" attribute
>>>>
>>>>
>>>>
>>>> The filter attribute is to filter out requests that
>>>> could not
>>>> modify the session,
>>>>
>>>> hence we don't replicate the session after the end of
>>>> this
>>>> request.
>>>>
>>>> The filter is negative, ie, anything you put in the
>>>> filter, you
>>>> mean to filter out,
>>>>
>>>> ie, no replication will be done on requests that match
>>>> one of the
>>>> filters.
>>>>
>>>> The filter attribute is delimited by ;, so you can't
>>>> escape out ;
>>>> even if you wanted to.
>>>>
>>>>
>>>>
>>>> filter=".*\.gif;.*\.js;" means that we will not
>>>> replicate the
>>>> session after requests with the URI
>>>>
>>>> ending with .gif and .js are intercepted.
>>>>
>>>>
>>>>
>>>> The deployer element can be used to deploy apps cluster
>>>> wide.
>>>>
>>>> Currently the deployment only deploys/undeploys to
>>>> working
>>>> members in the cluster
>>>>
>>>> so no WARs are copied upon startup of a broken node.
>>>>
>>>> The deployer watches a directory (watchDir) for WAR
>>>> files when
>>>> watchEnabled="true"
>>>>
>>>> When a new war file is added the war gets deployed to
>>>> the local
>>>> instance,
>>>>
>>>> and then deployed to the other instances in the cluster.
>>>>
>>>> When a war file is deleted from the watchDir the war is
>>>> undeployed locally
>>>>
>>>> and cluster wide
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <Cluster
>>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>>
>>>>
>>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>>
>>>> expireSessionsOnShutdown="true"
>>>>
>>>> useDirtyFlag="true"
>>>>
>>>> notifyListenersOnReplication="true">
>>>>
>>>>
>>>>
>>>> <Membership
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>>
>>>> mcastAddr="228.0.0.4"
>>>>
>>>> mcastPort="45564"
>>>>
>>>> mcastFrequency="500"
>>>>
>>>> mcastDropTime="3000"/>
>>>>
>>>>
>>>>
>>>> <Receiver
>>>>
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>>
>>>> tcpListenAddress="auto"
>>>>
>>>> tcpListenPort="4001"
>>>>
>>>> tcpSelectorTimeout="100"
>>>>
>>>> tcpThreadCount="2"/>
>>>>
>>>>
>>>>
>>>> <Sender
>>>>
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>>
>>>> replicationMode="pooled"
>>>>
>>>> ackTimeout="15000"
>>>>
>>>> waitForAck="true"/>
>>>>
>>>>
>>>>
>>>> <Valve
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>>
>>>>
>>>>
>>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>>
>>>>
>>>>
>>>> <Deployer
>>>>
>>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>>
>>>> tempDir="/tmp/war-temp/"
>>>>
>>>> deployDir="/tmp/war-deploy/"
>>>>
>>>> watchDir="/tmp/war-listen/"
>>>>
>>>> watchEnabled="false"/>
>>>>
>>>>
>>>>
>>>> <ClusterListener
>>>>
>>>> className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
>>>>
>>>> </Cluster>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <!-- Normally, users must authenticate themselves to each
>>>> web app
>>>>
>>>> individually. Uncomment the following entry if you
>>>> would like
>>>>
>>>> a user to be authenticated the first time they
>>>> encounter a
>>>>
>>>> resource protected by a security constraint, and then
>>>> have that
>>>>
>>>> user identity maintained across *all* web applications
>>>> contained
>>>> in this virtual host. -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Access log processes all requests for this virtual
>>>> host. By
>>>>
>>>> default, log files are created in the "logs" directory
>>>> relative
>>>> to
>>>>
>>>> $CATALINA_HOME. If you wish, you can specify a
>>>> different
>>>>
>>>> directory with the "directory" attribute. Specify
>>>> either a
>>>> relative
>>>>
>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>> directory.
>>>>
>>>> -->
>>>>
>>>> <!--
>>>>
>>>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>>
>>>> directory="logs" prefix="localhost_access_log."
>>>>
>>>> suffix=".txt"
>>>>
>>>> pattern="common" resolveHosts="false"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Access log processes all requests for this virtual
>>>> host. By
>>>>
>>>> default, log files are created in the "logs" directory
>>>> relative
>>>> to
>>>>
>>>> $CATALINA_HOME. If you wish, you can specify a
>>>> different
>>>>
>>>> directory with the "directory" attribute. Specify
>>>> either a
>>>> relative
>>>>
>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>> directory.
>>>>
>>>> This access log implementation is optimized for maximum
>>>> performance,
>>>>
>>>> but is hardcoded to support only the "common" and
>>>> "combined"
>>>>
>>>> patterns.
>>>>
>>>> -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>>
>>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>>
>>>> directory="logs" prefix="localhost_access_log."
>>>>
>>>> suffix=".txt"
>>>>
>>>> pattern="common" resolveHosts="false"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> </Host>
>>>>
>>>>
>>>>
>>>> </Engine>
>>>>
>>>>
>>>>
>>>> </Service>
>>>>
>>>>
>>>>
>>>> </Server>
>>>>
>>>>
>>>>
>>>> ===========================================================
>>>>
>>>>
>>>>
>>>> I appreciate your prompt help on this, since this is a very critical
>>>> application that is currently live. Please email me with any
>>>> clarifications.
>>>>
>>>>
>>>>
>>>> Thanks and best regards,
>>>>
>>>> Dilan
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To start a new topic, e-mail: users@tomcat.apache.org
>>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>>> For additional commands, e-mail: users-help@tomcat.apache.org
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>
>
Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Peter Rossbach <pr...@objektpark.de>.
Hi,
I see no risk with the default membership config.
Peter
Am 18.06.2006 um 19:29 schrieb Dilan Kelanibandara:
> Hi Peter,
>
> No. No service is up and running on 45564. I only commented out the
> Membership element and restarted both servers. So far it is working
> fine, but I have a doubt whether there will be any effect on my server
> in the future.
>
> Can you please explain the risk, or is it OK to run the server with
> this configuration?
> Thanks and best regards,
> Dilan
>
> -----Original Message-----
> From: Peter Rossbach [mailto:pr@objektpark.de]
> Sent: Sunday, June 18, 2006 8:14 PM
> To: Tomcat Users List
> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
> OutOfMemoryError in
> starting up on AS4
>
> OK!
>
> As you commented the Membership service out, the following default is used:
>
> McastService mService= new McastService();
> mService.setMcastAddr("228.0.0.4");
> mService.setMcastPort(8012);
> mService.setMcastFrequency(1000);
> mService.setMcastDropTime(30000);
> transferProperty("service",mService);
> setMembershipService(mService);
> }
>
> Have you started another service at 45564?
>
> Regards
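One quick way to check Peter's question, whether another process already holds the UDP port, is simply to try binding it. The following is a minimal sketch; PortProbe and isUdpPortFree are hypothetical names, not part of Tomcat.

```java
import java.net.DatagramSocket;
import java.net.SocketException;

// Checks whether some other process already holds a UDP port on this
// host (e.g. the cluster's multicast port 45564).
class PortProbe {
    // Returns true if we could bind the UDP port, i.e. nothing else holds it.
    static boolean isUdpPortFree(int port) {
        try (DatagramSocket probe = new DatagramSocket(port)) {
            return true;            // bind succeeded
        } catch (SocketException e) {
            return false;           // bind failed: port already in use
        }
    }

    public static void main(String[] args) {
        int port = 45564;
        System.out.println("UDP port " + port
                + (isUdpPortFree(port) ? " is free" : " is in use"));
    }
}
```

Note that this only detects an exclusive bind on the port; multicast listeners that set SO_REUSEADDR can share a port, so a free-looking port does not rule out another multicast member.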
>
>
>
> Am 18.06.2006 um 16:54 schrieb Dilan Kelanibandara:
>
>> Hi Peter,
>>
>> I was having the memory problem when the cluster manager tried to
>> multicast the request at Tomcat startup.
>> As a trial, I commented out the Membership element of the cluster
>> configuration in server.xml and restarted both Tomcats.
>>
>> This is the Membership element which I commented out.
>> ==============================
>> <!--
>> <Membership
>>
>> className="org.apache.catalina.cluster.mcast.McastService"
>> mcastAddr="228.0.0.4"
>> mcastPort="45564"
>> mcastFrequency="500"
>> mcastDropTime="3000"/>
>> -->
>>
>> ==============================
>>
>> Then Tomcat started without an OutOfMemoryError, and the replication
>> members were added to each other. I ran both servers with my
>> application for some time and it is working fine; session replication
>> is happening as usual. Can you let me know whether I can proceed with
>> this setup, or whether commenting out the element has any effect on
>> session replication?
>>
>> Can you kindly let me know?
>>
>> Thanks and best regards,
>> Dilan
>>
>>
>> -----Original Message-----
>> From: Peter Rossbach [mailto:pr@objektpark.de]
>> Sent: Sunday, June 18, 2006 9:50 AM
>> To: Tomcat Users List
>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>> OutOfMemoryError in
>> starting up on AS4
>>
>> Use more JVM options to analyse the memory usage.
>>
>> Work with faster memory allocation:
>>
>> -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
>> -Xverbosegc
>>
>> Or, better, use a memory profiler...
>>
>> But the membership service does not allocate much memory, so this is a
>> very strange effect :-(
>>
>> Peter
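Short of attaching a full profiler, a heap snapshot can also be taken from inside the JVM with the standard java.lang.management API. A minimal sketch follows; HeapSnapshot is a hypothetical helper class, not part of Tomcat.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal in-JVM heap snapshot using the standard java.lang.management
// API, as a lightweight alternative to a full profiler.
class HeapSnapshot {
    static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage h = heap();
        // Values are in bytes; getMax() may be -1 if no limit is defined.
        System.out.println("heap used=" + h.getUsed()
                + " committed=" + h.getCommitted()
                + " max=" + h.getMax());
    }
}
```

Logging such a snapshot periodically (or just before cluster startup) would show whether the heap is genuinely exhausted when the "Cluster-MembershipReceiver" thread dies.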
>>
>>
>> Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
>>
>>> Hi Peter,
>>> I am using the default JVM parameters that come with Tomcat 5.5.17.
>>> In the Tomcat server.xml file it says tcpThreadCount is normally
>>> equal to the number of nodes (i.e. 2 in this case); that is why I
>>> changed it to 2.
>>>
>>> I tried increasing the JVM heap-size parameters in Tomcat to
>>> Min=1024m Max=1024m, and also tried 512m, but on both occasions the
>>> result is the same.
>>> Thank you for your kind attention; I would appreciate further
>>> clarification.
>>> Best regards,
>>> Dilan
>>> -----Original Message-----
>>> From: Peter Rossbach [mailto:pr@objektpark.de]
>>> Sent: Sunday, June 18, 2006 7:37 AM
>>> To: Tomcat Users List
>>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>>> OutOfMemoryError in
>>> starting up on AS4
>>>
>>> Hi,
>>>
>>> Which JVM memory parameters do you use?
>>> In pooled mode, use more receiver workers: set tcpThreadCount="6"!
>>> Do you really need the deployer? The deployer generates a large
>>> cluster message at every startup.
>>>
>>> Regards
>>> Peter
>>>
>>>
>>> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>>>
>>>>
>>>>
>>>> Hello ,
>>>>
>>>>
>>>>
>>>> I am getting an OutOfMemoryError continuously when starting up two
>>>> cluster nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4).
>>>> Anyway, it was working fine for three weeks. This error occurred
>>>> only once before, and when I restarted Tomcat, it worked.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Following is the part of catalina.out relevant to that error for
>>>> node 1.
>>>>
>>>> ====================================================================
>>>>
>>>>
>>>>
>>>> INFO: Start ClusterSender at cluster
>>>> Catalina:type=Cluster,host=localhost
>>>>
>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>
>>>> Jun 17, 2006 8:44:15 PM
>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>
>>>> INFO: Sleeping for 2000 milliseconds to establish cluster
>>>> membership
>>>> Exception in thread "Cluster-MembershipReceiver"
>>>> java.lang.OutOfMemoryError:
>>>>
>>>>
>>>> Java heap space
>>>>
>>>> Jun 17, 2006 8:44:17 PM
>>>> org.apache.catalina.cluster.mcast.McastService
>>>>
>>>> registerMBean
>>>>
>>>> INFO: membership mbean registered
>>>>
>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>
>>>> Jun 17, 2006 8:44:17 PM
>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>
>>>> start
>>>>
>>>> INFO: Cluster FarmWarDeployer started.
>>>>
>>>> Jun 17, 2006 8:44:19 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>> with name
>>>> localhost Jun 17, 2006 8:44:19 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Starting clustering manager at /StockTradingServer Jun 17,
>>>> 2006
>>>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> getAllClusterSessions
>>>>
>>>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>>>> members
>>>> active in cluster group.
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>> node2 startup log is as follows
>>>>
>>>> ====================================================================
>>>>
>>>> INFO: Cluster is about to start
>>>>
>>>> Jun 17, 2006 8:53:00 PM
>>>>
>>>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>>>
>>>> INFO: Start ClusterSender at cluster
>>>> Catalina:type=Cluster,host=localhost
>>>>
>>>> with name Catalina:type=ClusterSender,host=localhost
>>>>
>>>> Jun 17, 2006 8:53:00 PM
>>>> org.apache.catalina.cluster.mcast.McastService start
>>>>
>>>> INFO: Sleeping for 2000 milliseconds to establish cluster
>>>> membership
>>>> Exception in thread "Cluster-MembershipReceiver"
>>>> java.lang.OutOfMemoryError:
>>>>
>>>>
>>>> Java heap space
>>>>
>>>> Jun 17, 2006 8:53:02 PM
>>>> org.apache.catalina.cluster.mcast.McastService
>>>>
>>>> registerMBean
>>>>
>>>> INFO: membership mbean registered
>>>>
>>>> (Catalina:type=ClusterMembership,host=localhost)
>>>>
>>>> Jun 17, 2006 8:53:02 PM
>>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>>
>>>> start
>>>>
>>>> INFO: Cluster FarmWarDeployer started.
>>>>
>>>> Jun 17, 2006 8:53:04 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>> INFO: Register manager /StockTradingServer to cluster element Host
>>>> with name
>>>> localhost Jun 17, 2006 8:53:04 PM
>>>> org.apache.catalina.cluster.session.DeltaManager
>>>>
>>>> start
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>> Anyway, my cluster was working fine for three weeks and then started
>>>> to give this error on startup of both nodes.
>>>>
>>>>
>>>>
>>>> I have an IBM HTTP Server with the JK connector for load balancing,
>>>> and that load is coming to my Tomcat cluster.
>>>>
>>>>
>>>>
>>>> Following is the server.xml file for both servers.
>>>>
>>>>
>>>>
>>>> ====================================================================
>>>>
>>>>
>>>>
>>>> <!-- Example Server Configuration File -->
>>>>
>>>> <!-- Note that component elements are nested corresponding to their
>>>>
>>>> parent-child relationships with each other -->
>>>>
>>>>
>>>>
>>>> <!-- A "Server" is a singleton element that represents the entire
>>>> JVM,
>>>>
>>>> which may contain one or more "Service" instances. The Server
>>>>
>>>> listens for a shutdown command on the indicated port.
>>>>
>>>>
>>>>
>>>> Note: A "Server" is not itself a "Container", so you may not
>>>>
>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>> level.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <Server port="8005" shutdown="SHUTDOWN">
>>>>
>>>>
>>>>
>>>> <!-- Comment these entries out to disable JMX MBeans support used
>>>> for the
>>>>
>>>> administration web application --> <Listener
>>>> className="org.apache.catalina.core.AprLifecycleListener" />
>>>> <Listener
>>>> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>>> <Listener
>>>> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>>> <Listener
>>>> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>>
>>>>
>>>>
>>>> <!-- Global JNDI resources -->
>>>>
>>>> <GlobalNamingResources>
>>>>
>>>>
>>>>
>>>> <!-- Test entry for demonstration purposes -->
>>>>
>>>> <Environment name="simpleValue" type="java.lang.Integer"
>>>> value="30"/>
>>>>
>>>>
>>>>
>>>> <!-- Editable user database that can also be used by
>>>>
>>>> UserDatabaseRealm to authenticate users -->
>>>>
>>>> <Resource name="UserDatabase" auth="Container"
>>>>
>>>> type="org.apache.catalina.UserDatabase"
>>>>
>>>> description="User database that can be updated and saved"
>>>>
>>>>
>>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>>
>>>> pathname="conf/tomcat-users.xml" />
>>>>
>>>>
>>>>
>>>> </GlobalNamingResources>
>>>>
>>>>
>>>>
>>>> <!-- A "Service" is a collection of one or more "Connectors" that
>>>> share
>>>>
>>>> a single "Container" (and therefore the web applications
>>>> visible
>>>>
>>>> within that Container). Normally, that Container is an
>>>> "Engine",
>>>>
>>>> but this is not required.
>>>>
>>>>
>>>>
>>>> Note: A "Service" is not itself a "Container", so you may
>>>> not
>>>>
>>>> define subcomponents such as "Valves" or "Loggers" at this
>>>> level.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the Tomcat Stand-Alone Service --> <Service
>>>> name="Catalina">
>>>>
>>>>
>>>>
>>>> <!-- A "Connector" represents an endpoint by which requests are
>>>> received
>>>>
>>>> and responses are returned. Each Connector passes requests
>>>> on to
>>>> the
>>>>
>>>> associated "Container" (normally an Engine) for processing.
>>>>
>>>>
>>>>
>>>> By default, a non-SSL HTTP/1.1 Connector is established on
>>>> port
>>>> 8080.
>>>>
>>>> You can also enable an SSL HTTP/1.1 Connector on port
>>>> 8443 by
>>>>
>>>> following the instructions below and uncommenting the
>>>> second
>>>> Connector
>>>>
>>>> entry. SSL support requires the following steps (see the
>>>> SSL Config
>>>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>>
>>>> instructions):
>>>>
>>>> * If your JDK version 1.3 or prior, download and install
>>>> JSSE 1.0.2
>>>> or
>>>>
>>>> later, and put the JAR files into "$JAVA_HOME/jre/lib/
>>>> ext".
>>>>
>>>> * Execute:
>>>>
>>>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>>> RSA
>>>>
>>>> (Windows)
>>>>
>>>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>>>> RSA (Unix)
>>>> with a password value of "changeit" for both the certificate and
>>>>
>>>> the keystore itself.
>>>>
>>>>
>>>>
>>>> By default, DNS lookups are enabled when a web application
>>>> calls
>>>>
>>>> request.getRemoteHost(). This can have an adverse
>>>> impact on
>>>>
>>>> performance, so you can disable it by setting the
>>>>
>>>> "enableLookups" attribute to "false". When DNS lookups are
>>>> disabled,
>>>>
>>>> request.getRemoteHost() will return the String version of
>>>> the
>>>>
>>>> IP address of the remote client.
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>>>
>>>> <Connector port="8088" maxHttpHeaderSize="8192"
>>>>
>>>> maxThreads="300" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" redirectPort="8443"
>>>> acceptCount="100"
>>>>
>>>> connectionTimeout="20000"
>>>> disableUploadTimeout="true" />
>>>>
>>>> <!-- Note : To disable connection timeouts, set
>>>> connectionTimeout value
>>>>
>>>> to 0 -->
>>>>
>>>>
>>>>
>>>> <!-- Note : To use gzip compression you could set the
>>>> following
>>>> properties :
>>>>
>>>>
>>>>
>>>> compression="on"
>>>>
>>>> compressionMinSize="2048"
>>>>
>>>> noCompressionUserAgents="gozilla,
>>>> traviata"
>>>>
>>>> compressableMimeType="text/html,text/xml"
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>>
>>>> <!--
>>>>
>>>> <Connector port="8443" maxHttpHeaderSize="8192"
>>>>
>>>> maxThreads="150" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" disableUploadTimeout="true"
>>>>
>>>> acceptCount="100" scheme="https" secure="true"
>>>>
>>>> clientAuth="false" sslProtocol="TLS" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>>
>>>> <Connector port="8009"
>>>>
>>>> enableLookups="false" redirectPort="8443"
>>>> protocol="AJP/1.3"
>>>>
>>>> />
>>>>
>>>>
>>>>
>>>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>>
>>>> <!-- See proxy documentation for more information about using
>>>> this. -->
>>>>
>>>> <!--
>>>>
>>>> <Connector port="8082"
>>>>
>>>> maxThreads="150" minSpareThreads="25"
>>>> maxSpareThreads="75"
>>>>
>>>> enableLookups="false" acceptCount="100"
>>>>
>>>> connectionTimeout="20000"
>>>>
>>>> proxyPort="80" disableUploadTimeout="true" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- An Engine represents the entry point (within Catalina) that
>>>> processes
>>>>
>>>> every request. The Engine implementation for Tomcat stand
>>>> alone
>>>>
>>>> analyzes the HTTP headers included with the request, and
>>>> passes them
>>>> on to the appropriate Host (virtual host). -->
>>>>
>>>>
>>>>
>>>> <!-- You should set jvmRoute to support load-balancing via AJP
>>>> ie :
>>>>
>>>> <Engine name="Standalone" defaultHost="localhost"
>>>> jvmRoute="jvm1">
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the top level container in our container hierarchy
>>>> -->
>>>>
>>>> <Engine name="Catalina" defaultHost="localhost"
>>>> jvmRoute="node01">
>>>>
>>>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>>>
>>>>
>>>>
>>>> <!-- The request dumper valve dumps useful debugging
>>>> information about
>>>>
>>>> the request headers and cookies that were received, and
>>>> the
>>>> response
>>>>
>>>> headers and cookies that were sent, for all requests
>>>> received by
>>>>
>>>> this instance of Tomcat. If you care only about requests
>>>> to a
>>>>
>>>> particular virtual host, or a particular application,
>>>> nest this
>>>>
>>>> element inside the corresponding <Host> or <Context>
>>>> entry
>>>> instead.
>>>>
>>>>
>>>>
>>>> For a similar mechanism that is portable to all Servlet
>>>> 2.4
>>>>
>>>> containers, check out the "RequestDumperFilter" Filter in
>>>> the
>>>>
>>>> example application (the source for this filter may be
>>>> found in
>>>>
>>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/
>>>> filters").
>>>>
>>>>
>>>>
>>>> Request dumping is disabled by default. Uncomment the
>>>> following
>>>>
>>>> element to enable it. -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Because this Realm is here, an instance will be shared
>>>> globally
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- This Realm uses the UserDatabase configured in the global
>>>> JNDI
>>>>
>>>> resources under the key "UserDatabase". Any edits
>>>>
>>>> that are performed against this UserDatabase are
>>>> immediately
>>>>
>>>> available for use by the Realm. -->
>>>>
>>>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>>
>>>> resourceName="UserDatabase"/>
>>>>
>>>>
>>>>
>>>> <!-- Comment out the old realm but leave here for now in
>>>> case we
>>>>
>>>> need to go back quickly -->
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Replace the above Realm with one of the following to get
>>>> a Realm
>>>>
>>>> stored in a database and accessed via JDBC -->
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="org.gjt.mm.mysql.Driver"
>>>>
>>>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>>>
>>>> connectionName="kutila" connectionPassword="kutila"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="oracle.jdbc.driver.OracleDriver"
>>>>
>>>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>>
>>>> connectionName="scott" connectionPassword="tiger"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!--
>>>>
>>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>>
>>>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>>
>>>> connectionURL="jdbc:odbc:CATALINA"
>>>>
>>>> userTable="users" userNameCol="user_name"
>>>>
>>>> userCredCol="user_pass"
>>>>
>>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Define the default virtual host
>>>>
>>>> Note: XML Schema validation will not work with Xerces
>>>> 2.2.
>>>>
>>>> -->
>>>>
>>>> <Host name="localhost" appBase="webapps"
>>>>
>>>> unpackWARs="true" autoDeploy="true"
>>>>
>>>> xmlValidation="false" xmlNamespaceAware="false">
>>>>
>>>>
>>>>
>>>> <!-- Defines a cluster for this node,
>>>>
>>>> By defining this element, means that every manager
>>>> will be
>>>> changed.
>>>>
>>>> So when running a cluster, only make sure that you have
>>>> webapps
>>>> in there
>>>>
>>>> that need to be clustered and remove the other ones.
>>>>
>>>> A cluster has the following parameters:
>>>>
>>>>
>>>>
>>>> className = the fully qualified name of the cluster
>>>> class
>>>>
>>>>
>>>>
>>>> clusterName = a descriptive name for your cluster,
>>>> can be
>>>> anything
>>>>
>>>>
>>>>
>>>> mcastAddr = the multicast address, has to be the same
>>>> for all
>>>> the nodes
>>>>
>>>>
>>>>
>>>> mcastPort = the multicast port, has to be the same for
>>>> all the
>>>> nodes
>>>>
>>>>
>>>>
>>>> mcastBindAddress = bind the multicast socket to a
>>>> specific
>>>> address
>>>>
>>>>
>>>>
>>>> mcastTTL = the multicast TTL if you want to limit your
>>>> broadcast
>>>>
>>>> mcastSoTimeout = the multicast readtimeout
>>>>
>>>>
>>>>
>>>> mcastFrequency = the number of milliseconds in between
>>>> sending a
>>>> "I'm alive" heartbeat
>>>>
>>>>
>>>>
>>>> mcastDropTime = the number of milliseconds before a
>>>> node is
>>>> considered "dead" if no heartbeat is received
>>>>
>>>>
>>>>
>>>> tcpThreadCount = the number of threads to handle
>>>> incoming
>>>> replication requests, optimal would be the same amount of threads
>>>> as nodes
>>>>
>>>>
>>>>
>>>> tcpListenAddress = the listen address (bind address)
>>>> for TCP
>>>> cluster request on this host,
>>>>
>>>> in case of multiple ethernet cards.
>>>>
>>>> auto means that address becomes
>>>>
>>>> InetAddress.getLocalHost
>>>> ().getHostAddress()
>>>>
>>>>
>>>>
>>>> tcpListenPort = the tcp listen port
>>>>
>>>>
>>>>
>>>> tcpSelectorTimeout = the timeout (ms) for the
>>>> Selector.select()
>>>> method in case the OS
>>>>
>>>> has a wakeup bug in java.nio. Set
>>>> to 0 for
>>>> no timeout
>>>>
>>>>
>>>>
>>>> printToScreen = true means that managers will also
>>>> print to
>>>> std.out
>>>>
>>>>
>>>>
>>>> expireSessionsOnShutdown = true means that sessions will be
>>>> expired when this node shuts down
>>>>
>>>>
>>>>
>>>> useDirtyFlag = true means that we only replicate a
>>>> session after
>>>> setAttribute,removeAttribute has been called.
>>>>
>>>> false means to replicate the session
>>>> after each
>>>> request.
>>>>
>>>> false means that replication would work
>>>> for the
>>>> following piece of code: (only for SimpleTcpReplicationManager)
>>>>
>>>> <%
>>>>
>>>> HashMap map =
>>>> (HashMap)session.getAttribute("map");
>>>>
>>>> map.put("key","value");
>>>>
>>>> %>
>>>>
>>>> replicationMode = can be either 'pooled',
>>>> 'synchronous' or
>>>> 'asynchronous'.
>>>>
>>>> * Pooled means that the replication
>>>> happens
>>>> using several sockets in a synchronous way. Ie, the data gets
>>>> replicated,
>>>> then the request returns. This is the same as the 'synchronous'
>>>> setting
>>>> except it uses a pool of sockets, hence it is multithreaded. This
>>>> is the
>>>> fastest and safest configuration. To use this, also increase the number
>>>> of tcp
>>>> threads that you have dealing with replication.
>>>>
>>>> * Synchronous means that the thread
>>>> that
>>>> executes the request, is also the
>>>>
>>>> thread that replicates the data to the
>>>> other
>>>> nodes, and will not return until all
>>>>
>>>> nodes have received the information.
>>>>
>>>> * Asynchronous means that there is a
>>>> specific
>>>> 'sender' thread for each cluster node,
>>>>
>>>> so the request thread will queue the
>>>> replication request into a "smart" queue,
>>>>
>>>> and then return to the client.
>>>>
>>>> The "smart" queue is a queue where
>>>> when a
>>>> session is added to the queue, and the same session
>>>>
>>>> already exists in the queue from a
>>>> previous
>>>> request, that session will be replaced
>>>>
>>>> in the queue instead of replicating
>>>> two
>>>> requests. This almost never happens, unless there is a
>>>>
>>>> large network delay.
>>>>
>>>> -->
>>>>
>>>> <!--
>>>>
>>>> When configuring for clustering, you also add in a valve
>>>> to catch
>>>> all the requests
>>>>
>>>> coming in, at the end of the request, the session may or
>>>> may not
>>>> be replicated.
>>>>
>>>> A session is replicated if and only if all the
>>>> conditions are
>>>>
>>>> met:
>>>>
>>>> 1. useDirtyFlag is true or setAttribute or
>>>> removeAttribute has
>>>> been called AND
>>>>
>>>> 2. a session exists (has been created)
>>>>
>>>> 3. the request is not trapped by the "filter" attribute
>>>>
>>>>
>>>>
>>>> The filter attribute is to filter out requests that
>>>> could not
>>>> modify the session,
>>>>
>>>> hence we don't replicate the session after the end of
>>>> this
>>>> request.
>>>>
>>>> The filter is negative, ie, anything you put in the
>>>> filter, you
>>>> mean to filter out,
>>>>
>>>> ie, no replication will be done on requests that match
>>>> one of the
>>>> filters.
>>>>
>>>> The filter attribute is delimited by ;, so you can't
>>>> escape out ;
>>>> even if you wanted to.
>>>>
>>>>
>>>>
>>>> filter=".*\.gif;.*\.js;" means that we will not
>>>> replicate the
>>>> session after requests with the URI
>>>>
>>>> ending with .gif and .js are intercepted.
>>>>
>>>>
>>>>
>>>> The deployer element can be used to deploy apps cluster
>>>> wide.
>>>>
>>>> Currently the deployment only deploys/undeploys to
>>>> working
>>>> members in the cluster
>>>>
>>>> so no WARs are copied upon startup of a broken node.
>>>>
>>>> The deployer watches a directory (watchDir) for WAR
>>>> files when
>>>> watchEnabled="true"
>>>>
>>>> When a new war file is added the war gets deployed to
>>>> the local
>>>> instance,
>>>>
>>>> and then deployed to the other instances in the cluster.
>>>>
>>>> When a war file is deleted from the watchDir the war is
>>>> undeployed locally
>>>>
>>>> and cluster wide
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <Cluster
>>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>>
>>>>
>>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>>
>>>> expireSessionsOnShutdown="true"
>>>>
>>>> useDirtyFlag="true"
>>>>
>>>> notifyListenersOnReplication="true">
>>>>
>>>>
>>>>
>>>> <Membership
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>>
>>>> mcastAddr="228.0.0.4"
>>>>
>>>> mcastPort="45564"
>>>>
>>>> mcastFrequency="500"
>>>>
>>>> mcastDropTime="3000"/>
>>>>
>>>>
>>>>
>>>> <Receiver
>>>>
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>>
>>>> tcpListenAddress="auto"
>>>>
>>>> tcpListenPort="4001"
>>>>
>>>> tcpSelectorTimeout="100"
>>>>
>>>> tcpThreadCount="2"/>
>>>>
>>>>
>>>>
>>>> <Sender
>>>>
>>>>
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>>
>>>> replicationMode="pooled"
>>>>
>>>> ackTimeout="15000"
>>>>
>>>> waitForAck="true"/>
>>>>
>>>>
>>>>
>>>> <Valve
>>>>
>>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>>
>>>>
>>>>
>>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>>
>>>>
>>>>
>>>> <Deployer
>>>>
>>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>>
>>>> tempDir="/tmp/war-temp/"
>>>>
>>>> deployDir="/tmp/war-deploy/"
>>>>
>>>> watchDir="/tmp/war-listen/"
>>>>
>>>> watchEnabled="false"/>
>>>>
>>>>
>>>>
>>>> <ClusterListener
>>>>
>>>> className="org.apache.catalina.cluster.session.ClusterSessionListener" />
>>>>
>>>> </Cluster>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> <!-- Normally, users must authenticate themselves to each
>>>> web app
>>>>
>>>> individually. Uncomment the following entry if you
>>>> would like
>>>>
>>>> a user to be authenticated the first time they
>>>> encounter a
>>>>
>>>> resource protected by a security constraint, and then
>>>> have that
>>>>
>>>> user identity maintained across *all* web applications
>>>> contained
>>>> in this virtual host. -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Access log processes all requests for this virtual
>>>> host. By
>>>>
>>>> default, log files are created in the "logs" directory
>>>> relative
>>>> to
>>>>
>>>> $CATALINA_HOME. If you wish, you can specify a
>>>> different
>>>>
>>>> directory with the "directory" attribute. Specify
>>>> either a
>>>> relative
>>>>
>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>> directory.
>>>>
>>>> -->
>>>>
>>>> <!--
>>>>
>>>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>>
>>>> directory="logs" prefix="localhost_access_log."
>>>>
>>>> suffix=".txt"
>>>>
>>>> pattern="common" resolveHosts="false"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> <!-- Access log processes all requests for this virtual
>>>> host. By
>>>>
>>>> default, log files are created in the "logs" directory
>>>> relative
>>>> to
>>>>
>>>> $CATALINA_HOME. If you wish, you can specify a
>>>> different
>>>>
>>>> directory with the "directory" attribute. Specify
>>>> either a
>>>> relative
>>>>
>>>> (to $CATALINA_HOME) or absolute path to the desired
>>>> directory.
>>>>
>>>> This access log implementation is optimized for maximum
>>>> performance,
>>>>
>>>> but is hardcoded to support only the "common" and
>>>> "combined"
>>>>
>>>> patterns.
>>>>
>>>> -->
>>>>
>>>> <!--
>>>>
>>>> <Valve
>>>>
>>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>>
>>>> directory="logs" prefix="localhost_access_log."
>>>>
>>>> suffix=".txt"
>>>>
>>>> pattern="common" resolveHosts="false"/>
>>>>
>>>> -->
>>>>
>>>>
>>>>
>>>> </Host>
>>>>
>>>>
>>>>
>>>> </Engine>
>>>>
>>>>
>>>>
>>>> </Service>
>>>>
>>>>
>>>>
>>>> </Server>
>>>>
>>>>
>>>>
>>>> ===========================================================
>>>>
>>>>
>>>>
>>>> I appreciate your prompt help on this, since this is a very critical
>>>> application that is live at the moment. Please send me an e-mail for any
>>>> clarifications.
>>>>
>>>>
>>>>
>>>> Thanks and best regards,
>>>>
>>>> Dilan
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To start a new topic, e-mail: users@tomcat.apache.org
>>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>>> For additional commands, e-mail: users-help@tomcat.apache.org
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>
>
>
>
>
RE: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Dilan Kelanibandara <di...@beyondm.net>.
Hi Peter,
No. No service is up and running on 45564. I only commented out the Membership
element and restarted both servers. So far it is working fine, but I have a
doubt about whether this will have any effect on my server in the future.
Can you please explain the risk to me? Or is it OK to run the server with this
configuration?
Thanks and best regards,
Dilan
-----Original Message-----
From: Peter Rossbach [mailto:pr@objektpark.de]
Sent: Sunday, June 18, 2006 8:14 PM
To: Tomcat Users List
Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
OK!
When you comment the Membership service out, the following defaults are used:
McastService mService= new McastService();
mService.setMcastAddr("228.0.0.4");
mService.setMcastPort(8012);
mService.setMcastFrequency(1000);
mService.setMcastDropTime(30000);
transferProperty("service",mService);
setMembershipService(mService);
}
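In server.xml terms, those compiled-in defaults would correspond to an explicit Membership element along these lines (a sketch reconstructed from the quoted code; note that the default mcastPort is 8012, not the 45564 used in the commented-out element, and the frequency/drop times differ as well):

```xml
<!-- Sketch only: a Membership element equivalent to the compiled-in defaults.
     All attribute values are taken from the quoted McastService setup code. -->
<Membership className="org.apache.catalina.cluster.mcast.McastService"
            mcastAddr="228.0.0.4"
            mcastPort="8012"
            mcastFrequency="1000"
            mcastDropTime="30000"/>
```

If the two nodes still find each other with the element commented out, they are doing so via these defaults, so both nodes must agree on them.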
Have you started another service at 45564?
Regards
Am 18.06.2006 um 16:54 schrieb Dilan Kelanibandara:
> Hi Peter,
>
> I was having the memory problem when the cluster manager tried to multicast
> the membership request at Tomcat startup.
> As a trial, I commented out the multicast (Membership) element of the cluster
> configuration in server.xml and restarted both Tomcats.
>
> This is the multicast element which I commented.
> ==============================
> <!--
> <Membership
>
> className="org.apache.catalina.cluster.mcast.McastService"
> mcastAddr="228.0.0.4"
> mcastPort="45564"
> mcastFrequency="500"
> mcastDropTime="3000"/>
> -->
>
> ==============================
>
> Then Tomcat started without an OutOfMemoryError, and the replication members
> were still added to each other. I ran both servers with my application for
> some time, and it is working fine; session replication is happening as
> usual. Can you let me know whether I can proceed with this setup, or whether
> commenting out the element has any effect on session replication?
>
> Can you kindly let me know?
>
> Thanks and best regards,
> Dilan
>
>
> -----Original Message-----
> From: Peter Rossbach [mailto:pr@objektpark.de]
> Sent: Sunday, June 18, 2006 9:50 AM
> To: Tomcat Users List
> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
> OutOfMemoryError in
> starting up on AS4
>
> Use more JVM options to analyse the memory usage.
>
> Work with faster memory allocation:
>
> -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
> -Xverbosegc
>
> Or better use a Memory Profiler...
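For Tomcat 5.5, such options are usually placed in an optional setenv script that catalina.sh sources on startup. The sketch below assumes that mechanism; the heap sizes are example values only, and note that on the Sun JVM the verbose-GC flag is spelled -verbose:gc (-Xverbosegc is the IBM JVM spelling):

```shell
# Hypothetical $CATALINA_HOME/bin/setenv.sh, sourced by catalina.sh at startup.
# The heap and generation sizes are example values; tune them for your host.
CATALINA_OPTS="-Xms512m -Xmx512m"
# Faster young-generation allocation, as suggested above.
CATALINA_OPTS="$CATALINA_OPTS -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8"
# GC logging (-verbose:gc on the Sun JVM).
CATALINA_OPTS="$CATALINA_OPTS -verbose:gc"
export CATALINA_OPTS
```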
>
> But the membership service does not allocate much memory; a very strange effect :-(
>
> Peter
>
>
> Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
>
>> Hi Peter,
>> I am using the default JVM parameters that come with Tomcat 5.5.17. The
>> comments in server.xml say that tcpThreadCount is normally equal to the
>> number of nodes (i.e. 2 in this case); that is why I changed it to 2.
>>
>> I tried increasing the JVM heap-size parameters for Tomcat to
>> Min=1024m
>> Max=1024m
>> as well, and I also tried 512m for both, but the result is the same on
>> each occasion.
>> Thank you for your kind attention.
>> I would like further clarification.
>> Best regards,
>> Dilan
>> -----Original Message-----
>> From: Peter Rossbach [mailto:pr@objektpark.de]
>> Sent: Sunday, June 18, 2006 7:37 AM
>> To: Tomcat Users List
>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>> OutOfMemoryError in
>> starting up on AS4
>>
>> Hi,
>>
>> Which JVM memory parameters do you use?
>> In pooled mode, use more receiver workers: set tcpThreadCount="6"!
>> Do you really need the deployer? The deployer generates a large cluster
>> message at every startup.
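Applied to the Receiver element from the posted server.xml, that suggestion would look roughly like this (a sketch; only tcpThreadCount changes from the posted configuration):

```xml
<!-- Receiver as posted, but with more worker threads for pooled mode. -->
<Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
          tcpListenAddress="auto"
          tcpListenPort="4001"
          tcpSelectorTimeout="100"
          tcpThreadCount="6"/>
```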
>>
>> Regards
>> Peter
>>
>>
>> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>>
>>>
>>>
>>> Hello ,
>>>
>>>
>>>
>>> I am continuously getting an OutOfMemoryError when starting up two cluster
>>> nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4). It had been working
>>> fine for three weeks. This error occurred only once before, and when Tomcat
>>> was restarted, it worked.
>>>
>>>
>>>
>>>
>>>
>>> Following is the part of catalina.out relevant to this error for node 1.
>>>
>>> ====================================================================
>>>
>>>
>>>
>>> INFO: Start ClusterSender at cluster
>>> Catalina:type=Cluster,host=localhost
>>>
>>> with name Catalina:type=ClusterSender,host=localhost
>>>
>>> Jun 17, 2006 8:44:15 PM
>>> org.apache.catalina.cluster.mcast.McastService start
>>>
>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>> Exception in thread "Cluster-MembershipReceiver"
>>> java.lang.OutOfMemoryError:
>>>
>>>
>>> Java heap space
>>>
>>> Jun 17, 2006 8:44:17 PM
>>> org.apache.catalina.cluster.mcast.McastService
>>>
>>> registerMBean
>>>
>>> INFO: membership mbean registered
>>>
>>> (Catalina:type=ClusterMembership,host=localhost)
>>>
>>> Jun 17, 2006 8:44:17 PM
>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>
>>> start
>>>
>>> INFO: Cluster FarmWarDeployer started.
>>>
>>> Jun 17, 2006 8:44:19 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Register manager /StockTradingServer to cluster element Host
>>> with name
>>> localhost Jun 17, 2006 8:44:19 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Starting clustering manager at /StockTradingServer Jun 17,
>>> 2006
>>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>>
>>> getAllClusterSessions
>>>
>>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>>> members
>>> active in cluster group.
>>>
>>>
>>>
>>> ====================================================================
>>>
>>> The node 2 startup log is as follows:
>>>
>>> ====================================================================
>>>
>>> INFO: Cluster is about to start
>>>
>>> Jun 17, 2006 8:53:00 PM
>>>
>>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>>
>>> INFO: Start ClusterSender at cluster
>>> Catalina:type=Cluster,host=localhost
>>>
>>> with name Catalina:type=ClusterSender,host=localhost
>>>
>>> Jun 17, 2006 8:53:00 PM
>>> org.apache.catalina.cluster.mcast.McastService start
>>>
>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>> Exception in thread "Cluster-MembershipReceiver"
>>> java.lang.OutOfMemoryError:
>>>
>>>
>>> Java heap space
>>>
>>> Jun 17, 2006 8:53:02 PM
>>> org.apache.catalina.cluster.mcast.McastService
>>>
>>> registerMBean
>>>
>>> INFO: membership mbean registered
>>>
>>> (Catalina:type=ClusterMembership,host=localhost)
>>>
>>> Jun 17, 2006 8:53:02 PM
>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>
>>> start
>>>
>>> INFO: Cluster FarmWarDeployer started.
>>>
>>> Jun 17, 2006 8:53:04 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Register manager /StockTradingServer to cluster element Host
>>> with name
>>> localhost Jun 17, 2006 8:53:04 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>>
>>>
>>> ====================================================================
>>>
>>> Anyway, my cluster was working fine for three weeks and then started to
>>> give this error on startup of both nodes.
>>>
>>>
>>>
>>> I have an IBM HTTP Server with the jk connector for load balancing, and
>>> that load is coming to my Tomcat cluster.
>>>
>>>
>>>
>>> The following is the server.xml file for both servers.
>>>
>>>
>>>
>>> ====================================================================
>>>
>>>
>>>
>>> <!-- Example Server Configuration File -->
>>>
>>> <!-- Note that component elements are nested corresponding to their
>>>
>>> parent-child relationships with each other -->
>>>
>>>
>>>
>>> <!-- A "Server" is a singleton element that represents the entire
>>> JVM,
>>>
>>> which may contain one or more "Service" instances. The Server
>>>
>>> listens for a shutdown command on the indicated port.
>>>
>>>
>>>
>>> Note: A "Server" is not itself a "Container", so you may not
>>>
>>> define subcomponents such as "Valves" or "Loggers" at this
>>> level.
>>>
>>> -->
>>>
>>>
>>>
>>> <Server port="8005" shutdown="SHUTDOWN">
>>>
>>>
>>>
>>> <!-- Comment these entries out to disable JMX MBeans support used
>>> for the
>>>
>>> administration web application --> <Listener
>>> className="org.apache.catalina.core.AprLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>
>>>
>>>
>>> <!-- Global JNDI resources -->
>>>
>>> <GlobalNamingResources>
>>>
>>>
>>>
>>> <!-- Test entry for demonstration purposes -->
>>>
>>> <Environment name="simpleValue" type="java.lang.Integer"
>>> value="30"/>
>>>
>>>
>>>
>>> <!-- Editable user database that can also be used by
>>>
>>> UserDatabaseRealm to authenticate users -->
>>>
>>> <Resource name="UserDatabase" auth="Container"
>>>
>>> type="org.apache.catalina.UserDatabase"
>>>
>>> description="User database that can be updated and saved"
>>>
>>>
>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>
>>> pathname="conf/tomcat-users.xml" />
>>>
>>>
>>>
>>> </GlobalNamingResources>
>>>
>>>
>>>
>>> <!-- A "Service" is a collection of one or more "Connectors" that
>>> share
>>>
>>> a single "Container" (and therefore the web applications
>>> visible
>>>
>>> within that Container). Normally, that Container is an
>>> "Engine",
>>>
>>> but this is not required.
>>>
>>>
>>>
>>> Note: A "Service" is not itself a "Container", so you may not
>>>
>>> define subcomponents such as "Valves" or "Loggers" at this
>>> level.
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the Tomcat Stand-Alone Service --> <Service
>>> name="Catalina">
>>>
>>>
>>>
>>> <!-- A "Connector" represents an endpoint by which requests are
>>> received
>>>
>>> and responses are returned. Each Connector passes requests
>>> on to
>>> the
>>>
>>> associated "Container" (normally an Engine) for processing.
>>>
>>>
>>>
>>> By default, a non-SSL HTTP/1.1 Connector is established on
>>> port
>>> 8080.
>>>
>>> You can also enable an SSL HTTP/1.1 Connector on port
>>> 8443 by
>>>
>>> following the instructions below and uncommenting the second
>>> Connector
>>>
>>> entry. SSL support requires the following steps (see the
>>> SSL Config
>>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>
>>> instructions):
>>>
>>> * If your JDK version 1.3 or prior, download and install
>>> JSSE 1.0.2
>>> or
>>>
>>> later, and put the JAR files into "$JAVA_HOME/jre/lib/
>>> ext".
>>>
>>> * Execute:
>>>
>>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>> RSA
>>>
>>> (Windows)
>>>
>>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>>> RSA (Unix)
>>> with a password value of "changeit" for both the certificate and
>>>
>>> the keystore itself.
>>>
>>>
>>>
>>> By default, DNS lookups are enabled when a web application
>>> calls
>>>
>>> request.getRemoteHost(). This can have an adverse impact on
>>>
>>> performance, so you can disable it by setting the
>>>
>>> "enableLookups" attribute to "false". When DNS lookups are
>>> disabled,
>>>
>>> request.getRemoteHost() will return the String version of
>>> the
>>>
>>> IP address of the remote client.
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>>
>>> <Connector port="8088" maxHttpHeaderSize="8192"
>>>
>>> maxThreads="300" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" redirectPort="8443"
>>> acceptCount="100"
>>>
>>> connectionTimeout="20000"
>>> disableUploadTimeout="true" />
>>>
>>> <!-- Note : To disable connection timeouts, set
>>> connectionTimeout value
>>>
>>> to 0 -->
>>>
>>>
>>>
>>> <!-- Note : To use gzip compression you could set the
>>> following
>>> properties :
>>>
>>>
>>>
>>> compression="on"
>>>
>>> compressionMinSize="2048"
>>>
>>> noCompressionUserAgents="gozilla,
>>> traviata"
>>>
>>> compressableMimeType="text/html,text/xml"
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>
>>> <!--
>>>
>>> <Connector port="8443" maxHttpHeaderSize="8192"
>>>
>>> maxThreads="150" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" disableUploadTimeout="true"
>>>
>>> acceptCount="100" scheme="https" secure="true"
>>>
>>> clientAuth="false" sslProtocol="TLS" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>
>>> <Connector port="8009"
>>>
>>> enableLookups="false" redirectPort="8443"
>>> protocol="AJP/1.3"
>>>
>>> />
>>>
>>>
>>>
>>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>
>>> <!-- See proxy documentation for more information about using
>>> this. -->
>>>
>>> <!--
>>>
>>> <Connector port="8082"
>>>
>>> maxThreads="150" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" acceptCount="100"
>>>
>>> connectionTimeout="20000"
>>>
>>> proxyPort="80" disableUploadTimeout="true" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- An Engine represents the entry point (within Catalina) that
>>> processes
>>>
>>> every request. The Engine implementation for Tomcat stand
>>> alone
>>>
>>> analyzes the HTTP headers included with the request, and
>>> passes them
>>> on to the appropriate Host (virtual host). -->
>>>
>>>
>>>
>>> <!-- You should set jvmRoute to support load-balancing via AJP
>>> ie :
>>>
>>> <Engine name="Standalone" defaultHost="localhost"
>>> jvmRoute="jvm1">
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the top level container in our container hierarchy
>>> -->
>>>
>>> <Engine name="Catalina" defaultHost="localhost"
>>> jvmRoute="node01">
>>>
>>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>>
>>>
>>>
>>> <!-- The request dumper valve dumps useful debugging
>>> information about
>>>
>>> the request headers and cookies that were received, and
>>> the
>>> response
>>>
>>> headers and cookies that were sent, for all requests
>>> received by
>>>
>>> this instance of Tomcat. If you care only about requests
>>> to a
>>>
>>> particular virtual host, or a particular application,
>>> nest this
>>>
>>> element inside the corresponding <Host> or <Context> entry
>>> instead.
>>>
>>>
>>>
>>> For a similar mechanism that is portable to all Servlet
>>> 2.4
>>>
>>> containers, check out the "RequestDumperFilter" Filter in
>>> the
>>>
>>> example application (the source for this filter may be
>>> found in
>>>
>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/
>>> filters").
>>>
>>>
>>>
>>> Request dumping is disabled by default. Uncomment the
>>> following
>>>
>>> element to enable it. -->
>>>
>>> <!--
>>>
>>> <Valve
>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Because this Realm is here, an instance will be shared
>>> globally
>>> -->
>>>
>>>
>>>
>>> <!-- This Realm uses the UserDatabase configured in the global
>>> JNDI
>>>
>>> resources under the key "UserDatabase". Any edits
>>>
>>> that are performed against this UserDatabase are
>>> immediately
>>>
>>> available for use by the Realm. -->
>>>
>>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>
>>> resourceName="UserDatabase"/>
>>>
>>>
>>>
>>> <!-- Comment out the old realm but leave here for now in
>>> case we
>>>
>>> need to go back quickly -->
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Replace the above Realm with one of the following to get
>>> a Realm
>>>
>>> stored in a database and accessed via JDBC -->
>>>
>>>
>>>
>>>
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="org.gjt.mm.mysql.Driver"
>>>
>>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>>
>>> connectionName="kutila" connectionPassword="kutila"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>>
>>>
>>>
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="oracle.jdbc.driver.OracleDriver"
>>>
>>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>
>>> connectionName="scott" connectionPassword="tiger"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>
>>> connectionURL="jdbc:odbc:CATALINA"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the default virtual host
>>>
>>> Note: XML Schema validation will not work with Xerces 2.2.
>>>
>>> -->
>>>
>>> <Host name="localhost" appBase="webapps"
>>>
>>> unpackWARs="true" autoDeploy="true"
>>>
>>> xmlValidation="false" xmlNamespaceAware="false">
>>>
>>>
>>>
>>> <!-- Defines a cluster for this node,
>>>
>>> By defining this element, means that every manager
>>> will be
>>> changed.
>>>
>>> So when running a cluster, only make sure that you have
>>> webapps
>>> in there
>>>
>>> that need to be clustered and remove the other ones.
>>>
>>> A cluster has the following parameters:
>>>
>>>
>>>
>>> className = the fully qualified name of the cluster
>>> class
>>>
>>>
>>>
>>> clusterName = a descriptive name for your cluster,
>>> can be
>>> anything
>>>
>>>
>>>
>>> mcastAddr = the multicast address, has to be the same
>>> for all
>>> the nodes
>>>
>>>
>>>
>>> mcastPort = the multicast port, has to be the same for
>>> all the
>>> nodes
>>>
>>>
>>>
>>> mcastBindAddress = bind the multicast socket to a
>>> specific
>>> address
>>>
>>>
>>>
>>> mcastTTL = the multicast TTL if you want to limit your
>>> broadcast
>>>
>>> mcastSoTimeout = the multicast readtimeout
>>>
>>>
>>>
>>> mcastFrequency = the number of milliseconds in between
>>> sending a
>>> "I'm alive" heartbeat
>>>
>>>
>>>
>>> mcastDropTime = the number of milliseconds before a
>>> node is
>>> considered "dead" if no heartbeat is received
>>>
>>>
>>>
>>> tcpThreadCount = the number of threads to handle
>>> incoming
>>> replication requests, optimal would be the same amount of threads
>>> as nodes
>>>
>>>
>>>
>>> tcpListenAddress = the listen address (bind address)
>>> for TCP
>>> cluster request on this host,
>>>
>>> in case of multiple ethernet cards.
>>>
>>> auto means that address becomes
>>>
>>> InetAddress.getLocalHost
>>> ().getHostAddress()
>>>
>>>
>>>
>>> tcpListenPort = the tcp listen port
>>>
>>>
>>>
>>> tcpSelectorTimeout = the timeout (ms) for the
>>> Selector.select()
>>> method in case the OS
>>>
>>> has a wakeup bug in java.nio. Set
>>> to 0 for
>>> no timeout
>>>
>>>
>>>
>>> printToScreen = true means that managers will also
>>> print to
>>> std.out
>>>
>>>
>>>
>>> expireSessionsOnShutdown = true means that when this node shuts
>>> down, its sessions are expired (and the expirations are sent to
>>> the other nodes)
>>>
>>>
>>>
>>> useDirtyFlag = true means that we only replicate a
>>> session after
>>> setAttribute,removeAttribute has been called.
>>>
>>> false means to replicate the session
>>> after each
>>> request.
>>>
>>> false means that replication would work
>>> for the
>>> following piece of code: (only for SimpleTcpReplicationManager)
>>>
>>> <%
>>>
>>> HashMap map =
>>> (HashMap)session.getAttribute("map");
>>>
>>> map.put("key","value");
>>>
>>> %>
>>>
>>> replicationMode = can be either 'pooled',
>>> 'synchronous' or
>>> 'asynchronous'.
>>>
>>> * Pooled means that the replication
>>> happens
>>> using several sockets in a synchronous way. Ie, the data gets
>>> replicated,
>>> then the request return. This is the same as the 'synchronous'
>>> setting
>>> except it uses a pool of sockets, hence it is multithreaded. This
>>> is the
>>> fastest and safest configuration. To use this, also increase the nr
>>> of tcp
>>> threads that you have dealing with replication.
>>>
>>> * Synchronous means that the thread
>>> that
>>> executes the request, is also the
>>>
>>> thread the replicates the data to the
>>> other
>>> nodes, and will not return until all
>>>
>>> nodes have received the information.
>>>
>>> * Asynchronous means that there is a
>>> specific
>>> 'sender' thread for each cluster node,
>>>
>>> so the request thread will queue the
>>> replication request into a "smart" queue,
>>>
>>> and then return to the client.
>>>
>>> The "smart" queue is a queue where
>>> when a
>>> session is added to the queue, and the same session
>>>
>>> already exists in the queue from a
>>> previous
>>> request, that session will be replaced
>>>
>>> in the queue instead of replicating
>>> two
>>> requests. This almost never happens, unless there is a
>>>
>>> large network delay.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> When configuring for clustering, you also add in a valve
>>> to catch
>>> all the requests
>>>
>>> coming in, at the end of the request, the session may or
>>> may not
>>> be replicated.
>>>
>>> A session is replicated if and only if all the
>>> conditions are
>>>
>>> met:
>>>
>>> 1. useDirtyFlag is true or setAttribute or
>>> removeAttribute has
>>> been called AND
>>>
>>> 2. a session exists (has been created)
>>>
>>> 3. the request is not trapped by the "filter" attribute
>>>
>>>
>>>
>>> The filter attribute is to filter out requests that
>>> could not
>>> modify the session,
>>>
>>> hence we don't replicate the session after the end of
>>> this
>>> request.
>>>
>>> The filter is negative, ie, anything you put in the
>>> filter, you
>>> mean to filter out,
>>>
>>> ie, no replication will be done on requests that match
>>> one of the
>>> filters.
>>>
>>> The filter attribute is delimited by ;, so you can't
>>> escape out ;
>>> even if you wanted to.
>>>
>>>
>>>
>>> filter=".*\.gif;.*\.js;" means that we will not
>>> replicate the
>>> session after requests with the URI
>>>
>>> ending with .gif and .js are intercepted.
>>>
>>>
>>>
>>> The deployer element can be used to deploy apps cluster
>>> wide.
>>>
>>> Currently the deployment only deploys/undeploys to
>>> working
>>> members in the cluster
>>>
>>> so no WARs are copied upon startup of a broken node.
>>>
>>> The deployer watches a directory (watchDir) for WAR
>>> files when
>>> watchEnabled="true"
>>>
>>> When a new war file is added the war gets deployed to
>>> the local
>>> instance,
>>>
>>> and then deployed to the other instances in the cluster.
>>>
>>> When a war file is deleted from the watchDir the war is
>>> undeployed locally
>>>
>>> and cluster wide
>>>
>>> -->
>>>
>>>
>>>
>>>
>>>
>>> <Cluster
>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>
>>>
>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>
>>> expireSessionsOnShutdown="true"
>>>
>>> useDirtyFlag="true"
>>>
>>> notifyListenersOnReplication="true">
>>>
>>>
>>>
>>> <Membership
>>>
>>>
>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>
>>> mcastAddr="228.0.0.4"
>>>
>>> mcastPort="45564"
>>>
>>> mcastFrequency="500"
>>>
>>> mcastDropTime="3000"/>
>>>
>>>
>>>
>>> <Receiver
>>>
>>>
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>
>>> tcpListenAddress="auto"
>>>
>>> tcpListenPort="4001"
>>>
>>> tcpSelectorTimeout="100"
>>>
>>> tcpThreadCount="2"/>
>>>
>>>
>>>
>>> <Sender
>>>
>>>
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>
>>> replicationMode="pooled"
>>>
>>> ackTimeout="15000"
>>>
>>> waitForAck="true"/>
>>>
>>>
>>>
>>> <Valve
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>
>>>
>>>
>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>
>>>
>>>
>>> <Deployer
>>>
>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>
>>> tempDir="/tmp/war-temp/"
>>>
>>> deployDir="/tmp/war-deploy/"
>>>
>>> watchDir="/tmp/war-listen/"
>>>
>>> watchEnabled="false"/>
>>>
>>>
>>>
>>> <ClusterListener
>>>
>>> className="org.apache.catalina.cluster.session.ClusterSessionListener" />
>>>
>>> </Cluster>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> <!-- Normally, users must authenticate themselves to each
>>> web app
>>>
>>> individually. Uncomment the following entry if you
>>> would like
>>>
>>> a user to be authenticated the first time they
>>> encounter a
>>>
>>> resource protected by a security constraint, and then
>>> have that
>>>
>>> user identity maintained across *all* web applications
>>> contained
>>> in this virtual host. -->
>>>
>>> <!--
>>>
>>> <Valve
>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Access log processes all requests for this virtual
>>> host. By
>>>
>>> default, log files are created in the "logs" directory
>>> relative
>>> to
>>>
>>> $CATALINA_HOME. If you wish, you can specify a
>>> different
>>>
>>> directory with the "directory" attribute. Specify
>>> either a
>>> relative
>>>
>>> (to $CATALINA_HOME) or absolute path to the desired
>>> directory.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>
>>> directory="logs" prefix="localhost_access_log."
>>>
>>> suffix=".txt"
>>>
>>> pattern="common" resolveHosts="false"/>
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Access log processes all requests for this virtual
>>> host. By
>>>
>>> default, log files are created in the "logs" directory
>>> relative
>>> to
>>>
>>> $CATALINA_HOME. If you wish, you can specify a
>>> different
>>>
>>> directory with the "directory" attribute. Specify
>>> either a
>>> relative
>>>
>>> (to $CATALINA_HOME) or absolute path to the desired
>>> directory.
>>>
>>> This access log implementation is optimized for maximum
>>> performance,
>>>
>>> but is hardcoded to support only the "common" and
>>> "combined"
>>>
>>> patterns.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> <Valve
>>>
>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>
>>> directory="logs" prefix="localhost_access_log."
>>>
>>> suffix=".txt"
>>>
>>> pattern="common" resolveHosts="false"/>
>>>
>>> -->
>>>
>>>
>>>
>>> </Host>
>>>
>>>
>>>
>>> </Engine>
>>>
>>>
>>>
>>> </Service>
>>>
>>>
>>>
>>> </Server>
>>>
>>>
>>>
>>> ===========================================================
>>>
>>>
>>>
>>> I appreciate your prompt help on this since this is a very critical
>>> application that is live at the moment. Please send me an email for any
>>> clarifications.
>>>
>>>
>>>
>>> Thanks and best regards,
>>>
>>> Dilan
>>>
>>>
>>>
>>>
>>>
>>
>>
>> ---------------------------------------------------------------------
>> To start a new topic, e-mail: users@tomcat.apache.org
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
>>
>>
>>
>>
>
>
>
>
>
Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Peter Rossbach <pr...@objektpark.de>.
OK!
Since you commented the Membership service out, the following default is used:
McastService mService= new McastService();
mService.setMcastAddr("228.0.0.4");
mService.setMcastPort(8012);
mService.setMcastFrequency(1000);
mService.setMcastDropTime(30000);
transferProperty("service",mService);
setMembershipService(mService);
}
Have you started another service on port 45564?
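One quick way to check for a port clash is a small probe like the sketch below. This is not part of Tomcat; `can_bind_mcast` is a hypothetical helper written for illustration, and it only tells you whether this process can bind the port and join the group on this machine.

```python
import socket
import struct

def can_bind_mcast(group: str, port: int) -> bool:
    """Try to bind a UDP socket to the given port and join the
    multicast group; return False if either step fails."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", port))
        # Join the group on the default interface (standard ip_mreq recipe).
        mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return True
    except OSError:
        return False
    finally:
        s.close()

# Probe the cluster's multicast group/port from the thread's config.
print(can_bind_mcast("228.0.0.4", 45564))
```

A False result here would suggest the port is taken or multicast is unavailable on the host, which is worth ruling out before blaming the JVM heap.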
Regards
Am 18.06.2006 um 16:54 schrieb Dilan Kelanibandara:
> Hi Peter,
>
> I was having the memory problem when cluster manager trying to
> multicast the
> request when tomcat startup.
> As a trial I commented multicast element of cluster configuration in
> server.xml and restarted both tomcats
>
> This is the multicast element which I commented.
> ==============================
> <!--
> <Membership
>
> className="org.apache.catalina.cluster.mcast.McastService"
> mcastAddr="228.0.0.4"
> mcastPort="45564"
> mcastFrequency="500"
> mcastDropTime="3000"/>
> -->
>
> ==============================
>
> Then tomcat started without an outofmemoryerror. Also replication
> members
> are added to each other. I ran both of servers with my applicaiton
> for some
> time. It is working fine. Session replication is happening as
> usual. Can you
> let me know can I proceed with this setup or is there any effect of my
> commenting on session replication ?
>
> Can you kindly let me know?
>
> Thanks and best regards,
> Dilan
>
>
> -----Original Message-----
> From: Peter Rossbach [mailto:pr@objektpark.de]
> Sent: Sunday, June 18, 2006 9:50 AM
> To: Tomcat Users List
> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
> OutOfMemoryError in
> starting up on AS4
>
> Use more JVM Options to analyse the mem usage
>
> Work with more faster mem allocation
>
> -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
> -Xverbosegc
>
> Or better use a Memory Profiler...
>
> But the membership not allocate much memory, very strange effect :-(
>
> Peter
>
>
> Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
>
>> Hi Peter,
>> I am using default JVM parameters coming with tomcat5.5.17. In the
>> tomcat
>> server.xml file it says tcpThreadCount is normally equal to no.of
>> nodes (ie
>> 2 in this case).That is why I changed that to 2.
>>
>> I tried increasing JVM parameters for heap size in tomcat
>> Min=1024m
>> Max=1024m
>> also.I tried with both 512m also. But in both the occasion it is
>> the same
>> result.
>> Thank you for your kind attention.
>> I want further clarifications.
>> Best regards,
>> Dilan
>> -----Original Message-----
>> From: Peter Rossbach [mailto:pr@objektpark.de]
>> Sent: Sunday, June 18, 2006 7:37 AM
>> To: Tomcat Users List
>> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
>> OutOfMemoryError in
>> starting up on AS4
>>
>> Hi,
>>
>> Which JVM memory parameter you use?
>> At pooled mode use more receiver worker set tcpThreadCount="6"!
>> You really need deployer? Deployer generate at every startup a large
>> cluster message.
>>
>> Regards
>> Peter
>>
>>
>> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>>
>>>
>>>
>>> Hello ,
>>>
>>>
>>>
>>> I am getting an OutOfMemoryError continuously when starting up two
>>> cluster nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4). It was
>>> working fine for three weeks. This error occurred once before, and when
>>> Tomcat was restarted, it worked.
>>>
>>>
>>>
>>>
>>>
>>> Following is a part of catalina.out relevant to that error for
>>> node 1.
>>>
>>> ============================================================================
>>>
>>>
>>>
>>> INFO: Start ClusterSender at cluster
>>> Catalina:type=Cluster,host=localhost
>>>
>>> with name Catalina:type=ClusterSender,host=localhost
>>>
>>> Jun 17, 2006 8:44:15 PM
>>> org.apache.catalina.cluster.mcast.McastService start
>>>
>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>> Exception in thread "Cluster-MembershipReceiver"
>>> java.lang.OutOfMemoryError:
>>>
>>>
>>> Java heap space
>>>
>>> Jun 17, 2006 8:44:17 PM
>>> org.apache.catalina.cluster.mcast.McastService
>>>
>>> registerMBean
>>>
>>> INFO: membership mbean registered
>>>
>>> (Catalina:type=ClusterMembership,host=localhost)
>>>
>>> Jun 17, 2006 8:44:17 PM
>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>
>>> start
>>>
>>> INFO: Cluster FarmWarDeployer started.
>>>
>>> Jun 17, 2006 8:44:19 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Register manager /StockTradingServer to cluster element Host
>>> with name
>>> localhost Jun 17, 2006 8:44:19 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Starting clustering manager at /StockTradingServer Jun 17,
>>> 2006
>>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>>
>>> getAllClusterSessions
>>>
>>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>>> members
>>> active in cluster group.
>>>
>>>
>>>
>>> ============================================================================
>>>
>>> node2 startup log is as follows
>>>
>>> ============================================================================
>>>
>>> INFO: Cluster is about to start
>>>
>>> Jun 17, 2006 8:53:00 PM
>>>
>>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>>
>>> INFO: Start ClusterSender at cluster
>>> Catalina:type=Cluster,host=localhost
>>>
>>> with name Catalina:type=ClusterSender,host=localhost
>>>
>>> Jun 17, 2006 8:53:00 PM
>>> org.apache.catalina.cluster.mcast.McastService start
>>>
>>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>>> Exception in thread "Cluster-MembershipReceiver"
>>> java.lang.OutOfMemoryError:
>>>
>>>
>>> Java heap space
>>>
>>> Jun 17, 2006 8:53:02 PM
>>> org.apache.catalina.cluster.mcast.McastService
>>>
>>> registerMBean
>>>
>>> INFO: membership mbean registered
>>>
>>> (Catalina:type=ClusterMembership,host=localhost)
>>>
>>> Jun 17, 2006 8:53:02 PM
>>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>>
>>> start
>>>
>>> INFO: Cluster FarmWarDeployer started.
>>>
>>> Jun 17, 2006 8:53:04 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>> INFO: Register manager /StockTradingServer to cluster element Host
>>> with name
>>> localhost Jun 17, 2006 8:53:04 PM
>>> org.apache.catalina.cluster.session.DeltaManager
>>>
>>> start
>>>
>>>
>>>
>>> ============================================================================
>>>
>>> Anyway, my cluster was working fine for three weeks and then started to
>>> give this error at startup on both nodes.
>>>
>>>
>>>
>>> I have an IBM HTTP Server with the jk connector for load balancing, and
>>> that load is coming to my Tomcat cluster.
>>>
>>>
>>>
>>> following is the server.xml file for both the servers.
>>>
>>>
>>>
>>> ============================================================================
>>>
>>>
>>>
>>> <!-- Example Server Configuration File -->
>>>
>>> <!-- Note that component elements are nested corresponding to their
>>>
>>> parent-child relationships with each other -->
>>>
>>>
>>>
>>> <!-- A "Server" is a singleton element that represents the entire
>>> JVM,
>>>
>>> which may contain one or more "Service" instances. The Server
>>>
>>> listens for a shutdown command on the indicated port.
>>>
>>>
>>>
>>> Note: A "Server" is not itself a "Container", so you may not
>>>
>>> define subcomponents such as "Valves" or "Loggers" at this
>>> level.
>>>
>>> -->
>>>
>>>
>>>
>>> <Server port="8005" shutdown="SHUTDOWN">
>>>
>>>
>>>
>>> <!-- Comment these entries out to disable JMX MBeans support used
>>> for the
>>>
>>> administration web application --> <Listener
>>> className="org.apache.catalina.core.AprLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>>> <Listener
>>> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>>
>>>
>>>
>>> <!-- Global JNDI resources -->
>>>
>>> <GlobalNamingResources>
>>>
>>>
>>>
>>> <!-- Test entry for demonstration purposes -->
>>>
>>> <Environment name="simpleValue" type="java.lang.Integer"
>>> value="30"/>
>>>
>>>
>>>
>>> <!-- Editable user database that can also be used by
>>>
>>> UserDatabaseRealm to authenticate users -->
>>>
>>> <Resource name="UserDatabase" auth="Container"
>>>
>>> type="org.apache.catalina.UserDatabase"
>>>
>>> description="User database that can be updated and saved"
>>>
>>>
>>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>>
>>> pathname="conf/tomcat-users.xml" />
>>>
>>>
>>>
>>> </GlobalNamingResources>
>>>
>>>
>>>
>>> <!-- A "Service" is a collection of one or more "Connectors" that
>>> share
>>>
>>> a single "Container" (and therefore the web applications
>>> visible
>>>
>>> within that Container). Normally, that Container is an
>>> "Engine",
>>>
>>> but this is not required.
>>>
>>>
>>>
>>> Note: A "Service" is not itself a "Container", so you may not
>>>
>>> define subcomponents such as "Valves" or "Loggers" at this
>>> level.
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the Tomcat Stand-Alone Service --> <Service
>>> name="Catalina">
>>>
>>>
>>>
>>> <!-- A "Connector" represents an endpoint by which requests are
>>> received
>>>
>>> and responses are returned. Each Connector passes requests
>>> on to
>>> the
>>>
>>> associated "Container" (normally an Engine) for processing.
>>>
>>>
>>>
>>> By default, a non-SSL HTTP/1.1 Connector is established on
>>> port
>>> 8080.
>>>
>>> You can also enable an SSL HTTP/1.1 Connector on port
>>> 8443 by
>>>
>>> following the instructions below and uncommenting the second
>>> Connector
>>>
>>> entry. SSL support requires the following steps (see the
>>> SSL Config
>>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>>
>>> instructions):
>>>
>>> * If your JDK version 1.3 or prior, download and install
>>> JSSE 1.0.2
>>> or
>>>
>>> later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
>>>
>>> * Execute:
>>>
>>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg
>>> RSA
>>>
>>> (Windows)
>>>
>>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>>> RSA (Unix)
>>> with a password value of "changeit" for both the certificate and
>>>
>>> the keystore itself.
>>>
>>>
>>>
>>> By default, DNS lookups are enabled when a web application
>>> calls
>>>
>>> request.getRemoteHost(). This can have an adverse impact on
>>>
>>> performance, so you can disable it by setting the
>>>
>>> "enableLookups" attribute to "false". When DNS lookups are
>>> disabled,
>>>
>>> request.getRemoteHost() will return the String version of
>>> the
>>>
>>> IP address of the remote client.
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>>
>>> <Connector port="8088" maxHttpHeaderSize="8192"
>>>
>>> maxThreads="300" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" redirectPort="8443"
>>> acceptCount="100"
>>>
>>> connectionTimeout="20000"
>>> disableUploadTimeout="true" />
>>>
>>> <!-- Note : To disable connection timeouts, set
>>> connectionTimeout value
>>>
>>> to 0 -->
>>>
>>>
>>>
>>> <!-- Note : To use gzip compression you could set the
>>> following
>>> properties :
>>>
>>>
>>>
>>> compression="on"
>>>
>>> compressionMinSize="2048"
>>>
>>> noCompressionUserAgents="gozilla,
>>> traviata"
>>>
>>> compressableMimeType="text/html,text/xml"
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>>
>>> <!--
>>>
>>> <Connector port="8443" maxHttpHeaderSize="8192"
>>>
>>> maxThreads="150" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" disableUploadTimeout="true"
>>>
>>> acceptCount="100" scheme="https" secure="true"
>>>
>>> clientAuth="false" sslProtocol="TLS" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>>
>>> <Connector port="8009"
>>>
>>> enableLookups="false" redirectPort="8443"
>>> protocol="AJP/1.3"
>>>
>>> />
>>>
>>>
>>>
>>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>>
>>> <!-- See proxy documentation for more information about using
>>> this. -->
>>>
>>> <!--
>>>
>>> <Connector port="8082"
>>>
>>> maxThreads="150" minSpareThreads="25"
>>> maxSpareThreads="75"
>>>
>>> enableLookups="false" acceptCount="100"
>>>
>>> connectionTimeout="20000"
>>>
>>> proxyPort="80" disableUploadTimeout="true" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- An Engine represents the entry point (within Catalina) that
>>> processes
>>>
>>> every request. The Engine implementation for Tomcat stand
>>> alone
>>>
>>> analyzes the HTTP headers included with the request, and
>>> passes them
>>> on to the appropriate Host (virtual host). -->
>>>
>>>
>>>
>>> <!-- You should set jvmRoute to support load-balancing via AJP
>>> ie :
>>>
>>> <Engine name="Standalone" defaultHost="localhost"
>>> jvmRoute="jvm1">
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the top level container in our container hierarchy
>>> -->
>>>
>>> <Engine name="Catalina" defaultHost="localhost"
>>> jvmRoute="node01">
>>>
>>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>>
>>>
>>>
>>> <!-- The request dumper valve dumps useful debugging
>>> information about
>>>
>>> the request headers and cookies that were received, and
>>> the
>>> response
>>>
>>> headers and cookies that were sent, for all requests
>>> received by
>>>
>>> this instance of Tomcat. If you care only about requests
>>> to a
>>>
>>> particular virtual host, or a particular application,
>>> nest this
>>>
>>> element inside the corresponding <Host> or <Context> entry
>>> instead.
>>>
>>>
>>>
>>> For a similar mechanism that is portable to all Servlet
>>> 2.4
>>>
>>> containers, check out the "RequestDumperFilter" Filter in
>>> the
>>>
>>> example application (the source for this filter may be
>>> found in
>>>
>>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>>
>>>
>>>
>>> Request dumping is disabled by default. Uncomment the
>>> following
>>>
>>> element to enable it. -->
>>>
>>> <!--
>>>
>>> <Valve
>>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Because this Realm is here, an instance will be shared
>>> globally
>>> -->
>>>
>>>
>>>
>>> <!-- This Realm uses the UserDatabase configured in the global
>>> JNDI
>>>
>>> resources under the key "UserDatabase". Any edits
>>>
>>> that are performed against this UserDatabase are
>>> immediately
>>>
>>> available for use by the Realm. -->
>>>
>>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>>
>>> resourceName="UserDatabase"/>
>>>
>>>
>>>
>>> <!-- Comment out the old realm but leave here for now in
>>> case we
>>>
>>> need to go back quickly -->
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Replace the above Realm with one of the following to get
>>> a Realm
>>>
>>> stored in a database and accessed via JDBC -->
>>>
>>>
>>>
>>>
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="org.gjt.mm.mysql.Driver"
>>>
>>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>>
>>> connectionName="kutila" connectionPassword="kutila"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>>
>>>
>>>
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="oracle.jdbc.driver.OracleDriver"
>>>
>>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>>
>>> connectionName="scott" connectionPassword="tiger"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!--
>>>
>>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>>
>>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>>
>>> connectionURL="jdbc:odbc:CATALINA"
>>>
>>> userTable="users" userNameCol="user_name"
>>>
>>> userCredCol="user_pass"
>>>
>>> userRoleTable="user_roles" roleNameCol="role_name" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Define the default virtual host
>>>
>>> Note: XML Schema validation will not work with Xerces 2.2.
>>>
>>> -->
>>>
>>> <Host name="localhost" appBase="webapps"
>>>
>>> unpackWARs="true" autoDeploy="true"
>>>
>>> xmlValidation="false" xmlNamespaceAware="false">
>>>
>>>
>>>
>>> <!-- Defines a cluster for this node,
>>>
>>> By defining this element, means that every manager
>>> will be
>>> changed.
>>>
>>> So when running a cluster, only make sure that you have
>>> webapps
>>> in there
>>>
>>> that need to be clustered and remove the other ones.
>>>
>>> A cluster has the following parameters:
>>>
>>>
>>>
>>> className = the fully qualified name of the cluster
>>> class
>>>
>>>
>>>
>>> clusterName = a descriptive name for your cluster,
>>> can be
>>> anything
>>>
>>>
>>>
>>> mcastAddr = the multicast address, has to be the same
>>> for all
>>> the nodes
>>>
>>>
>>>
>>> mcastPort = the multicast port, has to be the same for
>>> all the
>>> nodes
>>>
>>>
>>>
>>> mcastBindAddress = bind the multicast socket to a
>>> specific
>>> address
>>>
>>>
>>>
>>> mcastTTL = the multicast TTL if you want to limit your
>>> broadcast
>>>
>>> mcastSoTimeout = the multicast readtimeout
>>>
>>>
>>>
>>> mcastFrequency = the number of milliseconds in between
>>> sending a
>>> "I'm alive" heartbeat
>>>
>>>
>>>
>>> mcastDropTime = the number a milliseconds before a
>>> node is
>>> considered "dead" if no heartbeat is received
>>>
>>>
>>>
>>> tcpThreadCount = the number of threads to handle
>>> incoming
>>> replication requests, optimal would be the same amount of threads
>>> as nodes
>>>
>>>
>>>
>>> tcpListenAddress = the listen address (bind address)
>>> for TCP
>>> cluster request on this host,
>>>
>>> in case of multiple ethernet cards.
>>>
>>> auto means that address becomes
>>>
>>> InetAddress.getLocalHost().getHostAddress()
>>>
>>>
>>>
>>> tcpListenPort = the tcp listen port
>>>
>>>
>>>
>>> tcpSelectorTimeout = the timeout (ms) for the
>>> Selector.select()
>>> method in case the OS
>>>
>>> has a wakeup bug in java.nio. Set
>>> to 0 for
>>> no timeout
>>>
>>>
>>>
>>> printToScreen = true means that managers will also
>>> print to
>>> std.out
>>>
>>>
>>>
>>> expireSessionsOnShutdown = true means that
>>>
>>>
>>>
>>> useDirtyFlag = true means that we only replicate a
>>> session after
>>> setAttribute,removeAttribute has been called.
>>>
>>> false means to replicate the session
>>> after each
>>> request.
>>>
>>> false means that replication would work
>>> for the
>>> following piece of code: (only for SimpleTcpReplicationManager)
>>>
>>> <%
>>>
>>> HashMap map =
>>> (HashMap)session.getAttribute("map");
>>>
>>> map.put("key","value");
>>>
>>> %>
>>>
>>> replicationMode = can be either 'pooled',
>>> 'synchronous' or
>>> 'asynchronous'.
>>>
>>> * Pooled means that the replication
>>> happens
>>> using several sockets in a synchronous way. Ie, the data gets
>>> replicated,
>>> then the request return. This is the same as the 'synchronous'
>>> setting
>>> except it uses a pool of sockets, hence it is multithreaded. This
>>> is the
>>> fastest and safest configuration. To use this, also increase the nr
>>> of tcp
>>> threads that you have dealing with replication.
>>>
>>> * Synchronous means that the thread
>>> that
>>> executes the request, is also the
>>>
>>> thread the replicates the data to the
>>> other
>>> nodes, and will not return until all
>>>
>>> nodes have received the information.
>>>
>>> * Asynchronous means that there is a
>>> specific
>>> 'sender' thread for each cluster node,
>>>
>>> so the request thread will queue the
>>> replication request into a "smart" queue,
>>>
>>> and then return to the client.
>>>
>>> The "smart" queue is a queue where
>>> when a
>>> session is added to the queue, and the same session
>>>
>>> already exists in the queue from a
>>> previous
>>> request, that session will be replaced
>>>
>>> in the queue instead of replicating
>>> two
>>> requests. This almost never happens, unless there is a
>>>
>>> large network delay.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> When configuring for clustering, you also add in a valve
>>> to catch
>>> all the requests
>>>
>>> coming in, at the end of the request, the session may or
>>> may not
>>> be replicated.
>>>
>>> A session is replicated if and only if all the
>>> conditions are
>>>
>>> met:
>>>
>>> 1. useDirtyFlag is true or setAttribute or
>>> removeAttribute has
>>> been called AND
>>>
>>> 2. a session exists (has been created)
>>>
>>> 3. the request is not trapped by the "filter" attribute
>>>
>>>
>>>
>>> The filter attribute is to filter out requests that
>>> could not
>>> modify the session,
>>>
>>> hence we don't replicate the session after the end of
>>> this
>>> request.
>>>
>>> The filter is negative, ie, anything you put in the
>>> filter, you
>>> mean to filter out,
>>>
>>> ie, no replication will be done on requests that match
>>> one of the
>>> filters.
>>>
>>> The filter attribute is delimited by ;, so you can't
>>> escape out ;
>>> even if you wanted to.
>>>
>>>
>>>
>>> filter=".*\.gif;.*\.js;" means that we will not
>>> replicate the
>>> session after requests with the URI
>>>
>>> ending with .gif and .js are intercepted.
>>>
>>>
>>>
>>> The deployer element can be used to deploy apps cluster
>>> wide.
>>>
>>> Currently the deployment only deploys/undeploys to
>>> working
>>> members in the cluster
>>>
>>> so no WARs are copied upon startup of a broken node.
>>>
>>> The deployer watches a directory (watchDir) for WAR
>>> files when
>>> watchEnabled="true"
>>>
>>> When a new war file is added the war gets deployed to
>>> the local
>>> instance,
>>>
>>> and then deployed to the other instances in the cluster.
>>>
>>> When a war file is deleted from the watchDir the war is
>>> undeployed locally
>>>
>>> and cluster wide
>>>
>>> -->
>>>
>>>
>>>
>>>
>>>
>>> <Cluster
>>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>>
>>>
>>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>>
>>> expireSessionsOnShutdown="true"
>>>
>>> useDirtyFlag="true"
>>>
>>> notifyListenersOnReplication="true">
>>>
>>>
>>>
>>> <Membership
>>>
>>>
>>> className="org.apache.catalina.cluster.mcast.McastService"
>>>
>>> mcastAddr="228.0.0.4"
>>>
>>> mcastPort="45564"
>>>
>>> mcastFrequency="500"
>>>
>>> mcastDropTime="3000"/>
>>>
>>>
>>>
>>> <Receiver
>>>
>>>
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>>
>>> tcpListenAddress="auto"
>>>
>>> tcpListenPort="4001"
>>>
>>> tcpSelectorTimeout="100"
>>>
>>> tcpThreadCount="2"/>
>>>
>>>
>>>
>>> <Sender
>>>
>>>
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>>
>>> replicationMode="pooled"
>>>
>>> ackTimeout="15000"
>>>
>>> waitForAck="true"/>
>>>
>>>
>>>
>>> <Valve
>>>
>>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>>
>>>
>>>
>>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
>>>
>>>
>>>
>>> <Deployer
>>>
>>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>>
>>> tempDir="/tmp/war-temp/"
>>>
>>> deployDir="/tmp/war-deploy/"
>>>
>>> watchDir="/tmp/war-listen/"
>>>
>>> watchEnabled="false"/>
>>>
>>>
>>>
>>> <ClusterListener
>>>
>>> className="org.apache.catalina.cluster.session.ClusterSessionListener" />
>>>
>>> </Cluster>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> <!-- Normally, users must authenticate themselves to each
>>> web app
>>>
>>> individually. Uncomment the following entry if you
>>> would like
>>>
>>> a user to be authenticated the first time they
>>> encounter a
>>>
>>> resource protected by a security constraint, and then
>>> have that
>>>
>>> user identity maintained across *all* web applications
>>> contained
>>> in this virtual host. -->
>>>
>>> <!--
>>>
>>> <Valve
>>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Access log processes all requests for this virtual
>>> host. By
>>>
>>> default, log files are created in the "logs" directory
>>> relative
>>> to
>>>
>>> $CATALINA_HOME. If you wish, you can specify a
>>> different
>>>
>>> directory with the "directory" attribute. Specify
>>> either a
>>> relative
>>>
>>> (to $CATALINA_HOME) or absolute path to the desired
>>> directory.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>>
>>> directory="logs" prefix="localhost_access_log."
>>>
>>> suffix=".txt"
>>>
>>> pattern="common" resolveHosts="false"/>
>>>
>>> -->
>>>
>>>
>>>
>>> <!-- Access log processes all requests for this virtual
>>> host. By
>>>
>>> default, log files are created in the "logs" directory
>>> relative
>>> to
>>>
>>> $CATALINA_HOME. If you wish, you can specify a
>>> different
>>>
>>> directory with the "directory" attribute. Specify
>>> either a
>>> relative
>>>
>>> (to $CATALINA_HOME) or absolute path to the desired
>>> directory.
>>>
>>> This access log implementation is optimized for maximum
>>> performance,
>>>
>>> but is hardcoded to support only the "common" and
>>> "combined"
>>>
>>> patterns.
>>>
>>> -->
>>>
>>> <!--
>>>
>>> <Valve
>>>
>>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>>
>>> directory="logs" prefix="localhost_access_log."
>>>
>>> suffix=".txt"
>>>
>>> pattern="common" resolveHosts="false"/>
>>>
>>> -->
>>>
>>>
>>>
>>> </Host>
>>>
>>>
>>>
>>> </Engine>
>>>
>>>
>>>
>>> </Service>
>>>
>>>
>>>
>>> </Server>
>>>
>>>
>>>
>>> ===========================================================
>>>
>>>
>>>
>>> I appreciate your prompt help on this since this is a very critical
>>> application that is live at the moment. Please send me an email for any
>>> clarifications.
>>>
>>>
>>>
>>> Thanks and best regards,
>>>
>>> Dilan
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>
>
RE: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Dilan Kelanibandara <di...@beyondm.net>.
Hi Peter,
I was having the memory problem when the cluster manager tried to multicast
the request at Tomcat startup.
As a trial, I commented out the multicast element of the cluster
configuration in server.xml and restarted both Tomcats.
This is the multicast element which I commented.
==============================
<!--
<Membership
className="org.apache.catalina.cluster.mcast.McastService"
mcastAddr="228.0.0.4"
mcastPort="45564"
mcastFrequency="500"
mcastDropTime="3000"/>
-->
==============================
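As a hedged aside: if the underlying problem were a clash on port 45564 rather than multicast as such, an alternative to removing the element entirely would be to keep Membership but move it to a different multicast port. The port value below is purely illustrative (pick any free port), and it must be identical on every node.

```xml
<!-- Illustrative sketch, not a confirmed fix: keep dynamic membership
     but avoid a possible clash on 45564 by using another port. -->
<Membership
    className="org.apache.catalina.cluster.mcast.McastService"
    mcastAddr="228.0.0.4"
    mcastPort="45565"
    mcastFrequency="500"
    mcastDropTime="3000"/>
```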
Then Tomcat started without an OutOfMemoryError, and the replication members
were added to each other. I ran both servers with my application for some
time, and it is working fine; session replication is happening as usual. Can
you let me know whether I can proceed with this setup, or whether commenting
out that element has any effect on session replication?
Thanks and best regards,
Dilan
-----Original Message-----
From: Peter Rossbach [mailto:pr@objektpark.de]
Sent: Sunday, June 18, 2006 9:50 AM
To: Tomcat Users List
Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
Use more JVM options to analyse the memory usage.
Work with faster memory allocation:
-XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8
-Xverbosegc
Or better, use a memory profiler...
But the membership service does not allocate much memory; a very strange effect :-(
Peter
Am 18.06.2006 um 08:01 schrieb Dilan Kelanibandara:
> Hi Peter,
> I am using default JVM parameters coming with tomcat5.5.17. In the
> tomcat
> server.xml file it says tcpThreadCount is normally equal to no.of
> nodes (ie
> 2 in this case).That is why I changed that to 2.
>
> I tried increasing JVM parameters for heap size in tomcat
> Min=1024m
> Max=1024m
> also.I tried with both 512m also. But in both the occasion it is
> the same
> result.
> Thank you for your kind attention.
> I want further clarifications.
> Best regards,
> Dilan
> -----Original Message-----
> From: Peter Rossbach [mailto:pr@objektpark.de]
> Sent: Sunday, June 18, 2006 7:37 AM
> To: Tomcat Users List
> Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error -
> OutOfMemoryError in
> starting up on AS4
>
> Hi,
>
> Which JVM memory parameter you use?
> At pooled mode use more receiver worker set tcpThreadCount="6"!
> You really need deployer? Deployer generate at every startup a large
> cluster message.
>
> Regards
> Peter
>
>
> Am 18.06.2006 um 06:22 schrieb Dilan Kelanibandara:
>
>>
>>
>> Hello,
>>
>>
>>
>> I am getting an OutOfMemoryError continuously when starting up two
>> cluster nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4). It was
>> working fine for three weeks. This error occurred only once before,
>> and restarting Tomcat fixed it.
>>
>>
>>
>>
>>
>> Following is the part of catalina.out relevant to that error for
>> node 1.
>>
>> ============================================================================
>>
>>
>>
>> INFO: Start ClusterSender at cluster
>> Catalina:type=Cluster,host=localhost
>>
>> with name Catalina:type=ClusterSender,host=localhost
>>
>> Jun 17, 2006 8:44:15 PM
>> org.apache.catalina.cluster.mcast.McastService start
>>
>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>> Exception in thread "Cluster-MembershipReceiver"
>> java.lang.OutOfMemoryError:
>>
>>
>> Java heap space
>>
>> Jun 17, 2006 8:44:17 PM
>> org.apache.catalina.cluster.mcast.McastService
>>
>> registerMBean
>>
>> INFO: membership mbean registered
>>
>> (Catalina:type=ClusterMembership,host=localhost)
>>
>> Jun 17, 2006 8:44:17 PM
>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>
>> start
>>
>> INFO: Cluster FarmWarDeployer started.
>>
>> Jun 17, 2006 8:44:19 PM
>> org.apache.catalina.cluster.session.DeltaManager
>>
>> start
>>
>> INFO: Register manager /StockTradingServer to cluster element Host
>> with name
>> localhost Jun 17, 2006 8:44:19 PM
>> org.apache.catalina.cluster.session.DeltaManager
>>
>> start
>>
>> INFO: Starting clustering manager at /StockTradingServer Jun 17, 2006
>> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>>
>> getAllClusterSessions
>>
>> INFO: Manager [/StockTradingServer]: skipping state transfer. No
>> members
>> active in cluster group.
>>
>>
>>
>> ============================================================================
>>
>> node2 startup log is as follows
>>
>> ============================================================================
>>
>> INFO: Cluster is about to start
>>
>> Jun 17, 2006 8:53:00 PM
>>
>> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>>
>> INFO: Start ClusterSender at cluster
>> Catalina:type=Cluster,host=localhost
>>
>> with name Catalina:type=ClusterSender,host=localhost
>>
>> Jun 17, 2006 8:53:00 PM
>> org.apache.catalina.cluster.mcast.McastService start
>>
>> INFO: Sleeping for 2000 milliseconds to establish cluster membership
>> Exception in thread "Cluster-MembershipReceiver"
>> java.lang.OutOfMemoryError:
>>
>>
>> Java heap space
>>
>> Jun 17, 2006 8:53:02 PM
>> org.apache.catalina.cluster.mcast.McastService
>>
>> registerMBean
>>
>> INFO: membership mbean registered
>>
>> (Catalina:type=ClusterMembership,host=localhost)
>>
>> Jun 17, 2006 8:53:02 PM
>> org.apache.catalina.cluster.deploy.FarmWarDeployer
>>
>> start
>>
>> INFO: Cluster FarmWarDeployer started.
>>
>> Jun 17, 2006 8:53:04 PM
>> org.apache.catalina.cluster.session.DeltaManager
>>
>> start
>>
>> INFO: Register manager /StockTradingServer to cluster element Host
>> with name
>> localhost Jun 17, 2006 8:53:04 PM
>> org.apache.catalina.cluster.session.DeltaManager
>>
>> start
>>
>>
>>
>> ============================================================================
>>
>> Anyway, my cluster was working fine for three weeks and then started
>> to give this error at startup on both nodes.
>>
>>
>>
>> I have an IBM HTTP Server with the JK connector for load balancing,
>> and that load comes to my Tomcat cluster.
>>
>>
>>
>> Following is the server.xml file for both servers.
>>
>>
>>
>> ============================================================================
>>
>>
>>
>> <!-- Example Server Configuration File -->
>>
>> <!-- Note that component elements are nested corresponding to their
>>
>> parent-child relationships with each other -->
>>
>>
>>
>> <!-- A "Server" is a singleton element that represents the entire
>> JVM,
>>
>> which may contain one or more "Service" instances. The Server
>>
>> listens for a shutdown command on the indicated port.
>>
>>
>>
>> Note: A "Server" is not itself a "Container", so you may not
>>
>> define subcomponents such as "Valves" or "Loggers" at this level.
>>
>> -->
>>
>>
>>
>> <Server port="8005" shutdown="SHUTDOWN">
>>
>>
>>
>> <!-- Comment these entries out to disable JMX MBeans support used
>> for the administration web application -->
>> <Listener className="org.apache.catalina.core.AprLifecycleListener" />
>> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
>> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
>> <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener"/>
>>
>>
>>
>> <!-- Global JNDI resources -->
>>
>> <GlobalNamingResources>
>>
>>
>>
>> <!-- Test entry for demonstration purposes -->
>>
>> <Environment name="simpleValue" type="java.lang.Integer"
>> value="30"/>
>>
>>
>>
>> <!-- Editable user database that can also be used by
>>
>> UserDatabaseRealm to authenticate users -->
>>
>> <Resource name="UserDatabase" auth="Container"
>>
>> type="org.apache.catalina.UserDatabase"
>>
>> description="User database that can be updated and saved"
>>
>>
>> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>>
>> pathname="conf/tomcat-users.xml" />
>>
>>
>>
>> </GlobalNamingResources>
>>
>>
>>
>> <!-- A "Service" is a collection of one or more "Connectors" that
>> share
>>
>> a single "Container" (and therefore the web applications
>> visible
>>
>> within that Container). Normally, that Container is an
>> "Engine",
>>
>> but this is not required.
>>
>>
>>
>> Note: A "Service" is not itself a "Container", so you may not
>>
>> define subcomponents such as "Valves" or "Loggers" at this
>> level.
>>
>> -->
>>
>>
>>
>> <!-- Define the Tomcat Stand-Alone Service --> <Service
>> name="Catalina">
>>
>>
>>
>> <!-- A "Connector" represents an endpoint by which requests are
>> received
>>
>> and responses are returned. Each Connector passes requests
>> on to
>> the
>>
>> associated "Container" (normally an Engine) for processing.
>>
>>
>>
>> By default, a non-SSL HTTP/1.1 Connector is established on
>> port
>> 8080.
>>
>> You can also enable an SSL HTTP/1.1 Connector on port 8443 by
>>
>> following the instructions below and uncommenting the second
>> Connector
>>
>> entry. SSL support requires the following steps (see the
>> SSL Config
>> HOWTO in the Tomcat 5 documentation bundle for more detailed
>>
>> instructions):
>>
>> * If your JDK version 1.3 or prior, download and install
>> JSSE 1.0.2
>> or
>>
>> later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
>>
>> * Execute:
>>
>> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA
>>
>> (Windows)
>>
>> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
>> RSA (Unix)
>> with a password value of "changeit" for both the certificate and
>>
>> the keystore itself.
>>
>>
>>
>> By default, DNS lookups are enabled when a web application
>> calls
>>
>> request.getRemoteHost(). This can have an adverse impact on
>>
>> performance, so you can disable it by setting the
>>
>> "enableLookups" attribute to "false". When DNS lookups are
>> disabled,
>>
>> request.getRemoteHost() will return the String version of the
>>
>> IP address of the remote client.
>>
>> -->
>>
>>
>>
>> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>>
>> <Connector port="8088" maxHttpHeaderSize="8192"
>>
>> maxThreads="300" minSpareThreads="25"
>> maxSpareThreads="75"
>>
>> enableLookups="false" redirectPort="8443"
>> acceptCount="100"
>>
>> connectionTimeout="20000"
>> disableUploadTimeout="true" />
>>
>> <!-- Note : To disable connection timeouts, set
>> connectionTimeout value
>>
>> to 0 -->
>>
>>
>>
>> <!-- Note : To use gzip compression you could set the
>> following
>> properties :
>>
>>
>>
>> compression="on"
>>
>> compressionMinSize="2048"
>>
>> noCompressionUserAgents="gozilla, traviata"
>>
>> compressableMimeType="text/html,text/xml"
>>
>> -->
>>
>>
>>
>> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>>
>> <!--
>>
>> <Connector port="8443" maxHttpHeaderSize="8192"
>>
>> maxThreads="150" minSpareThreads="25"
>> maxSpareThreads="75"
>>
>> enableLookups="false" disableUploadTimeout="true"
>>
>> acceptCount="100" scheme="https" secure="true"
>>
>> clientAuth="false" sslProtocol="TLS" />
>>
>> -->
>>
>>
>>
>> <!-- Define an AJP 1.3 Connector on port 8009 -->
>>
>> <Connector port="8009"
>>
>> enableLookups="false" redirectPort="8443"
>> protocol="AJP/1.3"
>>
>> />
>>
>>
>>
>> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>>
>> <!-- See proxy documentation for more information about using
>> this. -->
>>
>> <!--
>>
>> <Connector port="8082"
>>
>> maxThreads="150" minSpareThreads="25"
>> maxSpareThreads="75"
>>
>> enableLookups="false" acceptCount="100"
>>
>> connectionTimeout="20000"
>>
>> proxyPort="80" disableUploadTimeout="true" />
>>
>> -->
>>
>>
>>
>> <!-- An Engine represents the entry point (within Catalina) that
>> processes
>>
>> every request. The Engine implementation for Tomcat stand
>> alone
>>
>> analyzes the HTTP headers included with the request, and
>> passes them
>> on to the appropriate Host (virtual host). -->
>>
>>
>>
>> <!-- You should set jvmRoute to support load-balancing via AJP
>> ie :
>>
>> <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
>>
>> -->
>>
>>
>>
>> <!-- Define the top level container in our container hierarchy -->
>>
>> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
>>
>> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>>
>>
>>
>> <!-- The request dumper valve dumps useful debugging
>> information about
>>
>> the request headers and cookies that were received, and the
>> response
>>
>> headers and cookies that were sent, for all requests
>> received by
>>
>> this instance of Tomcat. If you care only about requests
>> to a
>>
>> particular virtual host, or a particular application,
>> nest this
>>
>> element inside the corresponding <Host> or <Context> entry
>> instead.
>>
>>
>>
>> For a similar mechanism that is portable to all Servlet 2.4
>>
>> containers, check out the "RequestDumperFilter" Filter in
>> the
>>
>> example application (the source for this filter may be
>> found in
>>
>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>
>>
>>
>> Request dumping is disabled by default. Uncomment the
>> following
>>
>> element to enable it. -->
>>
>> <!--
>>
>> <Valve
>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>
>> -->
>>
>>
>>
>> <!-- Because this Realm is here, an instance will be shared
>> globally
>> -->
>>
>>
>>
>> <!-- This Realm uses the UserDatabase configured in the global
>> JNDI
>>
>> resources under the key "UserDatabase". Any edits
>>
>> that are performed against this UserDatabase are
>> immediately
>>
>> available for use by the Realm. -->
>>
>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>
>> resourceName="UserDatabase"/>
>>
>>
>>
>> <!-- Comment out the old realm but leave here for now in case we
>>
>> need to go back quickly -->
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>
>> -->
>>
>>
>>
>> <!-- Replace the above Realm with one of the following to get
>> a Realm
>>
>> stored in a database and accessed via JDBC -->
>>
>>
>>
>>
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="org.gjt.mm.mysql.Driver"
>>
>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>
>> connectionName="kutila" connectionPassword="kutila"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>>
>>
>>
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="oracle.jdbc.driver.OracleDriver"
>>
>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>
>> connectionName="scott" connectionPassword="tiger"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>> -->
>>
>>
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>
>> connectionURL="jdbc:odbc:CATALINA"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>> -->
>>
>>
>>
>> <!-- Define the default virtual host
>>
>> Note: XML Schema validation will not work with Xerces 2.2.
>>
>> -->
>>
>> <Host name="localhost" appBase="webapps"
>>
>> unpackWARs="true" autoDeploy="true"
>>
>> xmlValidation="false" xmlNamespaceAware="false">
>>
>>
>>
>> <!-- Defines a cluster for this node,
>>
>> By defining this element, means that every manager
>> will be
>> changed.
>>
>> So when running a cluster, only make sure that you have
>> webapps
>> in there
>>
>> that need to be clustered and remove the other ones.
>>
>> A cluster has the following parameters:
>>
>>
>>
>> className = the fully qualified name of the cluster class
>>
>>
>>
>> clusterName = a descriptive name for your cluster, can be
>> anything
>>
>>
>>
>> mcastAddr = the multicast address, has to be the same
>> for all
>> the nodes
>>
>>
>>
>> mcastPort = the multicast port, has to be the same for
>> all the
>> nodes
>>
>>
>>
>> mcastBindAddress = bind the multicast socket to a
>> specific
>> address
>>
>>
>>
>> mcastTTL = the multicast TTL if you want to limit your
>> broadcast
>>
>> mcastSoTimeout = the multicast readtimeout
>>
>>
>>
>> mcastFrequency = the number of milliseconds in between
>> sending a
>> "I'm alive" heartbeat
>>
>>
>>
>> mcastDropTime = the number a milliseconds before a
>> node is
>> considered "dead" if no heartbeat is received
>>
>>
>>
>> tcpThreadCount = the number of threads to handle incoming
>> replication requests, optimal would be the same amount of threads
>> as nodes
>>
>>
>>
>> tcpListenAddress = the listen address (bind address)
>> for TCP
>> cluster request on this host,
>>
>> in case of multiple ethernet cards.
>>
>> auto means that address becomes
>>
>> InetAddress.getLocalHost
>> ().getHostAddress()
>>
>>
>>
>> tcpListenPort = the tcp listen port
>>
>>
>>
>> tcpSelectorTimeout = the timeout (ms) for the
>> Selector.select()
>> method in case the OS
>>
>> has a wakeup bug in java.nio. Set
>> to 0 for
>> no timeout
>>
>>
>>
>> printToScreen = true means that managers will also
>> print to
>> std.out
>>
>>
>>
>> expireSessionsOnShutdown = true means that
>>
>>
>>
>> useDirtyFlag = true means that we only replicate a
>> session after
>> setAttribute,removeAttribute has been called.
>>
>> false means to replicate the session
>> after each
>> request.
>>
>> false means that replication would work
>> for the
>> following piece of code: (only for SimpleTcpReplicationManager)
>>
>> <%
>>
>> HashMap map =
>> (HashMap)session.getAttribute("map");
>>
>> map.put("key","value");
>>
>> %>
>>
>> replicationMode = can be either 'pooled',
>> 'synchronous' or
>> 'asynchronous'.
>>
>> * Pooled means that the replication
>> happens
>> using several sockets in a synchronous way. Ie, the data gets
>> replicated,
>> then the request return. This is the same as the 'synchronous'
>> setting
>> except it uses a pool of sockets, hence it is multithreaded. This
>> is the
>> fastest and safest configuration. To use this, also increase the nr
>> of tcp
>> threads that you have dealing with replication.
>>
>> * Synchronous means that the thread
>> that
>> executes the request, is also the
>>
>> thread the replicates the data to the
>> other
>> nodes, and will not return until all
>>
>> nodes have received the information.
>>
>> * Asynchronous means that there is a
>> specific
>> 'sender' thread for each cluster node,
>>
>> so the request thread will queue the
>> replication request into a "smart" queue,
>>
>> and then return to the client.
>>
>> The "smart" queue is a queue where
>> when a
>> session is added to the queue, and the same session
>>
>> already exists in the queue from a
>> previous
>> request, that session will be replaced
>>
>> in the queue instead of replicating two
>> requests. This almost never happens, unless there is a
>>
>> large network delay.
>>
>> -->
>>
>> <!--
>>
>> When configuring for clustering, you also add in a valve
>> to catch
>> all the requests
>>
>> coming in, at the end of the request, the session may or
>> may not
>> be replicated.
>>
>> A session is replicated if and only if all the
>> conditions are
>>
>> met:
>>
>> 1. useDirtyFlag is true or setAttribute or
>> removeAttribute has
>> been called AND
>>
>> 2. a session exists (has been created)
>>
>> 3. the request is not trapped by the "filter" attribute
>>
>>
>>
>> The filter attribute is to filter out requests that
>> could not
>> modify the session,
>>
>> hence we don't replicate the session after the end of this
>> request.
>>
>> The filter is negative, ie, anything you put in the
>> filter, you
>> mean to filter out,
>>
>> ie, no replication will be done on requests that match
>> one of the
>> filters.
>>
>> The filter attribute is delimited by ;, so you can't
>> escape out ;
>> even if you wanted to.
>>
>>
>>
>> filter=".*\.gif;.*\.js;" means that we will not
>> replicate the
>> session after requests with the URI
>>
>> ending with .gif and .js are intercepted.
>>
>>
>>
>> The deployer element can be used to deploy apps cluster
>> wide.
>>
>> Currently the deployment only deploys/undeploys to working
>> members in the cluster
>>
>> so no WARs are copied upon startup of a broken node.
>>
>> The deployer watches a directory (watchDir) for WAR
>> files when
>> watchEnabled="true"
>>
>> When a new war file is added the war gets deployed to
>> the local
>> instance,
>>
>> and then deployed to the other instances in the cluster.
>>
>> When a war file is deleted from the watchDir the war is
>> undeployed locally
>>
>> and cluster wide
>>
>> -->
>>
>>
>>
>>
>>
>> <Cluster
>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>
>>
>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>
>> expireSessionsOnShutdown="true"
>>
>> useDirtyFlag="true"
>>
>> notifyListenersOnReplication="true">
>>
>>
>>
>> <Membership
>>
>>
>> className="org.apache.catalina.cluster.mcast.McastService"
>>
>> mcastAddr="228.0.0.4"
>>
>> mcastPort="45564"
>>
>> mcastFrequency="500"
>>
>> mcastDropTime="3000"/>
>>
>>
>>
>> <Receiver
>>
>>
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>
>> tcpListenAddress="auto"
>>
>> tcpListenPort="4001"
>>
>> tcpSelectorTimeout="100"
>>
>> tcpThreadCount="2"/>
>>
>>
>>
>> <Sender
>>
>>
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>
>> replicationMode="pooled"
>>
>> ackTimeout="15000"
>>
>> waitForAck="true"/>
>>
>>
>>
>> <Valve
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>
>>
>>
>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*
>> \.txt;"/>
>>
>>
>>
>> <Deployer
>>
>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>
>> tempDir="/tmp/war-temp/"
>>
>> deployDir="/tmp/war-deploy/"
>>
>> watchDir="/tmp/war-listen/"
>>
>> watchEnabled="false"/>
>>
>>
>>
>> <ClusterListener
>> className="org.apache.catalina.cluster.session.ClusterSessionListener" />
>>
>> </Cluster>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> <!-- Normally, users must authenticate themselves to each
>> web app
>>
>> individually. Uncomment the following entry if you
>> would like
>>
>> a user to be authenticated the first time they
>> encounter a
>>
>> resource protected by a security constraint, and then
>> have that
>>
>> user identity maintained across *all* web applications
>> contained
>> in this virtual host. -->
>>
>> <!--
>>
>> <Valve
>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>
>> -->
>>
>>
>>
>> <!-- Access log processes all requests for this virtual
>> host. By
>>
>> default, log files are created in the "logs" directory
>> relative
>> to
>>
>> $CATALINA_HOME. If you wish, you can specify a different
>>
>> directory with the "directory" attribute. Specify
>> either a
>> relative
>>
>> (to $CATALINA_HOME) or absolute path to the desired
>> directory.
>>
>> -->
>>
>> <!--
>>
>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>
>> directory="logs" prefix="localhost_access_log."
>>
>> suffix=".txt"
>>
>> pattern="common" resolveHosts="false"/>
>>
>> -->
>>
>>
>>
>> <!-- Access log processes all requests for this virtual
>> host. By
>>
>> default, log files are created in the "logs" directory
>> relative
>> to
>>
>> $CATALINA_HOME. If you wish, you can specify a different
>>
>> directory with the "directory" attribute. Specify
>> either a
>> relative
>>
>> (to $CATALINA_HOME) or absolute path to the desired
>> directory.
>>
>> This access log implementation is optimized for maximum
>> performance,
>>
>> but is hardcoded to support only the "common" and
>> "combined"
>>
>> patterns.
>>
>> -->
>>
>> <!--
>>
>> <Valve
>>
>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>
>> directory="logs" prefix="localhost_access_log."
>>
>> suffix=".txt"
>>
>> pattern="common" resolveHosts="false"/>
>>
>> -->
>>
>>
>>
>> </Host>
>>
>>
>>
>> </Engine>
>>
>>
>>
>> </Service>
>>
>>
>>
>> </Server>
>>
>>
>>
>> ===========================================================
>>
>>
>>
>> I appreciate your prompt help on this, since this is a very critical
>> application that is live at the moment. Please email me with any
>> clarifications.
>>
>>
>>
>> Thanks and best regards,
>>
>> Dilan
>>
>>
>>
>>
>>
>
>
> ---------------------------------------------------------------------
> To start a new topic, e-mail: users@tomcat.apache.org
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
>> element inside the corresponding <Host> or <Context> entry
>> instead.
>>
>>
>>
>> For a similar mechanism that is portable to all Servlet 2.4
>>
>> containers, check out the "RequestDumperFilter" Filter in
>> the
>>
>> example application (the source for this filter may be
>> found in
>>
>> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>>
>>
>>
>> Request dumping is disabled by default. Uncomment the
>> following
>>
>> element to enable it. -->
>>
>> <!--
>>
>> <Valve
>> className="org.apache.catalina.valves.RequestDumperValve"/>
>>
>> -->
>>
>>
>>
>> <!-- Because this Realm is here, an instance will be shared
>> globally
>> -->
>>
>>
>>
>> <!-- This Realm uses the UserDatabase configured in the global
>> JNDI
>>
>> resources under the key "UserDatabase". Any edits
>>
>> that are performed against this UserDatabase are
>> immediately
>>
>> available for use by the Realm. -->
>>
>> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>
>> resourceName="UserDatabase"/>
>>
>>
>>
>> <!-- Comment out the old realm but leave here for now in case we
>>
>> need to go back quickly -->
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>>
>> -->
>>
>>
>>
>> <!-- Replace the above Realm with one of the following to get
>> a Realm
>>
>> stored in a database and accessed via JDBC -->
>>
>>
>>
>>
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="org.gjt.mm.mysql.Driver"
>>
>> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>>
>> connectionName="kutila" connectionPassword="kutila"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>>
>>
>>
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="oracle.jdbc.driver.OracleDriver"
>>
>> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>>
>> connectionName="scott" connectionPassword="tiger"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>> -->
>>
>>
>>
>> <!--
>>
>> <Realm className="org.apache.catalina.realm.JDBCRealm"
>>
>> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>>
>> connectionURL="jdbc:odbc:CATALINA"
>>
>> userTable="users" userNameCol="user_name"
>>
>> userCredCol="user_pass"
>>
>> userRoleTable="user_roles" roleNameCol="role_name" />
>>
>> -->
>>
>>
>>
>> <!-- Define the default virtual host
>>
>> Note: XML Schema validation will not work with Xerces 2.2.
>>
>> -->
>>
>> <Host name="localhost" appBase="webapps"
>>
>> unpackWARs="true" autoDeploy="true"
>>
>> xmlValidation="false" xmlNamespaceAware="false">
>>
>>
>>
>> <!-- Defines a cluster for this node,
>>
>> Defining this element means that every manager will be
>> changed.
>>
>> So when running a cluster, only make sure that you have
>> webapps
>> in there
>>
>> that need to be clustered and remove the other ones.
>>
>> A cluster has the following parameters:
>>
>>
>>
>> className = the fully qualified name of the cluster class
>>
>>
>>
>> clusterName = a descriptive name for your cluster, can be
>> anything
>>
>>
>>
>> mcastAddr = the multicast address, has to be the same
>> for all
>> the nodes
>>
>>
>>
>> mcastPort = the multicast port, has to be the same for
>> all the
>> nodes
>>
>>
>>
>> mcastBindAddress = bind the multicast socket to a
>> specific
>> address
>>
>>
>>
>> mcastTTL = the multicast TTL if you want to limit your
>> broadcast
>>
>> mcastSoTimeout = the multicast readtimeout
>>
>>
>>
>> mcastFrequency = the number of milliseconds in between
>> sending a
>> "I'm alive" heartbeat
>>
>>
>>
>> mcastDropTime = the number of milliseconds before a
>> node is
>> considered "dead" if no heartbeat is received
>>
>>
>>
>> tcpThreadCount = the number of threads to handle incoming
>> replication requests, optimal would be the same amount of threads
>> as nodes
>>
>>
>>
>> tcpListenAddress = the listen address (bind address)
>> for TCP
>> cluster request on this host,
>>
>> in case of multiple ethernet cards.
>>
>> auto means that address becomes
>>
>> InetAddress.getLocalHost
>> ().getHostAddress()
>>
>>
>>
>> tcpListenPort = the tcp listen port
>>
>>
>>
>> tcpSelectorTimeout = the timeout (ms) for the
>> Selector.select()
>> method in case the OS
>>
>> has a wakeup bug in java.nio. Set
>> to 0 for
>> no timeout
>>
>>
>>
>> printToScreen = true means that managers will also
>> print to
>> std.out
>>
>>
>>
>> expireSessionsOnShutdown = true means that sessions are expired when this node shuts down
>>
>>
>>
>> useDirtyFlag = true means that we only replicate a
>> session after
>> setAttribute,removeAttribute has been called.
>>
>> false means to replicate the session
>> after each
>> request.
>>
>> false means that replication would work
>> for the
>> following piece of code: (only for SimpleTcpReplicationManager)
>>
>> <%
>>
>> HashMap map =
>> (HashMap)session.getAttribute("map");
>>
>> map.put("key","value");
>>
>> %>
>>
>> replicationMode = can be either 'pooled',
>> 'synchronous' or
>> 'asynchronous'.
>>
>> * Pooled means that the replication
>> happens
>> using several sockets in a synchronous way. Ie, the data gets
>> replicated,
>> then the request returns. This is the same as the 'synchronous'
>> setting
>> except it uses a pool of sockets, hence it is multithreaded. This
>> is the
>> fastest and safest configuration. To use this, also increase the nr
>> of tcp
>> threads that you have dealing with replication.
>>
>> * Synchronous means that the thread
>> that
>> executes the request, is also the
>>
>> thread that replicates the data to the
>> other
>> nodes, and will not return until all
>>
>> nodes have received the information.
>>
>> * Asynchronous means that there is a
>> specific
>> 'sender' thread for each cluster node,
>>
>> so the request thread will queue the
>> replication request into a "smart" queue,
>>
>> and then return to the client.
>>
>> The "smart" queue is a queue where
>> when a
>> session is added to the queue, and the same session
>>
>> already exists in the queue from a
>> previous
>> request, that session will be replaced
>>
>> in the queue instead of replicating two
>> requests. This almost never happens, unless there is a
>>
>> large network delay.
>>
>> -->
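The "smart" queue described in the comment above (for asynchronous replication mode) can be sketched as follows. This is an illustrative sketch only; the class and method names are invented, and Tomcat's actual queue implementation differs in detail:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the "smart" replication queue: enqueuing a session that is
// already pending replaces the earlier entry in place, so the same
// session is not replicated twice for back-to-back requests.
public class SmartQueueSketch {
    private final Map<String, byte[]> pending =
            new LinkedHashMap<String, byte[]>();

    public synchronized void enqueue(String sessionId, byte[] delta) {
        // Same session id: the earlier queued delta is replaced.
        pending.put(sessionId, delta);
    }

    public synchronized int size() {
        return pending.size();
    }

    public static void main(String[] args) {
        SmartQueueSketch q = new SmartQueueSketch();
        q.enqueue("S1", new byte[] {1});
        q.enqueue("S2", new byte[] {2});
        q.enqueue("S1", new byte[] {3}); // replaces S1's entry
        System.out.println(q.size()); // prints 2, not 3
    }
}
```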
>>
>> <!--
>>
>> When configuring for clustering, you also add in a valve
>> to catch
>> all the requests
>>
>> coming in, at the end of the request, the session may or
>> may not
>> be replicated.
>>
>> A session is replicated if and only if all the
>> conditions are
>>
>> met:
>>
>> 1. useDirtyFlag is true or setAttribute or
>> removeAttribute has
>> been called AND
>>
>> 2. a session exists (has been created)
>>
>> 3. the request is not trapped by the "filter" attribute
>>
>>
>>
>> The filter attribute is to filter out requests that
>> could not
>> modify the session,
>>
>> hence we don't replicate the session after the end of this
>> request.
>>
>> The filter is negative, ie, anything you put in the
>> filter, you
>> mean to filter out,
>>
>> ie, no replication will be done on requests that match
>> one of the
>> filters.
>>
>> The filter attribute is delimited by ;, so you can't
>> escape out ;
>> even if you wanted to.
>>
>>
>>
>> filter=".*\.gif;.*\.js;" means that we will not
>> replicate the
>> session after requests with the URI
>>
>> ending with .gif and .js are intercepted.
>>
>>
>>
>> The deployer element can be used to deploy apps cluster
>> wide.
>>
>> Currently the deployment only deploys/undeploys to working
>> members in the cluster
>>
>> so no WARs are copied upon startup of a broken node.
>>
>> The deployer watches a directory (watchDir) for WAR
>> files when
>> watchEnabled="true"
>>
>> When a new war file is added the war gets deployed to
>> the local
>> instance,
>>
>> and then deployed to the other instances in the cluster.
>>
>> When a war file is deleted from the watchDir the war is
>> undeployed locally
>>
>> and cluster wide
>>
>> -->
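The negative filter semantics described above can be sketched in Java. This is a hypothetical helper, not Tomcat code; the ReplicationValve's real matching logic may differ in details such as anchoring:

```java
import java.util.regex.Pattern;

// Sketch of the ReplicationValve "filter" attribute: the attribute is a
// ;-delimited list of regular expressions, and a request URI that
// matches any of them is excluded from session replication.
public class ReplicationFilterSketch {
    static boolean isFilteredOut(String filterAttr, String uri) {
        for (String p : filterAttr.split(";")) {
            if (!p.isEmpty() && Pattern.matches(p, uri)) {
                return true; // matched: do not replicate after this request
            }
        }
        return false; // no match: session may be replicated
    }

    public static void main(String[] args) {
        String filter = ".*\\.gif;.*\\.js;";
        System.out.println(isFilteredOut(filter, "/app/logo.gif")); // true
        System.out.println(isFilteredOut(filter, "/app/trade.do")); // false
    }
}
```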
>>
>>
>>
>>
>>
>> <Cluster
>> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>>
>>
>> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>>
>> expireSessionsOnShutdown="true"
>>
>> useDirtyFlag="true"
>>
>> notifyListenersOnReplication="true">
>>
>>
>>
>> <Membership
>>
>>
>> className="org.apache.catalina.cluster.mcast.McastService"
>>
>> mcastAddr="228.0.0.4"
>>
>> mcastPort="45564"
>>
>> mcastFrequency="500"
>>
>> mcastDropTime="3000"/>
>>
>>
>>
>> <Receiver
>>
>>
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>>
>> tcpListenAddress="auto"
>>
>> tcpListenPort="4001"
>>
>> tcpSelectorTimeout="100"
>>
>> tcpThreadCount="2"/>
>>
>>
>>
>> <Sender
>>
>>
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>>
>> replicationMode="pooled"
>>
>> ackTimeout="15000"
>>
>> waitForAck="true"/>
>>
>>
>>
>> <Valve
>>
>> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>>
>>
>>
>> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*
>> \.txt;"/>
>>
>>
>>
>> <Deployer
>>
>> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>>
>> tempDir="/tmp/war-temp/"
>>
>> deployDir="/tmp/war-deploy/"
>>
>> watchDir="/tmp/war-listen/"
>>
>> watchEnabled="false"/>
>>
>>
>>
>> <ClusterListener
>>
>> className="org.apache.catalina.cluster.session.ClusterSessionListener
>> "
>> />
>>
>> </Cluster>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> <!-- Normally, users must authenticate themselves to each
>> web app
>>
>> individually. Uncomment the following entry if you
>> would like
>>
>> a user to be authenticated the first time they
>> encounter a
>>
>> resource protected by a security constraint, and then
>> have that
>>
>> user identity maintained across *all* web applications
>> contained
>> in this virtual host. -->
>>
>> <!--
>>
>> <Valve
>> className="org.apache.catalina.authenticator.SingleSignOn" />
>>
>> -->
>>
>>
>>
>> <!-- Access log processes all requests for this virtual
>> host. By
>>
>> default, log files are created in the "logs" directory
>> relative
>> to
>>
>> $CATALINA_HOME. If you wish, you can specify a different
>>
>> directory with the "directory" attribute. Specify
>> either a
>> relative
>>
>> (to $CATALINA_HOME) or absolute path to the desired
>> directory.
>>
>> -->
>>
>> <!--
>>
>> <Valve className="org.apache.catalina.valves.AccessLogValve"
>>
>> directory="logs" prefix="localhost_access_log."
>>
>> suffix=".txt"
>>
>> pattern="common" resolveHosts="false"/>
>>
>> -->
>>
>>
>>
>> <!-- Access log processes all requests for this virtual
>> host. By
>>
>> default, log files are created in the "logs" directory
>> relative
>> to
>>
>> $CATALINA_HOME. If you wish, you can specify a different
>>
>> directory with the "directory" attribute. Specify
>> either a
>> relative
>>
>> (to $CATALINA_HOME) or absolute path to the desired
>> directory.
>>
>> This access log implementation is optimized for maximum
>> performance,
>>
>> but is hardcoded to support only the "common" and
>> "combined"
>>
>> patterns.
>>
>> -->
>>
>> <!--
>>
>> <Valve
>>
>> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>>
>> directory="logs" prefix="localhost_access_log."
>>
>> suffix=".txt"
>>
>> pattern="common" resolveHosts="false"/>
>>
>> -->
>>
>>
>>
>> </Host>
>>
>>
>>
>> </Engine>
>>
>>
>>
>> </Service>
>>
>>
>>
>> </Server>
>>
>>
>>
>> ===========================================================
>>
>>
>>
>> I appreciate your prompt help with this, since it is a very critical
>> application that is currently live. Please email me with any
>> clarifications.
>>
>>
>>
>> Thanks and best regards,
>>
>> Dilan
>>
>>
>>
>>
>>
>
>
> ---------------------------------------------------------------------
> To start a new topic, e-mail: users@tomcat.apache.org
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>
>
RE: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Dilan Kelanibandara <di...@beyondm.net>.
Hi Peter,
I am using the default JVM parameters that ship with tomcat5.5.17. The
comments in the tomcat server.xml file say that tcpThreadCount should
normally equal the number of nodes (i.e. 2 in this case); that is why I
changed it to 2.
I tried increasing the JVM heap size parameters for tomcat to
Min=1024m
Max=1024m
and I also tried 512m, but the result was the same on both occasions.
Thank you for your kind attention.
I would appreciate any further clarification.
Best regards,
Dilan
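One quick way to confirm that the Min/Max heap values above are actually reaching the JVM is a check like the following. This is a generic sketch, not from the thread; it should be run with the same options that are passed to Tomcat:

```java
// Prints the maximum heap size the running JVM was actually given.
// Started with -Xmx1024m, this should report roughly 1024 MB; a much
// smaller figure means the option never reached the JVM.
public class MaxHeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024L * 1024L)) + " MB");
    }
}
```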
-----Original Message-----
From: Peter Rossbach [mailto:pr@objektpark.de]
Sent: Sunday, June 18, 2006 7:37 AM
To: Tomcat Users List
Subject: Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in
starting up on AS4
Hi,
Which JVM memory parameters do you use?
In pooled mode, use more receiver workers: set tcpThreadCount="6"!
Do you really need the deployer? The deployer generates a large
cluster message at every startup.
Regards
Peter
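Concretely, Peter's tcpThreadCount suggestion maps onto the <Receiver> element of the server.xml quoted in this thread. This is a sketch; all other attributes are kept as in the original post:

```xml
<!-- Receiver with more worker threads, for pooled replication mode -->
<Receiver
    className="org.apache.catalina.cluster.tcp.ReplicationListener"
    tcpListenAddress="auto"
    tcpListenPort="4001"
    tcpSelectorTimeout="100"
    tcpThreadCount="6"/>
```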
On 18.06.2006 at 06:22, Dilan Kelanibandara wrote:
>
>
> Hello ,
>
>
>
> I am getting OutOfMemoryError continuously when starting up two
> cluster
> nodes of tomcat5.5.17 (jdk1.5 on Advanced Server 4). It had been
> working
> fine for three weeks; this error occurred only once before, and after
> restarting Tomcat it worked.
>
>
>
>
>
> Following is the part of catalina.out relevant to that error for
> node 1.
>
> ============ ========================================================
>
>
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:44:15 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:44:17 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:44:17 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Starting clustering manager at /StockTradingServer Jun 17, 2006
> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>
> getAllClusterSessions
>
> INFO: Manager [/StockTradingServer]: skipping state transfer. No
> members
> active in cluster group.
>
>
>
> ======================================================================
> ======
> =====
>
> node2 startup log is as follows
>
> ======================================================================
> ======
> =====
>
> INFO: Cluster is about to start
>
> Jun 17, 2006 8:53:00 PM
>
> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:53:00 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:53:02 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:53:02 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
>
>
> ======================================================================
> ======
> =
>
> Anyway, my cluster was working fine for three weeks and then started
> to give this error on startup of both nodes.
>
>
>
> I have an IBM HTTP Server with the jk connector for load balancing,
> and that load is coming to my tomcat cluster.
>
>
>
> following is the server.xml file for both the servers.
>
>
>
> ======================================================================
> ======
> =
>
>
>
> <!-- Example Server Configuration File -->
>
> <!-- Note that component elements are nested corresponding to their
>
> parent-child relationships with each other -->
>
>
>
> <!-- A "Server" is a singleton element that represents the entire JVM,
>
> which may contain one or more "Service" instances. The Server
>
> listens for a shutdown command on the indicated port.
>
>
>
> Note: A "Server" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this level.
>
> -->
>
>
>
> <Server port="8005" shutdown="SHUTDOWN">
>
>
>
> <!-- Comment these entries out to disable JMX MBeans support used
> for the
>
> administration web application --> <Listener
> className="org.apache.catalina.core.AprLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener
> " />
> <Listener
> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListene
> r"/>
>
>
>
> <!-- Global JNDI resources -->
>
> <GlobalNamingResources>
>
>
>
> <!-- Test entry for demonstration purposes -->
>
> <Environment name="simpleValue" type="java.lang.Integer"
> value="30"/>
>
>
>
> <!-- Editable user database that can also be used by
>
> UserDatabaseRealm to authenticate users -->
>
> <Resource name="UserDatabase" auth="Container"
>
> type="org.apache.catalina.UserDatabase"
>
> description="User database that can be updated and saved"
>
>
> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>
> pathname="conf/tomcat-users.xml" />
>
>
>
> </GlobalNamingResources>
>
>
>
> <!-- A "Service" is a collection of one or more "Connectors" that
> share
>
> a single "Container" (and therefore the web applications visible
>
> within that Container). Normally, that Container is an
> "Engine",
>
> but this is not required.
>
>
>
> Note: A "Service" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this
> level.
>
> -->
>
>
>
> <!-- Define the Tomcat Stand-Alone Service --> <Service
> name="Catalina">
>
>
>
> <!-- A "Connector" represents an endpoint by which requests are
> received
>
> and responses are returned. Each Connector passes requests
> on to
> the
>
> associated "Container" (normally an Engine) for processing.
>
>
>
> By default, a non-SSL HTTP/1.1 Connector is established on
> port
> 8080.
>
> You can also enable an SSL HTTP/1.1 Connector on port 8443 by
>
> following the instructions below and uncommenting the second
> Connector
>
> entry. SSL support requires the following steps (see the
> SSL Config
> HOWTO in the Tomcat 5 documentation bundle for more detailed
>
> instructions):
>
> * If your JDK version is 1.3 or prior, download and install
> JSSE 1.0.2
> or
>
> later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
>
> * Execute:
>
> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA
>
> (Windows)
>
> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
> RSA (Unix)
> with a password value of "changeit" for both the certificate and
>
> the keystore itself.
>
>
>
> By default, DNS lookups are enabled when a web application
> calls
>
> request.getRemoteHost(). This can have an adverse impact on
>
> performance, so you can disable it by setting the
>
> "enableLookups" attribute to "false". When DNS lookups are
> disabled,
>
> request.getRemoteHost() will return the String version of the
>
> IP address of the remote client.
>
> -->
>
>
>
> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>
> <Connector port="8088" maxHttpHeaderSize="8192"
>
> maxThreads="300" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" redirectPort="8443"
> acceptCount="100"
>
> connectionTimeout="20000" disableUploadTimeout="true" />
>
> <!-- Note : To disable connection timeouts, set
> connectionTimeout value
>
> to 0 -->
>
>
>
> <!-- Note : To use gzip compression you could set the following
> properties :
>
>
>
> compression="on"
>
> compressionMinSize="2048"
>
> noCompressionUserAgents="gozilla, traviata"
>
> compressableMimeType="text/html,text/xml"
>
> -->
>
>
>
> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>
> <!--
>
> <Connector port="8443" maxHttpHeaderSize="8192"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" disableUploadTimeout="true"
>
> acceptCount="100" scheme="https" secure="true"
>
> clientAuth="false" sslProtocol="TLS" />
>
> -->
>
>
>
> <!-- Define an AJP 1.3 Connector on port 8009 -->
>
> <Connector port="8009"
>
> enableLookups="false" redirectPort="8443"
> protocol="AJP/1.3"
>
> />
>
>
>
> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>
> <!-- See proxy documentation for more information about using
> this. -->
>
> <!--
>
> <Connector port="8082"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" acceptCount="100"
>
> connectionTimeout="20000"
>
> proxyPort="80" disableUploadTimeout="true" />
>
> -->
>
>
>
> <!-- An Engine represents the entry point (within Catalina) that
> processes
>
> every request. The Engine implementation for Tomcat stand
> alone
>
> analyzes the HTTP headers included with the request, and
> passes them
> on to the appropriate Host (virtual host). -->
>
>
>
> <!-- You should set jvmRoute to support load-balancing via AJP ie :
>
> <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
>
> -->
>
>
>
> <!-- Define the top level container in our container hierarchy -->
>
> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
>
> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>
>
>
> <!-- The request dumper valve dumps useful debugging
> information about
>
> the request headers and cookies that were received, and the
> response
>
> headers and cookies that were sent, for all requests
> received by
>
> this instance of Tomcat. If you care only about requests
> to a
>
> particular virtual host, or a particular application,
> nest this
>
> element inside the corresponding <Host> or <Context> entry
> instead.
>
>
>
> For a similar mechanism that is portable to all Servlet 2.4
>
> containers, check out the "RequestDumperFilter" Filter in
> the
>
> example application (the source for this filter may be
> found in
>
> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>
>
>
> Request dumping is disabled by default. Uncomment the
> following
>
> element to enable it. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.valves.RequestDumperValve"/>
>
> -->
>
>
>
> <!-- Because this Realm is here, an instance will be shared
> globally
> -->
>
>
>
> <!-- This Realm uses the UserDatabase configured in the global
> JNDI
>
> resources under the key "UserDatabase". Any edits
>
> that are performed against this UserDatabase are immediately
>
> available for use by the Realm. -->
>
> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>
> resourceName="UserDatabase"/>
>
>
>
> <!-- Comment out the old realm but leave here for now in case we
>
> need to go back quickly -->
>
> <!--
>
> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>
> -->
>
>
>
> <!-- Replace the above Realm with one of the following to get
> a Realm
>
> stored in a database and accessed via JDBC -->
>
>
>
>
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="org.gjt.mm.mysql.Driver"
>
> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>
> connectionName="kutila" connectionPassword="kutila"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
>
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="oracle.jdbc.driver.OracleDriver"
>
> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>
> connectionName="scott" connectionPassword="tiger"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>
> connectionURL="jdbc:odbc:CATALINA"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!-- Define the default virtual host
>
> Note: XML Schema validation will not work with Xerces 2.2.
>
> -->
>
> <Host name="localhost" appBase="webapps"
>
> unpackWARs="true" autoDeploy="true"
>
> xmlValidation="false" xmlNamespaceAware="false">
>
>
>
> <!-- Defines a cluster for this node,
>
> Defining this element means that every manager will be
> changed.
>
> So when running a cluster, only make sure that you have
> webapps
> in there
>
> that need to be clustered and remove the other ones.
>
> A cluster has the following parameters:
>
>
>
> className = the fully qualified name of the cluster class
>
>
>
> clusterName = a descriptive name for your cluster, can be
> anything
>
>
>
> mcastAddr = the multicast address, has to be the same
> for all
> the nodes
>
>
>
> mcastPort = the multicast port, has to be the same for
> all the
> nodes
>
>
>
> mcastBindAddress = bind the multicast socket to a specific
> address
>
>
>
> mcastTTL = the multicast TTL if you want to limit your
> broadcast
>
> mcastSoTimeout = the multicast readtimeout
>
>
>
> mcastFrequency = the number of milliseconds in between
> sending a
> "I'm alive" heartbeat
>
>
>
> mcastDropTime = the number of milliseconds before a node is
> considered "dead" if no heartbeat is received
>
>
>
> tcpThreadCount = the number of threads to handle incoming
> replication requests, optimal would be the same amount of threads
> as nodes
>
>
>
> tcpListenAddress = the listen address (bind address)
> for TCP
> cluster request on this host,
>
> in case of multiple ethernet cards.
>
> auto means that address becomes
>
> InetAddress.getLocalHost
> ().getHostAddress()
>
>
>
> tcpListenPort = the tcp listen port
>
>
>
> tcpSelectorTimeout = the timeout (ms) for the
> Selector.select()
> method in case the OS
>
> has a wakeup bug in java.nio. Set
> to 0 for
> no timeout
>
>
>
> printToScreen = true means that managers will also
> print to
> std.out
>
>
>
> expireSessionsOnShutdown = true means that sessions are expired when this node shuts down
>
>
>
> useDirtyFlag = true means that we only replicate a
> session after
> setAttribute,removeAttribute has been called.
>
> false means to replicate the session
> after each
> request.
>
> false means that replication would work
> for the
> following piece of code: (only for SimpleTcpReplicationManager)
>
> <%
>
> HashMap map =
> (HashMap)session.getAttribute("map");
>
> map.put("key","value");
>
> %>
>
> replicationMode = can be either 'pooled', 'synchronous' or
> 'asynchronous'.
>
> * Pooled means that the replication
> happens
> using several sockets in a synchronous way. Ie, the data gets
> replicated,
> then the request returns. This is the same as the 'synchronous' setting
> except it uses a pool of sockets, hence it is multithreaded. This
> is the
> fastest and safest configuration. To use this, also increase the nr
> of tcp
> threads that you have dealing with replication.
>
> * Synchronous means that the thread that
> executes the request, is also the
>
> thread that replicates the data to the
> other
> nodes, and will not return until all
>
> nodes have received the information.
>
> * Asynchronous means that there is a
> specific
> 'sender' thread for each cluster node,
>
> so the request thread will queue the
> replication request into a "smart" queue,
>
> and then return to the client.
>
> The "smart" queue is a queue where
> when a
> session is added to the queue, and the same session
>
> already exists in the queue from a
> previous
> request, that session will be replaced
>
> in the queue instead of replicating two
> requests. This almost never happens, unless there is a
>
> large network delay.
>
> -->
>
> <!--
>
> When configuring for clustering, you also add in a valve
> to catch
> all the requests
>
> coming in, at the end of the request, the session may or
> may not
> be replicated.
>
> A session is replicated if and only if all the
> conditions are
>
> met:
>
> 1. useDirtyFlag is true or setAttribute or
> removeAttribute has
> been called AND
>
> 2. a session exists (has been created)
>
> 3. the request is not trapped by the "filter" attribute
>
>
>
> The filter attribute is to filter out requests that
> could not
> modify the session,
>
> hence we don't replicate the session after the end of this
> request.
>
> The filter is negative, ie, anything you put in the
> filter, you
> mean to filter out,
>
> ie, no replication will be done on requests that match
> one of the
> filters.
>
> The filter attribute is delimited by ;, so you can't
> escape out ;
> even if you wanted to.
>
>
>
> filter=".*\.gif;.*\.js;" means that we will not
> replicate the
> session after requests with the URI
>
> ending with .gif and .js are intercepted.
>
>
>
> The deployer element can be used to deploy apps cluster
> wide.
>
> Currently the deployment only deploys/undeploys to working
> members in the cluster
>
> so no WARs are copied upon startup of a broken node.
>
> The deployer watches a directory (watchDir) for WAR
> files when
> watchEnabled="true"
>
> When a new war file is added the war gets deployed to
> the local
> instance,
>
> and then deployed to the other instances in the cluster.
>
> When a war file is deleted from the watchDir the war is
> undeployed locally
>
> and cluster wide
>
> -->
>
>
>
>
>
> <Cluster
> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>
>
> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>
> expireSessionsOnShutdown="true"
>
> useDirtyFlag="true"
>
> notifyListenersOnReplication="true">
>
>
>
> <Membership
>
>
> className="org.apache.catalina.cluster.mcast.McastService"
>
> mcastAddr="228.0.0.4"
>
> mcastPort="45564"
>
> mcastFrequency="500"
>
> mcastDropTime="3000"/>
>
>
>
> <Receiver
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>
> tcpListenAddress="auto"
>
> tcpListenPort="4001"
>
> tcpSelectorTimeout="100"
>
> tcpThreadCount="2"/>
>
>
>
> <Sender
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>
> replicationMode="pooled"
>
> ackTimeout="15000"
>
> waitForAck="true"/>
>
>
>
> <Valve
>
> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>
>
>
> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*
> \.txt;"/>
>
>
>
> <Deployer
>
> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>
> tempDir="/tmp/war-temp/"
>
> deployDir="/tmp/war-deploy/"
>
> watchDir="/tmp/war-listen/"
>
> watchEnabled="false"/>
>
>
>
> <ClusterListener
>
> className="org.apache.catalina.cluster.session.ClusterSessionListener"
> />
>
> </Cluster>
>
>
>
>
>
>
>
>
>
> <!-- Normally, users must authenticate themselves to each
> web app
>
> individually. Uncomment the following entry if you
> would like
>
> a user to be authenticated the first time they encounter a
>
> resource protected by a security constraint, and then
> have that
>
> user identity maintained across *all* web applications
> contained
> in this virtual host. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.authenticator.SingleSignOn" />
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> -->
>
> <!--
>
> <Valve className="org.apache.catalina.valves.AccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> This access log implementation is optimized for maximum
> performance,
>
> but is hardcoded to support only the "common" and
> "combined"
>
> patterns.
>
> -->
>
> <!--
>
> <Valve
>
> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> </Host>
>
>
>
> </Engine>
>
>
>
> </Service>
>
>
>
> </Server>
>
>
>
> ===========================================================
>
>
>
> I appreciate your prompt help on this, since this is a very critical
> application that is live at the moment. Please email me for any
> clarifications.
>
>
>
> Thanks and best regards,
>
> Dilan
>
>
>
>
>
---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
---------------------------------------------------------------------
Re: tomcat5.5.17 cluster(using jdk1.5) error - OutOfMemoryError in starting up on AS4
Posted by Peter Rossbach <pr...@objektpark.de>.
Hi,
Which JVM memory parameters do you use?
In pooled mode, use more receiver workers: set tcpThreadCount="6".
Do you really need the deployer? The deployer generates a large
cluster message at every startup.
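For example, the heap can be raised per node through CATALINA_OPTS (a sketch only; the script path and the sizes below are assumptions to tune for your session load):

```shell
# Hypothetical $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh
# if present. The heap sizes are examples, not recommendations.
CATALINA_OPTS="-Xms256m -Xmx512m"
export CATALINA_OPTS
```

On recent JDK 1.5 updates you can also add -XX:+HeapDumpOnOutOfMemoryError to CATALINA_OPTS to get a heap dump for post-mortem analysis of the OutOfMemoryError.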
Regards
Peter
On 18.06.2006 at 06:22, Dilan Kelanibandara wrote:
>
>
> Hello ,
>
>
>
> I am getting OutOfMemoryError continuously when starting up two
> cluster nodes of Tomcat 5.5.17 (JDK 1.5 on Advanced Server 4).
> Anyway, it was working fine for 3 weeks. This error occurred only
> once before, and when Tomcat was restarted, it worked.
>
>
>
>
>
> Following is the part of catalina.out relevant to that error for
> node 1.
>
> ============ ========================================================
>
>
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:44:15 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:44:17 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:44:17 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:44:19 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Starting clustering manager at /StockTradingServer Jun 17, 2006
> 8:44:19 PM org.apache.catalina.cluster.session.DeltaManager
>
> getAllClusterSessions
>
> INFO: Manager [/StockTradingServer]: skipping state transfer. No
> members
> active in cluster group.
>
>
>
> ======================================================================
> ======
> =====
>
> node2 startup log is as follows
>
> ======================================================================
> ======
> =====
>
> INFO: Cluster is about to start
>
> Jun 17, 2006 8:53:00 PM
>
> org.apache.catalina.cluster.tcp.ReplicationTransmitter start
>
> INFO: Start ClusterSender at cluster
> Catalina:type=Cluster,host=localhost
>
> with name Catalina:type=ClusterSender,host=localhost
>
> Jun 17, 2006 8:53:00 PM
> org.apache.catalina.cluster.mcast.McastService start
>
> INFO: Sleeping for 2000 milliseconds to establish cluster membership
> Exception in thread "Cluster-MembershipReceiver"
> java.lang.OutOfMemoryError:
>
>
> Java heap space
>
> Jun 17, 2006 8:53:02 PM org.apache.catalina.cluster.mcast.McastService
>
> registerMBean
>
> INFO: membership mbean registered
>
> (Catalina:type=ClusterMembership,host=localhost)
>
> Jun 17, 2006 8:53:02 PM
> org.apache.catalina.cluster.deploy.FarmWarDeployer
>
> start
>
> INFO: Cluster FarmWarDeployer started.
>
> Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
> INFO: Register manager /StockTradingServer to cluster element Host
> with name
> localhost Jun 17, 2006 8:53:04 PM
> org.apache.catalina.cluster.session.DeltaManager
>
> start
>
>
>
> ======================================================================
> ======
> =
>
> Anyway, my cluster was working fine for 3 weeks and then started to
> give this error on startup of both nodes.
>
>
>
> I have an IBM HTTP Server with the jk connector for load balancing,
> and that load is coming to my Tomcat cluster.
>
>
>
> Following is the server.xml file for both servers.
>
>
>
> ======================================================================
> ======
> =
>
>
>
> <!-- Example Server Configuration File -->
>
> <!-- Note that component elements are nested corresponding to their
>
> parent-child relationships with each other -->
>
>
>
> <!-- A "Server" is a singleton element that represents the entire JVM,
>
> which may contain one or more "Service" instances. The Server
>
> listens for a shutdown command on the indicated port.
>
>
>
> Note: A "Server" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this level.
>
> -->
>
>
>
> <Server port="8005" shutdown="SHUTDOWN">
>
>
>
> <!-- Comment these entries out to disable JMX MBeans support used
> for the
>
> administration web application --> <Listener
> className="org.apache.catalina.core.AprLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.ServerLifecycleListener" />
> <Listener
> className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener
> " />
> <Listener
> className="org.apache.catalina.storeconfig.StoreConfigLifecycleListene
> r"/>
>
>
>
> <!-- Global JNDI resources -->
>
> <GlobalNamingResources>
>
>
>
> <!-- Test entry for demonstration purposes -->
>
> <Environment name="simpleValue" type="java.lang.Integer"
> value="30"/>
>
>
>
> <!-- Editable user database that can also be used by
>
> UserDatabaseRealm to authenticate users -->
>
> <Resource name="UserDatabase" auth="Container"
>
> type="org.apache.catalina.UserDatabase"
>
> description="User database that can be updated and saved"
>
>
> factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
>
> pathname="conf/tomcat-users.xml" />
>
>
>
> </GlobalNamingResources>
>
>
>
> <!-- A "Service" is a collection of one or more "Connectors" that
> share
>
> a single "Container" (and therefore the web applications visible
>
> within that Container). Normally, that Container is an
> "Engine",
>
> but this is not required.
>
>
>
> Note: A "Service" is not itself a "Container", so you may not
>
> define subcomponents such as "Valves" or "Loggers" at this
> level.
>
> -->
>
>
>
> <!-- Define the Tomcat Stand-Alone Service --> <Service
> name="Catalina">
>
>
>
> <!-- A "Connector" represents an endpoint by which requests are
> received
>
> and responses are returned. Each Connector passes requests
> on to
> the
>
> associated "Container" (normally an Engine) for processing.
>
>
>
> By default, a non-SSL HTTP/1.1 Connector is established on
> port
> 8080.
>
> You can also enable an SSL HTTP/1.1 Connector on port 8443 by
>
> following the instructions below and uncommenting the second
> Connector
>
> entry. SSL support requires the following steps (see the
> SSL Config
> HOWTO in the Tomcat 5 documentation bundle for more detailed
>
> instructions):
>
> * If your JDK version is 1.3 or earlier, download and install
> JSSE 1.0.2
> or
>
> later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
>
> * Execute:
>
> %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA
>
> (Windows)
>
> $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg
> RSA (Unix)
> with a password value of "changeit" for both the certificate and
>
> the keystore itself.
>
>
>
> By default, DNS lookups are enabled when a web application
> calls
>
> request.getRemoteHost(). This can have an adverse impact on
>
> performance, so you can disable it by setting the
>
> "enableLookups" attribute to "false". When DNS lookups are
> disabled,
>
> request.getRemoteHost() will return the String version of the
>
> IP address of the remote client.
>
> -->
>
>
>
> <!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
>
> <Connector port="8088" maxHttpHeaderSize="8192"
>
> maxThreads="300" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" redirectPort="8443"
> acceptCount="100"
>
> connectionTimeout="20000" disableUploadTimeout="true" />
>
> <!-- Note : To disable connection timeouts, set
> connectionTimeout value
>
> to 0 -->
>
>
>
> <!-- Note : To use gzip compression you could set the following
> properties :
>
>
>
> compression="on"
>
> compressionMinSize="2048"
>
> noCompressionUserAgents="gozilla, traviata"
>
> compressableMimeType="text/html,text/xml"
>
> -->
>
>
>
> <!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
>
> <!--
>
> <Connector port="8443" maxHttpHeaderSize="8192"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" disableUploadTimeout="true"
>
> acceptCount="100" scheme="https" secure="true"
>
> clientAuth="false" sslProtocol="TLS" />
>
> -->
>
>
>
> <!-- Define an AJP 1.3 Connector on port 8009 -->
>
> <Connector port="8009"
>
> enableLookups="false" redirectPort="8443"
> protocol="AJP/1.3"
>
> />
>
>
>
> <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
>
> <!-- See proxy documentation for more information about using
> this. -->
>
> <!--
>
> <Connector port="8082"
>
> maxThreads="150" minSpareThreads="25"
> maxSpareThreads="75"
>
> enableLookups="false" acceptCount="100"
>
> connectionTimeout="20000"
>
> proxyPort="80" disableUploadTimeout="true" />
>
> -->
>
>
>
> <!-- An Engine represents the entry point (within Catalina) that
> processes
>
> every request. The Engine implementation for Tomcat stand
> alone
>
> analyzes the HTTP headers included with the request, and
> passes them
> on to the appropriate Host (virtual host). -->
>
>
>
> <!-- You should set jvmRoute to support load-balancing via AJP ie :
>
> <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
>
> -->
>
>
>
> <!-- Define the top level container in our container hierarchy -->
>
> <Engine name="Catalina" defaultHost="localhost" jvmRoute="node01">
>
> <!-- <Engine name="Catalina" jvmRoute="$JAVA_HOME" > -->
>
>
>
> <!-- The request dumper valve dumps useful debugging
> information about
>
> the request headers and cookies that were received, and the
> response
>
> headers and cookies that were sent, for all requests
> received by
>
> this instance of Tomcat. If you care only about requests
> to a
>
> particular virtual host, or a particular application,
> nest this
>
> element inside the corresponding <Host> or <Context> entry
> instead.
>
>
>
> For a similar mechanism that is portable to all Servlet 2.4
>
> containers, check out the "RequestDumperFilter" Filter in
> the
>
> example application (the source for this filter may be
> found in
>
> "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
>
>
>
> Request dumping is disabled by default. Uncomment the
> following
>
> element to enable it. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.valves.RequestDumperValve"/>
>
> -->
>
>
>
> <!-- Because this Realm is here, an instance will be shared
> globally
> -->
>
>
>
> <!-- This Realm uses the UserDatabase configured in the global
> JNDI
>
> resources under the key "UserDatabase". Any edits
>
> that are performed against this UserDatabase are immediately
>
> available for use by the Realm. -->
>
> <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>
> resourceName="UserDatabase"/>
>
>
>
> <!-- Comment out the old realm but leave here for now in case we
>
> need to go back quickly -->
>
> <!--
>
> <Realm className="org.apache.catalina.realm.MemoryRealm" />
>
> -->
>
>
>
> <!-- Replace the above Realm with one of the following to get
> a Realm
>
> stored in a database and accessed via JDBC -->
>
>
>
>
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="org.gjt.mm.mysql.Driver"
>
> connectionURL="jdbc:mysql://172.16.1.55:3306/kutila"
>
> connectionName="kutila" connectionPassword="kutila"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
>
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="oracle.jdbc.driver.OracleDriver"
>
> connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
>
> connectionName="scott" connectionPassword="tiger"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!--
>
> <Realm className="org.apache.catalina.realm.JDBCRealm"
>
> driverName="sun.jdbc.odbc.JdbcOdbcDriver"
>
> connectionURL="jdbc:odbc:CATALINA"
>
> userTable="users" userNameCol="user_name"
>
> userCredCol="user_pass"
>
> userRoleTable="user_roles" roleNameCol="role_name" />
>
> -->
>
>
>
> <!-- Define the default virtual host
>
> Note: XML Schema validation will not work with Xerces 2.2.
>
> -->
>
> <Host name="localhost" appBase="webapps"
>
> unpackWARs="true" autoDeploy="true"
>
> xmlValidation="false" xmlNamespaceAware="false">
>
>
>
> <!-- Defines a cluster for this node.
>
> Defining this element means that every session manager will be
> changed. So when running a cluster, make sure that the only
> webapps deployed are the ones that need to be clustered, and
> remove the others.
>
> A cluster has the following parameters:
>
>
>
> className = the fully qualified name of the cluster class
>
>
>
> clusterName = a descriptive name for your cluster, can be
> anything
>
>
>
> mcastAddr = the multicast address, has to be the same
> for all
> the nodes
>
>
>
> mcastPort = the multicast port, has to be the same for
> all the
> nodes
>
>
>
> mcastBindAddress = bind the multicast socket to a specific
> address
>
>
>
> mcastTTL = the multicast TTL if you want to limit your
> broadcast
>
> mcastSoTimeout = the multicast read timeout
>
>
>
> mcastFrequency = the number of milliseconds in between
> sending a
> "I'm alive" heartbeat
>
>
>
> mcastDropTime = the number of milliseconds before a node is
> considered "dead" if no heartbeat is received
>
>
>
> tcpThreadCount = the number of threads to handle incoming
> replication requests, optimal would be the same amount of threads
> as nodes
>
>
>
> tcpListenAddress = the listen address (bind address)
> for TCP
> cluster request on this host,
>
> in case of multiple ethernet cards.
>
> auto means that address becomes
>
> InetAddress.getLocalHost
> ().getHostAddress()
>
>
>
> tcpListenPort = the tcp listen port
>
>
>
> tcpSelectorTimeout = the timeout (ms) for the
> Selector.select()
> method in case the OS
>
> has a wakeup bug in java.nio. Set
> to 0 for
> no timeout
>
>
>
> printToScreen = true means that managers will also
> print to
> std.out
>
>
>
> expireSessionsOnShutdown = true means that when this node shuts
> down, its sessions are expired on the remaining cluster members
>
>
>
> useDirtyFlag = true means that we only replicate a
> session after
> setAttribute or removeAttribute has been called.
>
> false means to replicate the session
> after each
> request.
>
> false means that replication would work
> for the
> following piece of code: (only for SimpleTcpReplicationManager)
>
> <%
>
> HashMap map =
> (HashMap)session.getAttribute("map");
>
> map.put("key","value");
>
> %>
>
> replicationMode = can be either 'pooled', 'synchronous' or
> 'asynchronous'.
>
> * Pooled means that the replication
> happens
> using several sockets in a synchronous way. Ie, the data gets
> replicated,
> then the request returns. This is the same as the 'synchronous' setting
> except it uses a pool of sockets, hence it is multithreaded. This
> is the
> fastest and safest configuration. To use this, also increase the number
> of TCP
> threads that you have dealing with replication.
>
> * Synchronous means that the thread that
> executes the request, is also the
>
> thread that replicates the data to the
> other
> nodes, and will not return until all
>
> nodes have received the information.
>
> * Asynchronous means that there is a
> specific
> 'sender' thread for each cluster node,
>
> so the request thread will queue the
> replication request into a "smart" queue,
>
> and then return to the client.
>
> The "smart" queue is a queue in which, if a session being
> added already exists in the queue from a previous request,
> the existing entry is replaced instead of two replication
> requests being sent. This almost never happens, unless
> there is a large network delay.
>
> -->
>
> <!--
>
> When configuring for clustering, you also add in a valve
> to catch
> all the requests
>
> coming in; at the end of the request, the session may or
> may not be replicated.
>
> A session is replicated if and only if all the
> conditions are
>
> met:
>
> 1. useDirtyFlag is true or setAttribute or
> removeAttribute has
> been called AND
>
> 2. a session exists (has been created)
>
> 3. the request is not trapped by the "filter" attribute
>
>
>
> The filter attribute is used to filter out requests that
> cannot modify the session, so the session is not
> replicated at the end of such a request.
>
> The filter is negative: any request whose URI matches one
> of the filter patterns is excluded from replication.
>
> The filter attribute is delimited by ;, so you can't
> escape out ;
> even if you wanted to.
>
>
>
> filter=".*\.gif;.*\.js;" means that we will not
> replicate the
> session after requests with the URI
>
> ending with .gif and .js are intercepted.
>
>
>
> The deployer element can be used to deploy apps cluster
> wide.
>
> Currently the deployment only deploys/undeploys to working
> members in the cluster
>
> so no WARs are copied upon startup of a broken node.
>
> The deployer watches a directory (watchDir) for WAR
> files when
> watchEnabled="true"
>
> When a new war file is added the war gets deployed to
> the local
> instance,
>
> and then deployed to the other instances in the cluster.
>
> When a war file is deleted from the watchDir the war is
> undeployed locally
>
> and cluster wide
>
> -->
>
>
>
>
>
> <Cluster
> className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
>
>
> managerClassName="org.apache.catalina.cluster.session.DeltaManager"
>
> expireSessionsOnShutdown="true"
>
> useDirtyFlag="true"
>
> notifyListenersOnReplication="true">
>
>
>
> <Membership
>
>
> className="org.apache.catalina.cluster.mcast.McastService"
>
> mcastAddr="228.0.0.4"
>
> mcastPort="45564"
>
> mcastFrequency="500"
>
> mcastDropTime="3000"/>
>
>
>
> <Receiver
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationListener"
>
> tcpListenAddress="auto"
>
> tcpListenPort="4001"
>
> tcpSelectorTimeout="100"
>
> tcpThreadCount="2"/>
>
>
>
> <Sender
>
>
>
> className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
>
> replicationMode="pooled"
>
> ackTimeout="15000"
>
> waitForAck="true"/>
>
>
>
> <Valve
>
> className="org.apache.catalina.cluster.tcp.ReplicationValve"
>
>
>
> filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*
> \.txt;"/>
>
>
>
> <Deployer
>
> className="org.apache.catalina.cluster.deploy.FarmWarDeployer"
>
> tempDir="/tmp/war-temp/"
>
> deployDir="/tmp/war-deploy/"
>
> watchDir="/tmp/war-listen/"
>
> watchEnabled="false"/>
>
>
>
> <ClusterListener
>
> className="org.apache.catalina.cluster.session.ClusterSessionListener"
> />
>
> </Cluster>
>
>
>
>
>
>
>
>
>
> <!-- Normally, users must authenticate themselves to each
> web app
>
> individually. Uncomment the following entry if you
> would like
>
> a user to be authenticated the first time they encounter a
>
> resource protected by a security constraint, and then
> have that
>
> user identity maintained across *all* web applications
> contained
> in this virtual host. -->
>
> <!--
>
> <Valve
> className="org.apache.catalina.authenticator.SingleSignOn" />
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> -->
>
> <!--
>
> <Valve className="org.apache.catalina.valves.AccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> <!-- Access log processes all requests for this virtual
> host. By
>
> default, log files are created in the "logs" directory
> relative
> to
>
> $CATALINA_HOME. If you wish, you can specify a different
>
> directory with the "directory" attribute. Specify
> either a
> relative
>
> (to $CATALINA_HOME) or absolute path to the desired
> directory.
>
> This access log implementation is optimized for maximum
> performance,
>
> but is hardcoded to support only the "common" and
> "combined"
>
> patterns.
>
> -->
>
> <!--
>
> <Valve
>
> className="org.apache.catalina.valves.FastCommonAccessLogValve"
>
> directory="logs" prefix="localhost_access_log."
>
> suffix=".txt"
>
> pattern="common" resolveHosts="false"/>
>
> -->
>
>
>
> </Host>
>
>
>
> </Engine>
>
>
>
> </Service>
>
>
>
> </Server>
>
>
>
> ===========================================================
>
>
>
> I appreciate your prompt help on this, since this is a very critical
> application that is live at the moment. Please email me for any
> clarifications.
>
>
>
> Thanks and best regards,
>
> Dilan
>
>
>
>
>