Posted to users@tomcat.apache.org by Toni Menendez Lopez <to...@gmail.com> on 2009/03/11 14:03:29 UTC

Problems with LoadBalancing

Hello everybody,

I have following architecture :

2 Server with Apache and Tomcat

Versions :
APACHE
-------------
httpd -v
Server version: Apache/2.0.52
Server built:   May 24 2006 11:45:06

TOMCAT
-------------
./version.sh
Using JRE_HOME:       /opt/jdk1.5.0_10
Server version: Apache Tomcat/5.5.20
Server built:   Sep 12 2006 10:09:20
Server number:  5.5.20.0
OS Name:        Linux
OS Version:     2.6.9-55.0.2.ELsmp
Architecture:   i386
JVM Version:    1.5.0_10-b03
JVM Vendor:     Sun Microsystems Inc.


I am doing load balancing between both with JK, with a scenario of 50
requests per session (approx.) and 500 reqxseg (approx.).

The issue is the following: when I shut down the passive server, I see a
drop in my reqxseg, and the requests that were managed by the passive
server get stuck for a long time, like 5 min.

New requests after shutting down the passive server are processed well.

But the question is whether there is a way to reduce the time these
requests stay stuck.

This is my worker :

# izonetv LoadBalancer Definition
worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
worker.izonetv.method=Session
worker.izonetv.retries=1
worker.izonetv.sticky_session=True
#worker.izonetv.sticky_session_force=1
worker.izonetv.type=lb

# izonetv-mifeas01_data Node Definition
worker.izonetv-mifeas01_data.connect_timeout=10000
worker.izonetv-mifeas01_data.fail_on_status=404
worker.izonetv-mifeas01_data.host=mifeas01_data
worker.izonetv-mifeas01_data.lbfactor=1
worker.izonetv-mifeas01_data.port=8009
worker.izonetv-mifeas01_data.reply_timeout=30000
worker.izonetv-mifeas01_data.type=ajp13

# izonetv-mifeas02_data Node Definition
worker.izonetv-mifeas02_data.connect_timeout=10000
worker.izonetv-mifeas02_data.fail_on_status=404
worker.izonetv-mifeas02_data.host=mifeas02_data
worker.izonetv-mifeas02_data.lbfactor=1
worker.izonetv-mifeas02_data.port=8009
worker.izonetv-mifeas02_data.reply_timeout=30000
worker.izonetv-mifeas02_data.type=ajp13
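A few worker settings bound how long mod_jk waits on a dead backend before failing over. This is a hedged sketch, not a tested recipe: the parameter names come from the mod_jk 1.2.x workers.properties reference, the values are illustrative, and whether they help depends on how the node dies:

```properties
# Illustrative failover tuning (values are examples, not recommendations)
# Give up on an unresponsive AJP socket after 10 seconds
worker.izonetv-mifeas01_data.socket_timeout=10
worker.izonetv-mifeas02_data.socket_timeout=10
# How long a member stays in error state before mod_jk retries it (seconds)
worker.izonetv.recover_time=60
```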

Re: Problems with LoadBalancing

Posted by Toni Menendez Lopez <to...@gmail.com>.
OK, now it is all clear for me.

So it is better not to use sticky_session_force=True; that is my current
configuration anyway.

Now the question is:

If I don't have sticky_session_force=True, what happens to the requests of
a session whose server has, for whatever reason, been switched off? I mean,
the worker that was handling this session no longer exists, so it is
impossible to continue with the same SESSIONID. In this case, another worker
will try to handle the request, won't it? Will it change the SESSIONID to a
new SESSION ID?

For me, this is the case in which my requests have a long delay.

Maybe I can run a test with a few requests, shut down the server in the
middle with DEBUG enabled in mod_jk, and try to read the traces. That way,
can I see where the delays are?

What do you think?
Thank you very much, this is very, very helpful for me.


2009/3/13 Rainer Jung <ra...@kippdata.de>

> On 13.03.2009 14:08, Toni Menendez Lopez wrote:
>
>> The behaviour is the one that I explained in the first mail: when I
>> stop one of the servers, I have very large delays responding to the
>> requests of the sessions that were managed by this server.
>>
>
> OK.
>
> After reading the documentation I think the problem is related to the
>> parameter #worker.izonetv.sticky_session_force=1; if it is not
>> commented out, I am not able to launch any call.
>>
>
> Hmm, above you say you have huge delays, here you say you are not able to
> launch any call.
>
> Let me first explain stickyness:
>
> A request that is forwarded by mod_jk to Tomcat can carry a session id.
> A session id can be part of a request either via a so-called session
> cookie, named JSESSIONID, or appended to the end of the URL in the form
> ";jsessionid=...".
>
> Now how does stickyness work:
>
> - you set the so called jvmRoute in server.xml of your Tomcats. Each Tomcat
> gets a different jvmRoute. Say you have jvmRoute node1 and node2.
>
> - Tomcat automatically adds the jvmRoute to the end of each session id,
> whenever it creates a session. The jvmRoute is separated from the rest of
> the id by a dot ".".
>
> - When a mod_jk load balancer operates sticky and has to forward a
> request that contains a session id, it looks for a dot in this id; if
> it finds one, it takes everything after the dot as the name of the
> backend. The load balancer then looks for a member worker whose name is
> equal to this jvmRoute. In the above example that would've been node1 or
> node2.
>
> By default any load balancer in mod_jk tries to be sticky, but if it either
> can't find the correct worker, or this worker is in error, it chooses another
> worker. If you set sticky_session_force, then you tell mod_jk that it should
> not try another worker in this case, and instead return an error.
>
> From your previously sent mod_jk log file we can see that your JBoss sets
> a session cookie for the root path "/". That means this cookie will be sent
> for every request to this host. Since it is a session cookie, this is a good
> candidate for disaster: when you now switch the application, but it
> is served by the same host, the browser sends the JBoss session cookie,
> although that application will not know this session.
>
> With sticky_session_force set to true (not the default), when JBoss sends
> the redirect to the Tomcat webapp, the browser will send a request for this
> Tomcat webapp, but will also send the session cookie from JBoss, because
> JBoss set the cookie path to "/". Furthermore, JBoss included a node name in
> the session id, and the load balancer that handles the forwarding to Tomcat
> doesn't know about that node, so it can't preserve strict stickyness.
>
> Usually you don't want sticky_session_force.
>
> Normally I comment this parameter out, otherwise my service does not work,
>>
>
> See above.
>
> and if I comment it out, I think that for requests of a lost session
>> mod_jk tries to go to another worker; it does not send any error but
>> takes a very long delay.
>>
>
> No it didn't, in the error log you sent previously. It immediately returned
> an error because the request was handled with forced stickyness, and the
> worker name given by the session id didn't exist.
>
> So, for this reason I am trying to investigate the problem with this
>> parameter.
>> So, focusing on this parameter: the log that I sent to you is with the
>> parameter commented out, and I found a strange thing. Let me explain:
>> My request is the following: http://159.23.98.22/cdp-fe/Trigger.do?.....
>> When I send this to Apache, mod_jk redirects it to my JBoss application
>> (MCDP worker), and my JBoss application redirects this link to
>> http://159.23.98.22/CDP311/......
>> But now mod_jk, when it receives this second request, tries to send it
>> to the MCDP worker again and not to the IZONETV worker.
>>
>
> The log shows that mod_jk tries to send it via izonetv:
>
> It says:
>
> - Found a wildchar match '/CDP311/*=izonetv'
> - Into handler jakarta-servlet worker=izonetv
> - found a worker izonetv
> - Service error=0 for worker=izonetv
>
> But it fails, because sticky_session_force was activated
>
> - service sticky_session=1 id='oK+zmQoPUFefT2vcqTSagg**.MCDP-mifeas02_data'
>
> and it can't find the worker MCDP-mifeas02_data as part of izonetv:
>
> - searching worker for session route MCDP-mifeas02_data
>
>
> Any idea why ?
>> Sorry about the updating of mod_jk, but it is a closed platform and I am
>> not able to update mod_jk.
>>
>
> That's very bad. Someone should be able to update it and I do recommend
> that (although in this case it wouldn't solve your problem).
>
> Thanks again,
>> Toni.
>>
>
> Regards,
>
> Rainer
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: Problems with LoadBalancing

Posted by Rainer Jung <ra...@kippdata.de>.
On 13.03.2009 14:08, Toni Menendez Lopez wrote:
> The behaviour is the one that I explained in the first mail: when I
> stop one of the servers, I have very large delays responding to the
> requests of the sessions that were managed by this server.

OK.

> After reading the documentation I think the problem is related to the
> parameter #worker.izonetv.sticky_session_force=1; if it is not
> commented out, I am not able to launch any call.

Hmm, above you say you have huge delays, here you say you are not able 
to launch any call.

Let me first explain stickyness:

A request that is forwarded by mod_jk to Tomcat can carry a session id.
A session id can be part of a request either via a so-called session
cookie, named JSESSIONID, or appended to the end of the URL in the form
";jsessionid=...".

Now how does stickyness work:

- you set the so called jvmRoute in server.xml of your Tomcats. Each 
Tomcat gets a different jvmRoute. Say you have jvmRoute node1 and node2.

- Tomcat automatically adds the jvmRoute to the end of each session id, 
whenever it creates a session. The jvmRoute is separated from the rest 
of the id by a dot ".".
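For reference, the jvmRoute is an attribute of the Engine element in Tomcat's server.xml. A minimal sketch (node1 is an example name; in the worker configuration discussed in this thread, the route would have to equal a member worker name such as izonetv-mifeas01_data):

```xml
<!-- server.xml on the first Tomcat; the second would use jvmRoute="node2" -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
  <!-- Host, Realm, etc. unchanged -->
</Engine>
```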

- When a mod_jk load balancer operates sticky and has to forward a
request that contains a session id, it looks for a dot in this id; if
it finds one, it takes everything after the dot as the name of the
backend. The load balancer then looks for a member worker whose name
is equal to this jvmRoute. In the above example that would've been
node1 or node2.

By default any load balancer in mod_jk tries to be sticky, but if it
either can't find the correct worker, or this worker is in error, it
chooses another worker. If you set sticky_session_force, then you tell
mod_jk that it should not try another worker in this case, and instead
return an error.
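The route lookup and fallback described above can be sketched as follows (illustrative Python, not mod_jk's actual C implementation; here the route is taken as everything after the last dot):

```python
def route_from_session_id(session_id):
    """Return the jvmRoute suffix of a session id, or None if there is none.

    Sketch of the rule described above: everything after the last "."
    in the session id is treated as the backend (jvmRoute) name.
    """
    head, sep, route = session_id.rpartition(".")
    return route if sep else None


def pick_worker(session_id, members, sticky_session_force=False):
    """Pick a load balancer member for a request, mimicking stickyness.

    members: list of member worker names, e.g. ["node1", "node2"].
    """
    route = route_from_session_id(session_id) if session_id else None
    if route in members:
        return route          # sticky hit: forward to the session's node
    if sticky_session_force:
        return None           # strict stickyness: fail instead of retrying
    return members[0]         # otherwise fall back to another member


# A session created by the MCDP (JBoss) balancer carries a route unknown
# to the izonetv members, so without force it silently falls back:
print(pick_worker("oK+zmQoPUFefT2vcqTSagg**.MCDP-mifeas02_data",
                  ["izonetv-mifeas01_data", "izonetv-mifeas02_data"]))
# -> izonetv-mifeas01_data
```

With sticky_session_force=True the same call would yield None, i.e. an immediate error instead of a fallback.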

From your previously sent mod_jk log file we can see that your JBoss
sets a session cookie for the root path "/". That means this cookie
will be sent for every request to this host. Since it is a session
cookie, this is a good candidate for disaster: when you now
switch the application, but it is served by the same host, the browser
sends the JBoss session cookie, although that application will not know
this session.

With sticky_session_force set to true (not the default), when JBoss
sends the redirect to the Tomcat webapp, the browser will send a request
for this Tomcat webapp, but will also send the session cookie from
JBoss, because JBoss set the cookie path to "/". Furthermore, JBoss
included a node name in the session id, and the load balancer that
handles the forwarding to Tomcat doesn't know about that node, so it
can't preserve strict stickyness.

Usually you don't want sticky_session_force.

> Normally I comment this parameter out, otherwise my service does not work,

See above.

> and if I comment it out, I think that for requests of a lost session
> mod_jk tries to go to another worker; it does not send any error but
> takes a very long delay.

No it didn't, in the error log you sent previously. It immediately
returned an error because the request was handled with forced
stickyness, and the worker name given by the session id didn't exist.

> So, for this reason I am trying to investigate the problem with this
> parameter.
> So, focusing on this parameter: the log that I sent to you is with the
> parameter commented out, and I found a strange thing. Let me explain:
>   My request is the following: http://159.23.98.22/cdp-fe/Trigger.do?.....
> When I send this to Apache, mod_jk redirects it to my JBoss application
> (MCDP worker), and my JBoss application redirects this link to
> http://159.23.98.22/CDP311/......
> But now mod_jk, when it receives this second request, tries to send it
> to the MCDP worker again and not to the IZONETV worker.

The log shows that mod_jk tries to send it via izonetv:

It says:

- Found a wildchar match '/CDP311/*=izonetv'
- Into handler jakarta-servlet worker=izonetv
- found a worker izonetv
- Service error=0 for worker=izonetv

But it fails, because sticky_session_force was activated

- service sticky_session=1 id='oK+zmQoPUFefT2vcqTSagg**.MCDP-mifeas02_data'

and it can't find the worker MCDP-mifeas02_data as part of izonetv:

- searching worker for session route MCDP-mifeas02_data


> Any idea why ?
> Sorry about the updating of mod_jk, but it is a closed platform and I am
> not able to update mod_jk.

That's very bad. Someone should be able to update it and I do recommend
that (although in this case it wouldn't solve your problem).

> Thanks again,
> Toni.

Regards,

Rainer

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Problems with LoadBalancing

Posted by Toni Menendez Lopez <to...@gmail.com>.
Sorry, let me explain.

The behaviour is the one that I explained in the first mail: when I
stop one of the servers, I have very large delays responding to the requests
of the sessions that were managed by this server.

After reading the documentation I think the problem is related to the parameter
#worker.izonetv.sticky_session_force=1; if it is not commented out, I am not
able to launch any call.

Normally I comment this parameter out, otherwise my service does not work; and
if I comment it out, I think that for requests of a lost session mod_jk tries
to go to another worker, and it does not send any error but takes a very long
delay.

So, for this reason I am trying to investigate the problem with this
parameter.

So, focusing on this parameter: the log that I sent to you is with the
parameter commented out, and I found a strange thing. Let me explain:

My request is the following:
http://159.23.98.22/cdp-fe/Trigger.do?.....

When I send this to Apache, mod_jk redirects it to my JBoss application
(MCDP worker), and my JBoss application redirects this link to

http://159.23.98.22/CDP311/......

But now mod_jk, when it receives this second request, tries to send it to
the MCDP worker again and not to the IZONETV worker.

Any idea why ?


Sorry about the updating of mod_jk, but it is a closed platform and I am not
able to update mod_jk.

Thanks again,

Toni.




2009/3/13 Rainer Jung <ra...@kippdata.de>

>  On 13.03.2009 10:54, Toni Menendez Lopez wrote:
>
>> Here is the trace with the error, from mod_jk.log in DEBUG mode; I
>> suspect where the error is...
>> The thing is that I have 2 workers, one for JBoss (MCDP) and one
>> for Tomcat (izonetv), and in the call I do a redirect from JBoss to
>> Tomcat; it seems that for the second request, for Tomcat, mod_jk is
>> still trying to send the call to the MCDP worker.
>> Any idea how I can fix the configuration?
>> This is my worker :
>> # Automatically Generated workers.properties file
>> worker.list=MCDP,izonetv,status
>> # MCDP LoadBalancer Definition
>> worker.MCDP.balance_workers=MCDP-mifeas01_data,MCDP-mifeas02_data
>> worker.MCDP.method=Session
>> worker.MCDP.retries=1
>> worker.MCDP.sticky_session=1
>> worker.MCDP.sticky_session_force=1
>> worker.MCDP.type=lb
>> # MCDP-mifeas01_data Node Definition
>> worker.MCDP-mifeas01_data.connect_timeout=10000
>> worker.MCDP-mifeas01_data.fail_on_status=404
>> worker.MCDP-mifeas01_data.host=mifeas01_data
>> worker.MCDP-mifeas01_data.lbfactor=1
>> worker.MCDP-mifeas01_data.port=8109
>> worker.MCDP-mifeas01_data.reply_timeout=30000
>> worker.MCDP-mifeas01_data.type=ajp13
>> # MCDP-mifeas02_data Node Definition
>> worker.MCDP-mifeas02_data.connect_timeout=10000
>> worker.MCDP-mifeas02_data.fail_on_status=404
>> worker.MCDP-mifeas02_data.host=mifeas02_data
>> worker.MCDP-mifeas02_data.lbfactor=1
>> worker.MCDP-mifeas02_data.port=8109
>> worker.MCDP-mifeas02_data.reply_timeout=30000
>> worker.MCDP-mifeas02_data.type=ajp13
>> # izonetv LoadBalancer Definition
>> worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
>> worker.izonetv.method=Session
>> worker.izonetv.retries=1
>> worker.izonetv.sticky_session=True
>> worker.izonetv.sticky_session_force=1
>> worker.izonetv.type=lb
>> # izonetv-mifeas01_data Node Definition
>> worker.izonetv-mifeas01_data.connect_timeout=10000
>> worker.izonetv-mifeas01_data.fail_on_status=404
>> worker.izonetv-mifeas01_data.host=mifeas01_data
>> worker.izonetv-mifeas01_data.lbfactor=1
>> worker.izonetv-mifeas01_data.port=8009
>> worker.izonetv-mifeas01_data.reply_timeout=30000
>> worker.izonetv-mifeas01_data.type=ajp13
>> # izonetv-mifeas02_data Node Definition
>> worker.izonetv-mifeas02_data.connect_timeout=10000
>> worker.izonetv-mifeas02_data.fail_on_status=404
>> worker.izonetv-mifeas02_data.host=mifeas02_data
>> worker.izonetv-mifeas02_data.lbfactor=1
>> worker.izonetv-mifeas02_data.port=8009
>> worker.izonetv-mifeas02_data.reply_timeout=30000
>> worker.izonetv-mifeas02_data.type=ajp13
>> # Status worker for managing load balancer
>> worker.status.type=status
>> Toni.
>>
>
> You are confusing me. I thought you wanted to discuss a problem related to
> shutting down a passive cluster node. Now you are talking about a problem of
> whether a request gets sent to the right Tomcat.
>
> The log says that the redirect goes to a URL starting with
> http://159.23.98.22/CDP311/...
>
> and everything that matches /CDP311/ is mounted to izonetv. So mod_jk tries
> to send it via izonetv, but all members of this worker are in error state. We
> don't know why, because those nodes seem to have gone into error earlier and
> you didn't include those parts of the log file.
>
> BTW: Please first update your mod_jk, because 1.2.23 is 2 years old, and
> when trying to help it's much easier to assume the latest stable behaviour.
>
>
> Regards,
>
> Rainer
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: Problems with LoadBalancing

Posted by Rainer Jung <ra...@kippdata.de>.
On 13.03.2009 10:54, Toni Menendez Lopez wrote:
> Here is the trace with the error, from mod_jk.log in DEBUG mode; I
> suspect where the error is...
> The thing is that I have 2 workers, one for JBoss (MCDP) and one
> for Tomcat (izonetv), and in the call I do a redirect from JBoss to
> Tomcat; it seems that for the second request, for Tomcat, mod_jk is
> still trying to send the call to the MCDP worker.
> Any idea how I can fix the configuration?
> This is my worker :
> # Automatically Generated workers.properties file
> worker.list=MCDP,izonetv,status
> # MCDP LoadBalancer Definition
> worker.MCDP.balance_workers=MCDP-mifeas01_data,MCDP-mifeas02_data
> worker.MCDP.method=Session
> worker.MCDP.retries=1
> worker.MCDP.sticky_session=1
> worker.MCDP.sticky_session_force=1
> worker.MCDP.type=lb
> # MCDP-mifeas01_data Node Definition
> worker.MCDP-mifeas01_data.connect_timeout=10000
> worker.MCDP-mifeas01_data.fail_on_status=404
> worker.MCDP-mifeas01_data.host=mifeas01_data
> worker.MCDP-mifeas01_data.lbfactor=1
> worker.MCDP-mifeas01_data.port=8109
> worker.MCDP-mifeas01_data.reply_timeout=30000
> worker.MCDP-mifeas01_data.type=ajp13
> # MCDP-mifeas02_data Node Definition
> worker.MCDP-mifeas02_data.connect_timeout=10000
> worker.MCDP-mifeas02_data.fail_on_status=404
> worker.MCDP-mifeas02_data.host=mifeas02_data
> worker.MCDP-mifeas02_data.lbfactor=1
> worker.MCDP-mifeas02_data.port=8109
> worker.MCDP-mifeas02_data.reply_timeout=30000
> worker.MCDP-mifeas02_data.type=ajp13
> # izonetv LoadBalancer Definition
> worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
> worker.izonetv.method=Session
> worker.izonetv.retries=1
> worker.izonetv.sticky_session=True
> worker.izonetv.sticky_session_force=1
> worker.izonetv.type=lb
> # izonetv-mifeas01_data Node Definition
> worker.izonetv-mifeas01_data.connect_timeout=10000
> worker.izonetv-mifeas01_data.fail_on_status=404
> worker.izonetv-mifeas01_data.host=mifeas01_data
> worker.izonetv-mifeas01_data.lbfactor=1
> worker.izonetv-mifeas01_data.port=8009
> worker.izonetv-mifeas01_data.reply_timeout=30000
> worker.izonetv-mifeas01_data.type=ajp13
> # izonetv-mifeas02_data Node Definition
> worker.izonetv-mifeas02_data.connect_timeout=10000
> worker.izonetv-mifeas02_data.fail_on_status=404
> worker.izonetv-mifeas02_data.host=mifeas02_data
> worker.izonetv-mifeas02_data.lbfactor=1
> worker.izonetv-mifeas02_data.port=8009
> worker.izonetv-mifeas02_data.reply_timeout=30000
> worker.izonetv-mifeas02_data.type=ajp13
> # Status worker for managing load balancer
> worker.status.type=status
> Toni.

You are confusing me. I thought you wanted to discuss a problem related
to shutting down a passive cluster node. Now you are talking about a
problem of whether a request gets sent to the right Tomcat.

The log says that the redirect goes to a URL starting with 
http://159.23.98.22/CDP311/...

and everything that matches /CDP311/ is mounted to izonetv. So mod_jk
tries to send it via izonetv, but all members of this worker are in error
state. We don't know why, because those nodes seem to have gone into
error earlier and you didn't include those parts of the log file.

BTW: Please first update your mod_jk, because 1.2.23 is 2 years old, and 
when trying to help it's much easier to assume the latest stable behaviour.

Regards,

Rainer

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Problems with LoadBalancing

Posted by Toni Menendez Lopez <to...@gmail.com>.
Here is the trace with the error, from mod_jk.log in DEBUG mode; I
suspect where the error is...

The thing is that I have 2 workers, one for JBoss (MCDP) and one for
Tomcat (izonetv), and in the call I do a redirect from JBoss to
Tomcat; it seems that for the second request, for Tomcat, mod_jk is
still trying to send the call to the MCDP worker.

Any idea how I can fix the configuration?

This is my worker :

# Automatically Generated workers.properties file
worker.list=MCDP,izonetv,status
# MCDP LoadBalancer Definition
worker.MCDP.balance_workers=MCDP-mifeas01_data,MCDP-mifeas02_data
worker.MCDP.method=Session
worker.MCDP.retries=1
worker.MCDP.sticky_session=1
worker.MCDP.sticky_session_force=1
worker.MCDP.type=lb
# MCDP-mifeas01_data Node Definition
worker.MCDP-mifeas01_data.connect_timeout=10000
worker.MCDP-mifeas01_data.fail_on_status=404
worker.MCDP-mifeas01_data.host=mifeas01_data
worker.MCDP-mifeas01_data.lbfactor=1
worker.MCDP-mifeas01_data.port=8109
worker.MCDP-mifeas01_data.reply_timeout=30000
worker.MCDP-mifeas01_data.type=ajp13
# MCDP-mifeas02_data Node Definition
worker.MCDP-mifeas02_data.connect_timeout=10000
worker.MCDP-mifeas02_data.fail_on_status=404
worker.MCDP-mifeas02_data.host=mifeas02_data
worker.MCDP-mifeas02_data.lbfactor=1
worker.MCDP-mifeas02_data.port=8109
worker.MCDP-mifeas02_data.reply_timeout=30000
worker.MCDP-mifeas02_data.type=ajp13
# izonetv LoadBalancer Definition
worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
worker.izonetv.method=Session
worker.izonetv.retries=1
worker.izonetv.sticky_session=True
worker.izonetv.sticky_session_force=1
worker.izonetv.type=lb
# izonetv-mifeas01_data Node Definition
worker.izonetv-mifeas01_data.connect_timeout=10000
worker.izonetv-mifeas01_data.fail_on_status=404
worker.izonetv-mifeas01_data.host=mifeas01_data
worker.izonetv-mifeas01_data.lbfactor=1
worker.izonetv-mifeas01_data.port=8009
worker.izonetv-mifeas01_data.reply_timeout=30000
worker.izonetv-mifeas01_data.type=ajp13
# izonetv-mifeas02_data Node Definition
worker.izonetv-mifeas02_data.connect_timeout=10000
worker.izonetv-mifeas02_data.fail_on_status=404
worker.izonetv-mifeas02_data.host=mifeas02_data
worker.izonetv-mifeas02_data.lbfactor=1
worker.izonetv-mifeas02_data.port=8009
worker.izonetv-mifeas02_data.reply_timeout=30000
worker.izonetv-mifeas02_data.type=ajp13
# Status worker for managing load balancer
worker.status.type=status

Toni.




2009/3/12 Rainer Jung <ra...@kippdata.de>

> On 12.03.2009 17:45, Toni Menendez Lopez wrote:
>
>> Rainer,
>>
>> May be my problem is due to the fact I have the line :
>>
>> #worker.izonetv.sticky_session_force=1
>>
>> commented.
>>
>> The reason is that if I uncomment this line I have this error in Apache :
>>
>> [Thu Mar 12 17:31:37 2009][23548:6496] [error] service::jk_lb_worker.c
>> (1144): All tomcat instances failed, no more workers left for recovery
>>
>> Any idea what can be the problem ?
>>
>
> Set your JkLogLevel to info. Whenever an [error] occurs, look for the
> [info] messages before the error that have the same [pid:tid] numbers.
> Those will indicate why the request failed.
>
> You can post those lines if you don't understand them.
>
> Most of the time such problems come from webapps or backend sizing that
> can't cope with the load. In these cases we always suggest taking 2 or 3
> thread dumps, each a couple of seconds apart, and having a look
> at them.
>
> On a Unix/Linux system you can take a thread dump using "kill -QUIT"
> against your process (it will not end Tomcat; it will simply write some
> information to catalina.out and then continue normal processing, at least if
> you are using JVM 1.4.2 or newer).
>
>
> Regards,
>
> Rainer
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: Problems with LoadBalancing

Posted by Rainer Jung <ra...@kippdata.de>.
On 12.03.2009 17:45, Toni Menendez Lopez wrote:
> Rainer,
>
> Maybe my problem is due to the fact that I have the line:
>
> #worker.izonetv.sticky_session_force=1
>
> commented.
>
> The reason is that if I uncomment this line I have this error in Apache :
>
> [Thu Mar 12 17:31:37 2009][23548:6496] [error] service::jk_lb_worker.c
> (1144): All tomcat instances failed, no more workers left for recovery
>
> Any idea what can be the problem ?

Set your JkLogLevel to info. Whenever an [error] occurs, look for the
[info] messages before the error that have the same [pid:tid] numbers.
Those will indicate why the request failed.

You can post those lines if you don't understand them.
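For reference, the relevant directives in the Apache configuration that loads mod_jk look like this (the paths are illustrative; adjust them to your layout):

```apache
# httpd.conf (or wherever mod_jk is configured)
JkWorkersFile /etc/httpd/conf/workers.properties
JkLogFile     /var/log/httpd/mod_jk.log
JkLogLevel    info
```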

Most of the time such problems come from webapps or backend sizing
that can't cope with the load. In these cases we always suggest taking
2 or 3 thread dumps, each a couple of seconds apart, and having a look
at them.

On a Unix/Linux system you can take a thread dump using "kill -QUIT"
against your process (it will not end Tomcat; it will simply write some
information to catalina.out and then continue normal processing, at least
if you are using JVM 1.4.2 or newer).
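A sketch of that procedure on Linux (the pgrep pattern is an assumption; adjust it to however your Tomcat JVM shows up in the process list):

```shell
# Find the Tomcat JVM's pid; the Bootstrap class name is the usual main
# class for Tomcat 5.5, but verify with "ps" on your own system.
PID=$(pgrep -f 'org.apache.catalina.startup.Bootstrap' | head -n 1)

if [ -n "$PID" ]; then
  # Take three thread dumps a few seconds apart. Each SIGQUIT makes the
  # JVM append a full thread dump to catalina.out without stopping it.
  for i in 1 2 3; do
    kill -QUIT "$PID"
    sleep 5
  done
else
  echo "no Tomcat process found" >&2
fi
```

Comparing the three dumps shows which threads stay stuck in the same place across samples.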

Regards,

Rainer

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Problems with LoadBalancing

Posted by Toni Menendez Lopez <to...@gmail.com>.
Rainer,

Maybe my problem is due to the fact that I have the line:

#worker.izonetv.sticky_session_force=1

commented.

The reason is that if I uncomment this line I have this error in Apache :

[Thu Mar 12 17:31:37 2009][23548:6496] [error] service::jk_lb_worker.c
(1144): All tomcat instances failed, no more workers left for recovery

Any idea what can be the problem ?

Toni.

2009/3/11 Toni Menendez Lopez <to...@gmail.com>

>
>
> 2009/3/11 Rainer Jung <ra...@kippdata.de>
>
>> On 11.03.2009 14:03, Toni Menendez Lopez wrote:
>>
>>> Hello everybody,
>>> I have following architecture :
>>> 2 Server with Apache and Tomcat
>>> Versions :
>>> APACHE
>>> -------------
>>> httpd -v
>>> Server version: Apache/2.0.52
>>> Server built:   May 24 2006 11:45:06
>>> TOMCAT
>>> -------------
>>> ./version.sh
>>> Using JRE_HOME:       /opt/jdk1.5.0_10
>>> Server version: Apache Tomcat/5.5.20
>>> Server built:   Sep 12 2006 10:09:20
>>> Server number:  5.5.20.0
>>> OS Name:        Linux
>>> OS Version:     2.6.9-55.0.2.ELsmp
>>> Architecture:   i386
>>> JVM Version:    1.5.0_10-b03
>>> JVM Vendor:     Sun Microsystems Inc.
>>>
>>
>> What's your mod_jk version?
>
>
>   -->JK Version: mod_jk/1.2.23
>
>>
>>
>> I am doing load balancing between both with JK, with a scenario of 50
>>> requests per session (approx.), and 500 reqxseg (approx.).
>>>
>>
>> What is reqxseg?
>
>
> --> HTTP requests per second.
>
>>
>>
>> The thing is the following, when I shutdown the passive server, I have a
>>>
>>
>> What is a "passive" server? I thought you do load balancing?
>> What do you mean by "shutdown"?
>>
>
> --> I have a Red Hat cluster on both servers which gives me a virtual IP
> for both servers. This virtual IP belongs to one of the servers and is the
> entrance point for my architecture, so the requests are received by only
> one Apache, which delivers the requests to the Tomcats of both servers.
>
> --> The passive server is the one which does not have the IP, so the
> apache of this server is just not receiving requests.
>
> --> Shutting down is just switching off the server (just simulating a crash
> of the server).
>
>   drop in my reqxseg, and the requests that were managed by the
>>> passive server get stuck for a long time, like 5 min.
>>> New requests after shutting down the passive server are processed well.
>>> But the question is whether there is a way to reduce the time these
>>> requests are stuck.
>>>
>>
>> The answer depends on your answers to my questions above.
>>
>> This is my worker :
>>> # izonetv LoadBalancer Definition
>>>
>>> worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
>>> worker.izonetv.method=Session
>>> worker.izonetv.retries=1
>>> worker.izonetv.sticky_session=True
>>> #worker.izonetv.sticky_session_force=1
>>> worker.izonetv.type=lb
>>> # izonetv-mifeas01_data Node Definition
>>> worker.izonetv-mifeas01_data.connect_timeout=10000
>>> worker.izonetv-mifeas01_data.fail_on_status=404
>>>
>>
>> You don't want to fail on 404.
>
> --> Sorry, what does this parameter mean? I just copied it from another
> configuration.
>
>>
>>
>> worker.izonetv-mifeas01_data.host=mifeas01_data
>>> worker.izonetv-mifeas01_data.lbfactor=1
>>> worker.izonetv-mifeas01_data.port=8009
>>> worker.izonetv-mifeas01_data.reply_timeout=30000
>>> worker.izonetv-mifeas01_data.type=ajp13
>>> # izonetv-mifeas02_data Node Definition
>>> worker.izonetv-mifeas02_data.connect_timeout=10000
>>> worker.izonetv-mifeas02_data.fail_on_status=404
>>>
>>
>> You don't want to fail on 404.
>>
>> worker.izonetv-mifeas02_data.host=mifeas02_data
>>> worker.izonetv-mifeas02_data.lbfactor=1
>>> worker.izonetv-mifeas02_data.port=8009
>>> worker.izonetv-mifeas02_data.reply_timeout=30000
>>> worker.izonetv-mifeas02_data.type=ajp13
>>>
>>
>> Regards,
>>
>> Rainer
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
>>
>

Re: Problems with LoadBalancing

Posted by Toni Menendez Lopez <to...@gmail.com>.
2009/3/11 Rainer Jung <ra...@kippdata.de>

> On 11.03.2009 14:03, Toni Menendez Lopez wrote:
>
>> Hello everybody,
>> I have following architecture :
>> 2 Server with Apache and Tomcat
>> Versions :
>> APACHE
>> -------------
>> httpd -v
>> Server version: Apache/2.0.52
>> Server built:   May 24 2006 11:45:06
>> TOMCAT
>> -------------
>> ./version.sh
>> Using JRE_HOME:       /opt/jdk1.5.0_10
>> Server version: Apache Tomcat/5.5.20
>> Server built:   Sep 12 2006 10:09:20
>> Server number:  5.5.20.0
>> OS Name:        Linux
>> OS Version:     2.6.9-55.0.2.ELsmp
>> Architecture:   i386
>> JVM Version:    1.5.0_10-b03
>> JVM Vendor:     Sun Microsystems Inc.
>>
>
> What's your mod_jk version?


--> JK Version: mod_jk/1.2.23

>
>
> I am doing load balancing between both with JK, with a scenario of 50
>> requests per session (approx.) and 500 reqxseg (approx.).
>>
>
> What is reqxseg?


--> HTTP requests per second.

>
>
> The thing is the following, when I shutdown the passive server, I have a
>>
>
> What is a "passive" server? I thought you do load balancing?
> What do you mean by "shutdown"?
>

--> I have a Red Hat cluster on both servers which gives me a virtual IP.
This virtual IP is held by one of the servers and is the entry point for my
architecture, so requests are received by only one Apache, which delivers
them to the Tomcats of both servers.

--> The passive server is the one which does not hold the IP, so the Apache
on that server is simply not receiving requests.

--> Shutting down means just switching off the server (simulating a crash
of the server).
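[Editor's note: when a node is switched off abruptly, mod_jk can wait a long
time on established AJP connections before declaring the worker dead. A
sketch of settings that tighten this detection, using directive names from
the mod_jk workers.properties reference; the values are illustrative
assumptions, and support for each directive depends on the mod_jk version in
use:]

```
# Sketch: faster failure detection for an abruptly killed node
# (illustrative values; verify against the workers.properties
# reference for your mod_jk version)
worker.izonetv-mifeas01_data.socket_timeout=10      # give up on an unresponsive socket after 10s
worker.izonetv-mifeas01_data.socket_keepalive=True  # let the OS detect dead peers on idle connections
worker.izonetv.recover_time=60                      # retry a failed worker after 60s
```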

 drop of my reqxseg, and the requests that were managed by the
>> passive server get stuck for a long time, like 5 min.
>> New requests after shutting down the passive server are processed fine.
>> But the question is whether there is a way to reduce the time that
>> requests are stuck.
>>
>
> The answer depends on your answer to my above questions.
>
> This is my worker :
>> # izonetv LoadBalancer Definition
>> worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
>> worker.izonetv.method=Session
>> worker.izonetv.retries=1
>> worker.izonetv.sticky_session=True
>> #worker.izonetv.sticky_session_force=1
>> worker.izonetv.type=lb
>> # izonetv-mifeas01_data Node Definition
>> worker.izonetv-mifeas01_data.connect_timeout=10000
>> worker.izonetv-mifeas01_data.fail_on_status=404
>>
>
> You don't want to fail on 404.

--> Sorry, what does this parameter mean? I just copied it from another
configuration.
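[Editor's note: per the mod_jk workers.properties documentation,
fail_on_status is a list of HTTP status codes that put the worker into an
error state when the backend returns them. A hedged sketch of the fix being
suggested here:]

```
# With 404 listed, a single request for a missing resource can take a
# healthy node out of rotation and trigger spurious failover. Remove
# the line, or restrict it to hard failures such as 503:
#worker.izonetv-mifeas01_data.fail_on_status=404
worker.izonetv-mifeas01_data.fail_on_status=503
```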

>
>
> worker.izonetv-mifeas01_data.host=mifeas01_data
>> worker.izonetv-mifeas01_data.lbfactor=1
>> worker.izonetv-mifeas01_data.port=8009
>> worker.izonetv-mifeas01_data.reply_timeout=30000
>> worker.izonetv-mifeas01_data.type=ajp13
>> # izonetv-mifeas02_data Node Definition
>> worker.izonetv-mifeas02_data.connect_timeout=10000
>> worker.izonetv-mifeas02_data.fail_on_status=404
>>
>
> You don't want to fail on 404.
>
> worker.izonetv-mifeas02_data.host=mifeas02_data
>> worker.izonetv-mifeas02_data.lbfactor=1
>> worker.izonetv-mifeas02_data.port=8009
>> worker.izonetv-mifeas02_data.reply_timeout=30000
>> worker.izonetv-mifeas02_data.type=ajp13
>>
>
> Regards,
>
> Rainer
>
>
>

Re: Problems with LoadBalancing

Posted by Rainer Jung <ra...@kippdata.de>.
On 11.03.2009 14:03, Toni Menendez Lopez wrote:
> Hello everybody,
> I have following architecture :
> 2 Server with Apache and Tomcat
> Versions :
> APACHE
> -------------
> httpd -v
> Server version: Apache/2.0.52
> Server built:   May 24 2006 11:45:06
> TOMCAT
> -------------
> ./version.sh
> Using JRE_HOME:       /opt/jdk1.5.0_10
> Server version: Apache Tomcat/5.5.20
> Server built:   Sep 12 2006 10:09:20
> Server number:  5.5.20.0
> OS Name:        Linux
> OS Version:     2.6.9-55.0.2.ELsmp
> Architecture:   i386
> JVM Version:    1.5.0_10-b03
> JVM Vendor:     Sun Microsystems Inc.

What's your mod_jk version?

> I am doing load balancing between both with JK, with a scenario of 50
> requests per session (approx.) and 500 reqxseg (approx.).

What is reqxseg?

> The thing is the following, when I shutdown the passive server, I have a

What is a "passive" server? I thought you do load balancing?
What do you mean by "shutdown"?

> drop of my reqxseg, and the requests that were managed by the
> passive server get stuck for a long time, like 5 min.
> New requests after shutting down the passive server are processed fine.
> But the question is whether there is a way to reduce the time that
> requests are stuck.

The answer depends on your answer to my above questions.

> This is my worker :
> # izonetv LoadBalancer Definition
> worker.izonetv.balance_workers=izonetv-mifeas01_data,izonetv-mifeas02_data
> worker.izonetv.method=Session
> worker.izonetv.retries=1
> worker.izonetv.sticky_session=True
> #worker.izonetv.sticky_session_force=1
> worker.izonetv.type=lb
> # izonetv-mifeas01_data Node Definition
> worker.izonetv-mifeas01_data.connect_timeout=10000
> worker.izonetv-mifeas01_data.fail_on_status=404

You don't want to fail on 404.

> worker.izonetv-mifeas01_data.host=mifeas01_data
> worker.izonetv-mifeas01_data.lbfactor=1
> worker.izonetv-mifeas01_data.port=8009
> worker.izonetv-mifeas01_data.reply_timeout=30000
> worker.izonetv-mifeas01_data.type=ajp13
> # izonetv-mifeas02_data Node Definition
> worker.izonetv-mifeas02_data.connect_timeout=10000
> worker.izonetv-mifeas02_data.fail_on_status=404

You don't want to fail on 404.

> worker.izonetv-mifeas02_data.host=mifeas02_data
> worker.izonetv-mifeas02_data.lbfactor=1
> worker.izonetv-mifeas02_data.port=8009
> worker.izonetv-mifeas02_data.reply_timeout=30000
> worker.izonetv-mifeas02_data.type=ajp13

Regards,

Rainer
