Posted to users@servicemix.apache.org by Kevin k <ke...@gmail.com> on 2008/01/24 18:38:12 UTC

jms limiting

A quick question or two about JMS providers:
Let's say a JMS queue has many, many messages in it (messages going in faster
than they are being processed).

1) How many messages are worked on at once, or to ask it another way, how
many threads does the JMS provider kick off by default?
2) Is this number changeable?
3) Assuming we have ActiveMQ set up correctly (load balanced, master/slave,
anything else?), are there any problems running the identical JMS
provider on several different ServiceMix instances to get load balancing?

Thanks in advance.
-Kevin


Re: jms limiting

Posted by Michal <ca...@yahoo.com>.
Instead of controlling the target endpoint (the CPU-intensive one), you could control the
endpoint that sends the message: JMS in this case. By default, jms:consumer
sends only one message at a time and waits for the response from the
target endpoint.
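
A rough sketch of that idea in plain JMS (this is not the servicemix-jms endpoint itself;
the broker URL and queue name below are made up) would look something like this - a
consumer that blocks on receive() and only asks for the next message after the current
one has been processed:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class OneAtATimeConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("backend.work");
        MessageConsumer consumer = session.createConsumer(queue);

        while (true) {
            // Block until the next message arrives; nothing else is pulled
            // off the queue until process() returns.
            Message message = consumer.receive();
            process(message);
        }
    }

    private static void process(Message message) {
        // The CPU-intensive backend work would go here.
    }
}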


Re: jms limiting

Posted by Kevin k <ke...@gmail.com>.

Bruce, thanks again for all of your answers; the light is starting to dawn on
me.
The information about thread pools is what I think I will need.

To clarify my problem (hopefully answering your questions):

The general problem I am trying to solve is asynchronous communication.
The external client wants to drop off an HTTP message and not wait for a
reply (other than an acknowledgment that ServiceMix received it).
The main reason for this is that the backend is a CPU/time-consuming process
and the client does not want to wait for the processing to finish.
This final backend process is a ServiceMix component.

Our solution for this was to put a JMS queue in the middle of the process,
so when the client posts a message, ServiceMix can simply enqueue it and
return to the client (very quickly).

This queue is then serviced by a ServiceMix jms-consumer endpoint, which
reads messages off the queue and sends them to the final ServiceMix endpoint
for processing.

Our concern is that if clients enqueue too many messages too quickly, then
ServiceMix will try to process too many of the messages at once. This will
cause the CPU on the single ServiceMix JVM to get overrun and leave no
resources available for the HTTP consumer to do its job.

We thought we could alleviate this in two ways (probably a combination of both
of them):
1) Limit the number of messages any one ServiceMix will work on at a time
(it looks like this can be done using the thread pools; see the sketch after this list).
2) Have another ServiceMix running on a separate box/JVM that is looking at
the same queues and processing them (which looks like it can be achieved by
clustering ActiveMQ).
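
For 1), the real knobs are in ServiceMix's thread-pool configuration that Bruce linked
to, but conceptually what we are after is the plain java.util.concurrent pattern below
(the pool size and queue depth are just made-up numbers): a fixed-size pool so only a
few backend tasks ever run at once, with a bounded queue so a burst of messages cannot
exhaust the JVM.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedBackendPool {

    // At most 4 backend tasks run concurrently and at most 100 wait in memory.
    // When both limits are hit, the submitting thread runs the task itself,
    // which slows the producer down instead of overrunning the JVM.
    private static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<Runnable>(100),
            new ThreadPoolExecutor.CallerRunsPolicy());

    public static void submit(Runnable backendTask) {
        POOL.execute(backendTask);
    }
}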


Is there another solution to the async problem and/or to limiting the
resources any single endpoint can use within ServiceMix?

I may also be totally off base; if so, please let me know.

Thanks again
-Kevin


Re: jms limiting

Posted by Bruce Snyder <br...@gmail.com>.
On Jan 24, 2008 6:52 PM, Kevin k <ke...@gmail.com> wrote:
>
> Thanks again for your answer.
>
> I think I'm going to back up and you can let me know if I'm on the wrong
> track or not.
>
> The scenario is I have a producer (http-consumer) that sends its message
> to a jms consumer which places the message on the queue.
>
> Then I have a jms producer (which is a servicemix endpoint) that reads the
> message from the queue and forwards it on to the ultimate backend processor,
> so my flow looks like this (leaving out the Jencks stuff):
>
> message ---> http consumer ---> jms consumer ---> jms queue <-- jms provider ---> (final consumer)
>              |--------------------------------- servicemix endpoints ---------------------------------|
> Everything is within one JVM (the servicemix jvm)
>
> Our main concern here is that since the final consumer is CPU/time intensive,
> we do not want it taking too many resources away from ServiceMix itself, so
> we want to
> a) make sure that ServiceMix will not kick off too many of the JMS
> provider/final consumer tasks, which would make the JVM/entire box too busy to
> let ServiceMix do what it should be doing

Well this is all a matter of configuration. Using the new endpoints
for servicemix-jms allows much more configuration:

http://servicemix.apache.org/servicemix-jms-new-endpoints.html

These new endpoints use the message listener container from the Spring
Framework, which is highly configurable. Combined with the ability to
configure the thread pools in ServiceMix
(http://servicemix.apache.org/thread-pools.html), ServiceMix and
servicemix-jms endpoints can be highly tuned to control the flow of
messages.
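
For what it's worth, the main knob you end up turning is the listener container's
concurrency. In servicemix-jms it is wired up through the endpoint configuration
rather than in Java, but underneath it amounts to something like the following
Spring sketch (the destination name and the concurrency limits are only examples):

import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class TunedListener {
    public static void main(String[] args) {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(factory);
        container.setDestinationName("backend.work");
        // Cap how many messages are processed at once; the container never
        // runs more listener threads than this.
        container.setConcurrentConsumers(1);
        container.setMaxConcurrentConsumers(4);
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // hand the message off to the backend component here
            }
        });
        container.afterPropertiesSet();
        container.start();
    }
}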

Furthermore, as you mention below, additional servicemix-jms endpoints
can be deployed that each subscribe to the same queue and load balance
across JMS consumers as James mentioned in his reply.

> b) make sure we can start up other ServiceMix instances (on other boxes) that will
> only have JMS provider/final consumer endpoints on them and can take some of
> the load off of the main ServiceMix.
>
> We currently do not want to remove the JMS provider/final endpoint
> completely from the main ServiceMix; we just want to make sure it will
> never overload the JVM.

Well this is where the component flow above confuses me, so I can't
quite respond yet. I don't quite understand what you're trying to do
exactly. Maybe there's some detail left out or something but the flow
above could be simplified to something like the following:

external client --> http-consumer --> jms-provider <-- external jms consumer

Are there additional details you're leaving out as to why the flow you
showed above contains a JMS consumer between the http-consumer and the
jms-provider?

> We also do not want the final endpoint to be a direct JMS client; we want it
> to be a JBI component that waits for ServiceMix to call it.

Again, I'm a bit confused here. Is the final client external to
ServiceMix? If so, ServiceMix won't call it at all. It's an external
JMS client's job to subscribe to a queue so that the underlying
ActiveMQ broker will push messages to the client.

Please clarify my questions so that I can elaborate further.

Bruce
-- 
perl -e 'print unpack("u30","D0G)U8V4\@4VYY9&5R\"F)R=6-E+G-N>61E<D\!G;6%I;\"YC;VT*"
);'

Apache ActiveMQ - http://activemq.org/
Apache Camel - http://activemq.org/camel/
Apache ServiceMix - http://servicemix.org/
Apache Geronimo - http://geronimo.apache.org/

Blog: http://bruceblog.org/

Re: jms limiting

Posted by Kevin k <ke...@gmail.com>.
Thanks again for your answer.

I think I'm going to back up and you can let me know if I'm on the wrong
track or not.

The scenario is I have a producer (http-consumer) that sends its message
to a jms consumer which places the message on the queue.

Then I have a jms producer (which is a servicemix endpoint) that reads the
message from the queue and forwards it on to the ultimate backend processor,
so my flow looks like this (leaving out the Jencks stuff):

message ---> http consumer ---> jms consumer ---> jms queue <-- jms provider ---> (final consumer)
             |--------------------------------- servicemix endpoints ---------------------------------|
Everything is within one JVM (the servicemix jvm)

Our main concern here is that since the final consumer is CPU/time intensive,
we do not want it taking too many resources away from ServiceMix itself, so
we want to
a) make sure that ServiceMix will not kick off too many of the JMS
provider/final consumer tasks, which would make the JVM/entire box too busy to
let ServiceMix do what it should be doing
b) make sure we can start up other ServiceMix instances (on other boxes) that will
only have JMS provider/final consumer endpoints on them and can take some of
the load off of the main ServiceMix.

We currently do not want to remove the JMS provider/final endpoint
completely from the main ServiceMix; we just want to make sure it will
never overload the JVM.

We also do not want the final endpoint to be a direct JMS client; we want it
to be a JBI component that waits for ServiceMix to call it.

Am I totally off base with either the requirements or the solution I have
laid out here?

Thanks again for your help, and I hope these questions are not too dumb.

-Kevin




Re: jms limiting

Posted by Bruce Snyder <br...@gmail.com>.
On Jan 24, 2008 12:01 PM, Kevin k <ke...@gmail.com> wrote:
>
> Thanks for your quick response.
>
> Just to understand your answer,
> Does every consumer that has been configured work on one message at a time,
> or does the dispatcher kick off a new consumer for every message it has?

A JMS consumer is an application that is completely separate from
the broker. It runs in a different JVM from the broker. The broker is
the message mediator between one JVM running a producer and a
different JVM running a consumer, and it looks like this:

(producer) --- sends message --> (broker) <--- consumes message --- (consumer)

The broker does not kick off a consumer. A consumer connects to a
broker and registers itself with the broker to indicate that it wants
to receive messages from a JMS destination. The broker then simply
dispatches messages to a registered consumer via the connection the
consumer made to the broker when it registered itself.
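
In code, that registration is nothing more than creating a consumer on a connection
to the broker, roughly like this (the broker URL and queue name are placeholders):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class RegisteredConsumer {
    public static void main(String[] args) throws Exception {
        // Connect to the broker and register interest in a queue.
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("my.queue");

        // Creating the consumer is the registration; from this point on the
        // broker dispatches messages down this connection, the consumer does
        // not poll for them.
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // handle (and, with AUTO_ACKNOWLEDGE, implicitly ack) the message
            }
        });
        connection.start();
    }
}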

> So let's say I have 100 messages in a queue and a single consumer configured.
> Would there be a single consumer that works on one message at a time, or
> would the dispatcher try to kick off 100 separate consumers?

Again, the broker does not start up consumers. It simply dispatches
messages to any registered consumers.

One very common scenario with many messages stacked up in a queue is
that message processing by a consumer can be very slow. (NOTE:
Receiving a message and actually processing that message to send the
ack are two different tasks.) If the broker dispatches messages to the
consumer faster than the consumer can process and ack them, a slow
consumer situation will be encountered. ActiveMQ provides some
configuration options to handle this situation:

http://activemq.apache.org/slow-consumer-handling.html
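
One of the simplest of those options is the consumer prefetch limit, which caps how
many messages the broker pushes to a consumer ahead of the acks it has received. It
can be set on the connection factory as sketched below (the value 10 is only an
example), or per destination with a destination option such as
my.queue?consumer.prefetchSize=10:

import org.apache.activemq.ActiveMQConnectionFactory;

public class PrefetchTuning {
    public static ActiveMQConnectionFactory newFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // The default queue prefetch is 1000; a slow consumer should get far
        // less so the broker does not keep feeding it work it cannot finish.
        factory.getPrefetchPolicy().setQueuePrefetch(10);
        return factory;
    }
}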

> Then let's say I wanted to have 5 messages worked on at a time.
> Would I need to configure 5 separate consumers, or would I do that through
> the dispatcher throttling?

Well it depends on what exactly you want to do. You could start up
five consumers if you want and have the broker more or less load
balance across them, but only if you're using a JMS queue. This is
because the JMS spec guarantees once-and-only-once delivery of a
message, i.e., no two consumers will be able to ack the same message
from a queue. On the other hand, if you're using a JMS topic, every
consumer registered on the topic is guaranteed to receive its own copy
of each message.
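
A sketch of the five-consumer case on a queue (queue name and broker URL are
placeholders): the broker dispatches each message to exactly one of the consumers,
so between them they share the backlog.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class CompetingConsumers {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        // Five consumers on the same queue; each gets its own session so the
        // five messages currently being worked on are processed in parallel.
        for (int i = 0; i < 5; i++) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("my.queue");
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(new MessageListener() {
                public void onMessage(Message message) {
                    // process the message; only this session is tied up by slow work
                }
            });
        }
    }
}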

> One more question, is there any documentation on the dispatcher throttling?

Well the info above about the slow consumer handling certainly
applies. Also, today I blogged about an article written by Hiram, one
of the architects of ActiveMQ, that discusses the threading in
ActiveMQ:

http://bsnyderblog.blogspot.com/2008/01/understanding-threads-allocated-in.html

Additionally, take a look at the following ActiveMQ documents:

http://activemq.apache.org/dispatch-policies.html
http://activemq.apache.org/consumer-dispatch-async.html
http://activemq.apache.org/how-do-i-change-dispatch-policy.html

Bruce
-- 
perl -e 'print unpack("u30","D0G)U8V4\@4VYY9&5R\"F)R=6-E+G-N>61E<D\!G;6%I;\"YC;VT*"
);'

Apache ActiveMQ - http://activemq.org/
Apache Camel - http://activemq.org/camel/
Apache ServiceMix - http://servicemix.org/
Apache Geronimo - http://geronimo.apache.org/

Blog: http://bruceblog.org/

Re: jms limiting

Posted by Kevin k <ke...@gmail.com>.
Thanks for your quick response.

Just to understand your answer,
Does every consumer that has been configured work on one message at a time,
or does the dispatcher kick off a new consumer for every message it has?

So let's say I have 100 messages in a queue and a single consumer configured.
Would there be a single consumer that works on one message at a time, or
would the dispatcher try to kick off 100 separate consumers?

Then let's say I wanted to have 5 messages worked on at a time.
Would I need to configure 5 separate consumers, or would I do that through
the dispatcher throttling?


One more question, is there any documentation on the dispatcher throttling?


Thanks again for helping out a newbie.
-Kevin



bsnyder wrote:
> 
> On Jan 24, 2008 10:38 AM, Kevin k <ke...@gmail.com> wrote:
>>
>> A quick question or two about JMS providers:
>> Let's say a JMS queue has many, many messages in it (messages going in faster
>> than they are being processed).
>>
>> 1) How many messages are worked on at once, or to ask it another way, how
>> many threads does the JMS provider kick off by default?
> 
> It's not a matter of threading in the broker as much as it's a matter
> of how fast the consumer is and how many consumers there are.
> Threading in the broker will certainly affect how quickly messages are
> delivered to any consumer, but if that consumer can't keep up with the
> message dispatching then you need to either throttle the dispatching,
> add more consumers, or both.
> 
>> 2) Is this number changeable?
>> 3) Assuming we have ActiveMQ set up correctly (load balanced, master/slave,
>> anything else?), are there any problems running the identical JMS
>> provider on several different ServiceMix instances to get load balancing?
> 
> As long as you have the same SAs deployed on each instance of
> ServiceMix and the underlying ActiveMQ instances are set up in a
> network of brokers
> (http://activemq.apache.org/networks-of-brokers.html), if/when one of
> the instances crashes, the messages will be load balanced. Setting up
> this type of environment requires a lot of testing because there are
> many possible configurations for ActiveMQ as you began to point out by
> mentioning master/slave, etc.
> 
> Are you experiencing some type of issue? If you are having a problem
> then we should discuss that problem specifically. It's always better
> to talk about a concrete problem that *is* happening instead of
> talking at such a high level about how things *should* work.
> 
> Bruce
> -- 
> perl -e 'print
> unpack("u30","D0G)U8V4\@4VYY9&5R\"F)R=6-E+G-N>61E<D\!G;6%I;\"YC;VT*"
> );'
> 
> Apache ActiveMQ - http://activemq.org/
> Apache Camel - http://activemq.org/camel/
> Apache ServiceMix - http://servicemix.org/
> Apache Geronimo - http://geronimo.apache.org/
> 
> Blog: http://bruceblog.org/
> 
> 



Re: jms limiting

Posted by Bruce Snyder <br...@gmail.com>.
On Jan 24, 2008 10:38 AM, Kevin k <ke...@gmail.com> wrote:
>
> A quick question or two about JMS providers:
> Let's say a JMS queue has many, many messages in it (messages going in faster
> than they are being processed).
>
> 1) How many messages are worked on at once, or to ask it another way, how
> many threads does the JMS provider kick off by default?

It's not a matter of threading in the broker as much as it's a matter
of how fast the consumer is and how many consumers there are.
Threading in the broker will certainly affect how quickly messages are
delivered to any consumer, but if that consumer can't keep up with the
message dispatching then you need to either throttle the dispatching,
add more consumers, or both.

> 2) Is this number changeable?
> 3) Assuming we have ActiveMQ set up correctly (load balanced, master/slave,
> anything else?), are there any problems running the identical JMS
> provider on several different ServiceMix instances to get load balancing?

As long as you have the same SAs deployed on each instance of
ServiceMix and the underlying ActiveMQ instances are set up in a
network of brokers
(http://activemq.apache.org/networks-of-brokers.html), if/when one of
the instances crashes, the messages will be load balanced. Setting up
this type of environment requires a lot of testing because there are
many possible configurations for ActiveMQ as you began to point out by
mentioning master/slave, etc.
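
If you run the brokers embedded, the network connector boils down to something like
the following sketch (the broker names, ports, and peer hostname are made up; the same
thing can be configured in activemq.xml instead of Java):

import org.apache.activemq.broker.BrokerService;

public class NetworkedBroker {
    public static void main(String[] args) throws Exception {
        // The broker embedded in this ServiceMix instance.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("smx-broker-1");
        broker.setPersistent(true);
        broker.addConnector("tcp://0.0.0.0:61616");
        // Join a network of brokers so messages can be forwarded to consumers
        // attached to the other instance's broker.
        broker.addNetworkConnector("static:(tcp://smx-host-2:61616)");
        broker.start();
    }
}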

Are you experiencing some type of issue? If you are having a problem
then we should discuss that problem specifically. It's always better
to talk about a concrete problem that *is* happening instead of
talking at such a high level about how things *should* work.

Bruce
-- 
perl -e 'print unpack("u30","D0G)U8V4\@4VYY9&5R\"F)R=6-E+G-N>61E<D\!G;6%I;\"YC;VT*"
);'

Apache ActiveMQ - http://activemq.org/
Apache Camel - http://activemq.org/camel/
Apache ServiceMix - http://servicemix.org/
Apache Geronimo - http://geronimo.apache.org/

Blog: http://bruceblog.org/

Re: jms limiting

Posted by ja...@gmail.com.
You only need a single ActiveMQ broker to get load balancing across
consumers. The broker will push messages as quickly as possible to the
consumers. If your consumers are slow, the broker will keep pending
messages on disk.

On 24/01/2008, Kevin k <ke...@gmail.com> wrote:
>
> A quick question or two about JMS providers:
> Let's say a JMS queue has many, many messages in it (messages going in faster
> than they are being processed).
>
> 1) How many messages are worked on at once, or to ask it another way, how
> many threads does the JMS provider kick off by default?
> 2) Is this number changeable?
> 3) Assuming we have ActiveMQ set up correctly (load balanced, master/slave,
> anything else?), are there any problems running the identical JMS
> provider on several different ServiceMix instances to get load balancing?
>
> Thanks in advance.
> -Kevin
> --
> View this message in context:
> http://www.nabble.com/jms-limiting-tp15070635s12049p15070635.html
> Sent from the ServiceMix - User mailing list archive at Nabble.com.
>
>


-- 
James
-------
http://macstrac.blogspot.com/

Open Source Integration
http://open.iona.com