Posted to users@activemq.apache.org by xabhi <xa...@gmail.com> on 2016/03/31 12:39:41 UTC

JMS to STOMP transformation causes throughput drop in STOMP consumers

Hi,
I am trying to benchmark throughput for my nodejs consumer (STOMP). The
producer is in Java and sends JMS text and map messages.
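
For reference, a minimal sketch of that producer (queue name and field values
are illustrative):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class BenchmarkProducer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("benchmark.queue"));

            // Text message: a STOMP consumer receives the body as-is,
            // with no broker-side conversion.
            TextMessage text =
                    session.createTextMessage("{\"symbol\":\"ABC\",\"price\":42.5}");
            producer.send(text);

            // Map message: the broker has to transform it for STOMP consumers
            // that subscribe with transformation:jms-map-json.
            MapMessage map = session.createMapMessage();
            map.setString("symbol", "ABC");
            map.setDouble("price", 42.5);
            producer.send(map);

            connection.close();
        }
    }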

With text messages, I see that the nodejs consumer is able to handle 10K
msgs/sec without any pending messages.

But when I send Map messages and the nodejs consumer subscribes with the
header 'transformation': 'jms-map-json', the throughput drops to 0.5K msgs/sec.
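
For reference, the SUBSCRIBE frame such a consumer sends looks roughly like
this (destination name is illustrative; the frame is terminated by a NUL
byte). The ActiveMQ-specific 'transformation' header is what asks the broker
to convert the MapMessage body to JSON:

    SUBSCRIBE
    id:sub-0
    destination:/queue/benchmark.queue
    ack:auto
    transformation:jms-map-json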

I am not able to understand where this bottleneck is coming from. The broker
has messages in the pending queue and I see unacknowledged messages in JConsole.

Why is it that the node consumer can consume text messages faster than map
messages if ultimately both are sent in TEXT format from ActiveMQ?

Does anyone from the ActiveMQ dev team know about this behavior? Any help
will be appreciated.

Thanks,
Abhishek





Re: JMS to STOMP transformation causes throughput drop in STOMP consumers

Posted by Tim Bain <tb...@alumni.duke.edu>.
Which process is the one spinning the CPU: the broker, or your client?

If it's the broker, you're in luck: the ActiveMQ broker is a Java process,
which means that all of the standard Java profiling tools (which will tell
you where a Java application is spending its time) are at your disposal and
can tell you why serialization is slow.  JVisualVM's profiler would
probably give you a pretty good starting point for answering your question,
and it ships as part of the JDK so you don't even have to install any
special tools...
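
For example, assuming a JDK 8 install with its bin directory on your PATH:

    jps -l          # find the PID of the broker's Java process
    jvisualvm       # attach to that PID and use the CPU sampler/profiler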

Tim

On Wed, Apr 6, 2016 at 2:17 AM, xabhi <xa...@gmail.com> wrote:

> Hi,
>
> I am seeing the same behavior with a Python STOMP consumer as well -
> throughput for TEXT msgs is 10K/s and for MAP messages with
> transformation it is 200 msgs/s.
>
> Is there a way to improve it for JMS Map messages? Some broker setting?
> Maybe plugging in a different serialization library instead of XStream?
>
> Thanks,
> Abhishek
>
>
>
>
>

Re: JMS to STOMP transformation causes throughput drop in STOMP consumers

Posted by xabhi <xa...@gmail.com>.
Hi,

I am seeing the same behavior with a Python STOMP consumer as well -
throughput for TEXT msgs is 10K/s and for MAP messages with transformation
it is 200 msgs/s.

Is there a way to improve it for JMS Map messages? Some broker setting?
Maybe plugging in a different serialization library instead of XStream?
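
One producer-side alternative, consistent with the text-message numbers above,
would be to serialize the map to JSON in the producer and send it as a plain
TextMessage, so the broker has nothing to transform per message. A rough
sketch (Jackson is assumed here purely for illustration; any JSON library
would do, and session/producer are the usual JMS objects):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.Map;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class JsonTextProducer {
        private static final ObjectMapper MAPPER = new ObjectMapper();

        // Serialize once in the producer and send as a TextMessage; STOMP
        // consumers then read the JSON body directly, with no transformation
        // header and no broker-side XStream work.
        static void sendAsJson(Session session, MessageProducer producer,
                               Map<String, Object> payload) throws Exception {
            TextMessage message =
                    session.createTextMessage(MAPPER.writeValueAsString(payload));
            producer.send(message);
        }
    }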

Thanks,
Abhishek



