Posted to users@activemq.apache.org by Leung Wang Hei <ge...@yahoo.com.hk> on 2015/05/19 12:49:24 UTC

Network bridge throughput capped at default Socket buffer size

Hi all,

There seems to be an invisible barrier in the socket buffer for the MQ
network bridge.  We expected that increasing the TCP socket buffer size
would give higher throughput, but it does not.  Here are the test details:

- 2 brokers (A, B) bridged together over a WAN link with 140ms network latency.
- A single duplex network connector is set up at broker B, statically
including one topic
- 10 producers, each sending 10K messages.  All are AMQObjectMessage.
- Socket buffer size set as a URL argument on the network connector at
broker B and on the transport connector at broker A
- Wireshark used to capture link traffic

The Wireshark capture shows that throughput is always capped at around
3.74 Mbit/s, the maximum expected with the default 64K socket buffer.  The
config details are below.

I don't expect a bug in MQ, am I missing something?  Any advice would be
greatly appreciated.


*Broker A*
<transportConnectors>
             <transportConnector name="openwire"
uri="tcp://0.0.0.0:61616?transport.socketBufferSize=10485760"/>
             <transportConnector name="openwirelog"
uri="tcp://0.0.0.0:61617"/>
             <transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
         </transportConnectors>

*Broker B*
 <destinationPolicy>
             <policyMap>
                 <policyEntries>

                     <policyEntry topic=">" producerFlowControl="false"
advisoryForDiscardingMessages="true" advisoryForSlowConsumers="true" >
                         <pendingSubscriberPolicy>
                             <vmCursor />
                         </pendingSubscriberPolicy>
                     </policyEntry>
                </policyEntries>
             </policyMap>
         </destinationPolicy>


<networkConnector name="nc1-hk"
uri="static://(tcp://brokerA:61616?socketBufferSize=10485760)" duplex="true"
networkTTL="2">
             <staticallyIncludedDestinations>
                 <topic physicalName="test"/>
             </staticallyIncludedDestinations>
</networkConnector>


*Linux traffic control*
tc qdisc add dev ens32 root handle 1: htb default 12
tc class add dev ens32 parent 1: classid 1:1 htb rate 20Mbit ceil 20MBit
tc qdisc add dev ens32 parent 1:1 handle 20: netem latency 140ms
tc filter add dev ens32 protocol ip parent 1:0 prio 1 u32 match ip dst
brokerB_Ip flowid 1:1
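
(If it helps to rule out the emulation itself, the shaping and the added delay
can be sanity-checked with something like the following; ens32 and brokerB_Ip
are the names used above, and this assumes the tc rules live on the host that
sends toward broker B.)

# Confirm the htb rate limit and the netem delay are attached where expected:
tc -s qdisc show dev ens32
tc -s class show dev ens32

# The ping round trip to the peer should go up by roughly the added 140ms:
ping -c 5 brokerB_Ip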


Best regards,
Leung Wang Hei




--
View this message in context: http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Network bridge throughput capped at default Socket buffer size

Posted by Peter Hicks <pe...@poggs.co.uk>.

On 22/05/15 07:11, Leung Wang Hei wrote:
> As in my second-to-last comment, iperf3 bandwidth testing shows a maximum
> bandwidth of 20Mbit/sec, which matches what is expected given the configured
> traffic control.
Just want to check - is this the maximum bandwidth over a single TCP 
session, or are you using multiple TCP sessions, or even UDP?


Peter


Re: Network bridge throughput capped at default Socket buffer size

Posted by Tim Bain <tb...@alumni.duke.edu>.
OK, and what are you seeing happen with the TCP congestion window of the
broker-to-broker connection?  Is it opening fully?
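
(One way to watch that, assuming a Linux host with iproute2's ss available, is
to filter on the OpenWire port from the configs in this thread and look at the
reported cwnd; this is just a sketch, not output from the actual brokers.)

# Show TCP internals (cwnd, rtt, send window) for the bridge connection:
ss -tin '( dport = :61616 or sport = :61616 )'

# Repeat it to get a rough time series and see whether cwnd keeps growing:
watch -n 1 "ss -tin '( dport = :61616 or sport = :61616 )'"
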
On May 22, 2015 12:29 AM, "Leung Wang Hei" <ge...@yahoo.com.hk> wrote:

> Hi Tim,
>
> Here are the OS config:
> *Broker A*
> $ cat /proc/sys/net/ipv4/tcp_rmem
> 4096    87380   16777216
> $ cat /proc/sys/net/ipv4/tcp_wmem
> 4096    87380   16777216
>
> *Broker B*
> $ cat /proc/sys/net/ipv4/tcp_rmem
> 4096    87380   16777216
> $ cat /proc/sys/net/ipv4/tcp_wmem
> 4096    87380   16777216
>
> As in my second-to-last comment, iperf3 bandwidth testing shows a maximum
> bandwidth of 20Mbit/sec, which matches what is expected given the configured
> traffic control.
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643p4696847.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>

Re: Network bridge throughput capped at default Socket buffer size

Posted by Leung Wang Hei <ge...@yahoo.com.hk>.
Hi Tim,

Here are the OS config:
*Broker A*
$ cat /proc/sys/net/ipv4/tcp_rmem
4096    87380   16777216
$ cat /proc/sys/net/ipv4/tcp_wmem
4096    87380   16777216

*Broker B*
$ cat /proc/sys/net/ipv4/tcp_rmem
4096    87380   16777216
$ cat /proc/sys/net/ipv4/tcp_wmem
4096    87380   16777216

As in my second-to-last comment, iperf3 bandwidth testing shows a maximum
bandwidth of 20Mbit/sec, which matches what is expected given the configured
traffic control.
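
(One thing that may be worth checking here, as an assumption on my part rather
than something shown in the thread: tcp_rmem/tcp_wmem govern the autotuned
buffer sizes, but when an application asks for a buffer explicitly, as ActiveMQ
does when socketBufferSize is set, the kernel caps the request at
net.core.rmem_max / net.core.wmem_max, which often default to only a few
hundred KB.)

# Current hard caps applied to explicitly requested socket buffers:
cat /proc/sys/net/core/rmem_max /proc/sys/net/core/wmem_max

# Raise them to at least the requested 10MB so setsockopt() is not clamped
# (temporary; persist via /etc/sysctl.conf or /etc/sysctl.d/ if it helps):
sysctl -w net.core.rmem_max=10485760
sysctl -w net.core.wmem_max=10485760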



--
View this message in context: http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643p4696847.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Network bridge throughput capped at default Socket buffer size

Posted by Tim Bain <tb...@alumni.duke.edu>.
You're right, I didn't catch that in your original message, sorry.

What did you find when you investigated my suggestions about the TCP
congestion window and your OS's max socket buffer size setting?

Also, have you confirmed that a non-ActiveMQ TCP socket connection can get
better throughput?  Do a sanity check and make sure this isn't your WAN
throttling you before you sink too much time into tweaking ActiveMQ.
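
(For what it's worth, a single-stream iperf3 run, assuming iperf3 is installed
on both broker hosts, is one way to run that sanity check over a single TCP
session; a rough sketch, where the 4M buffer value is only illustrative:)

# On broker A:
iperf3 -s

# On broker B: one TCP stream for 30 seconds, once with an enlarged socket
# buffer and once with the default, to compare:
iperf3 -c brokerA -P 1 -t 30 -w 4M
iperf3 -c brokerA -P 1 -t 30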

Tim
On May 21, 2015 12:17 AM, "Leung Wang Hei" <ge...@yahoo.com.hk> wrote:

> Tim,
>
> I have used "transport.socketBufferSize=x" on the transport connector at
> broker A and only "?socketBufferSize=x" on the broker B network connector.
> When x=-1, a warning is raised in the MQ log:
>
> /[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not
> start network bridge between:
> vm://activemq.auhkmq01?async=false&network=true and:
> tcp://brokerA:61616?socketBufferSize=-1 due to:
> java.lang.IllegalArgumentException: invalid receive size/
>
> If I prefix broker B config with "transport.", the parameter is considered
> invalid by MQ:
>
> /[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not
> connect to remote URI: tcp://brokerA:61616?transport.socketBufferSize=-1:
> Invalid connect parameters: {transport.socketBufferSize=-1}/
>
> It looks like my initial config is correct.
>
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643p4696748.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>

Re: Network bridge throughput capped at default Socket buffer size

Posted by Leung Wang Hei <ge...@yahoo.com.hk>.
Tim,

I have used "transport.socketBufferSize=x" on the transport connector at
broker A and only "?socketBufferSize=x" on the broker B network connector.
When x=-1, a warning is raised in the MQ log:

/[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not
start network bridge between:
vm://activemq.auhkmq01?async=false&network=true and:
tcp://brokerA:61616?socketBufferSize=-1 due to:
java.lang.IllegalArgumentException: invalid receive size/

If I prefix broker B config with "transport.", the parameter is considered
invalid by MQ:

/[WARN ] org.apache.activemq.network.DiscoveryNetworkConnector - Could not
connect to remote URI: tcp://brokerA:61616?transport.socketBufferSize=-1:
Invalid connect parameters: {transport.socketBufferSize=-1}/

It looks like my initial config is correct.




--
View this message in context: http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643p4696748.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: Network bridge throughput capped at default Socket buffer size

Posted by Tim Bain <tb...@alumni.duke.edu>.
Peter, I'm pretty sure that's why he's trying to adjust the socket buffer
size, and he's saying that the changes he's making aren't having the
desired effect.

Leung, I have a vague memory of having to prefix the URI option with something
(I'm pretty sure it was "transport.", as shown in the example at the
bottom of http://activemq.apache.org/tcp-transport-reference.html).  Give
it a try and see if that changes the behavior you're seeing.

Tim
On May 20, 2015 2:03 PM, "Peter Hicks" <pe...@poggs.co.uk> wrote:

> Hello
>
> On 19/05/15 11:49, Leung Wang Hei wrote:
>
>> There seems to be an invisible barrier in the socket buffer for the MQ
>> network bridge.  We expected that increasing the TCP socket buffer size
>> would give higher throughput, but it does not.  Here are the test details:
>>
>> - 2 brokers (A, B) bridged together over a WAN link with 140ms network latency.
>> - A single duplex network connector is set up at broker B, statically
>> including one topic
>> - 10 producers, each sending 10K messages.  All are AMQObjectMessage.
>> - Socket buffer size set as a URL argument on the network connector at
>> broker B and on the transport connector at broker A
>> - Wireshark used to capture link traffic
>>
>> The Wireshark capture shows that throughput is always capped at around
>> 3.74 Mbit/s, the maximum expected with the default 64K socket buffer.  The
>> config details are below.
>>
>> I don't expect a bug in MQ, am I missing something?  Any advice would be
>> greatly appreciated.
>>
> It's not a bug in ActiveMQ, it's the result of the Bandwidth/Delay
> Product - multiply the bandwidth of your link by the round trip time; that
> product is how much data has to be in flight to keep the link full.
>
> See http://en.wikipedia.org/wiki/TCP_tuning for more details - you need
> to increase the TCP window size at both broker A and broker B to something
> larger so you can have more data "on the wire".
>
>
> Peter
>
>

Re: Network bridge throughput capped at default Socket buffer size

Posted by Peter Hicks <pe...@poggs.co.uk>.
Hello

On 19/05/15 11:49, Leung Wang Hei wrote:
> There seems to be an invisible barrier in the socket buffer for the MQ
> network bridge.  We expected that increasing the TCP socket buffer size
> would give higher throughput, but it does not.  Here are the test details:
>
> - 2 brokers (A, B) bridged together over a WAN link with 140ms network latency.
> - A single duplex network connector is set up at broker B, statically
> including one topic
> - 10 producers, each sending 10K messages.  All are AMQObjectMessage.
> - Socket buffer size set as a URL argument on the network connector at
> broker B and on the transport connector at broker A
> - Wireshark used to capture link traffic
>
> The Wireshark capture shows that throughput is always capped at around
> 3.74 Mbit/s, the maximum expected with the default 64K socket buffer.  The
> config details are below.
>
> I don't expect a bug in MQ, am I missing something?  Any advice would be
> greatly appreciated.
>
It's not a bug in ActiveMQ, it's the result of the Bandwidth/Delay
Product - multiply the bandwidth of your link by the round trip time; that
product is how much data has to be in flight to keep the link full.

See http://en.wikipedia.org/wiki/TCP_tuning for more details - you need 
to increase the TCP window size at both broker A and broker B to 
something larger so you can have more data "on the wire".
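
(As a rough worked example with the numbers from this thread - a 20Mbit/s
shaped rate and 140ms of added latency - the arithmetic below matches the
observed ceiling; it is only a sketch of the calculation, not measurements
from the actual link.)

# Ceiling imposed by a 64KB window over a 140ms round trip:
awk 'BEGIN { printf "%.2f Mbit/s\n", 64*1024*8 / 0.140 / 1e6 }'   # ~3.74 Mbit/s

# Window needed to keep a 20Mbit/s link full at 140ms RTT:
awk 'BEGIN { printf "%.0f KB\n", 20e6 * 0.140 / 8 / 1024 }'       # ~342 KB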


Peter


Re: Network bridge throughput capped at default Socket buffer size

Posted by Christian Posta <ch...@gmail.com>.
What is the traffic across the WAN for an app other than ActiveMQ? Or what
is the speed of the connection when done NOT over the WAN?

WANs prioritize traffic, so I wonder if you're hitting a bottleneck in the WAN.

On Tue, May 19, 2015 at 3:49 AM, Leung Wang Hei <ge...@yahoo.com.hk>
wrote:

> Hi all,
>
> There seems to be an invisible barrier in the socket buffer for the MQ
> network bridge.  We expected that increasing the TCP socket buffer size
> would give higher throughput, but it does not.  Here are the test details:
>
> - 2 brokers (A, B) bridged together over a WAN link with 140ms network latency.
> - A single duplex network connector is set up at broker B, statically
> including one topic
> - 10 producers, each sending 10K messages.  All are AMQObjectMessage.
> - Socket buffer size set as a URL argument on the network connector at
> broker B and on the transport connector at broker A
> - Wireshark used to capture link traffic
>
> The Wireshark capture shows that throughput is always capped at around
> 3.74 Mbit/s, the maximum expected with the default 64K socket buffer.  The
> config details are below.
>
> I don't expect a bug in MQ, am I missing something?  Any advice would be
> greatly appreciated.
>
>
> *Broker A*
> <transportConnectors>
>              <transportConnector name="openwire"
> uri="tcp://0.0.0.0:61616?transport.socketBufferSize=10485760"/>
>              <transportConnector name="openwirelog"
> uri="tcp://0.0.0.0:61617"/>
>              <transportConnector name="stomp" uri="stomp://0.0.0.0:61613
> "/>
>          </transportConnectors>
>
> *Broker B*
>  <destinationPolicy>
>              <policyMap>
>                  <policyEntries>
>
>                      <policyEntry topic=">" producerFlowControl="false"
> advisoryForDiscardingMessages="true" advisoryForSlowConsumers="true" >
>                          <pendingSubscriberPolicy>
>                              <vmCursor />
>                          </pendingSubscriberPolicy>
>                      </policyEntry>
>                 </policyEntries>
>              </policyMap>
>          </destinationPolicy>
>
>
> <networkConnector name="nc1-hk"
> uri="static://(tcp://brokerA:61616?socketBufferSize=10485760)"
> duplex="true"
> networkTTL="2">
>              <staticallyIncludedDestinations>
>                  <topic physicalName="test"/>
>              </staticallyIncludedDestinations>
> </networkConnector>
>
>
> *Linux traffic control*
> tc qdisc add dev ens32 root handle 1: htb default 12
> tc class add dev ens32 parent 1: classid 1:1 htb rate 20Mbit ceil 20MBit
> tc qdisc add dev ens32 parent 1:1 handle 20: netem latency 140ms
> tc filter add dev ens32 protocol ip parent 1:0 prio 1 u32 match ip dst
> brokerB_Ip flowid 1:1
>
>
> Best regards,
> Leung Wang Hei
>
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Network-bridge-throughput-capped-at-default-Socket-buffer-size-tp4696643.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>



-- 
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io