Posted to users@qpid.apache.org by Dale Green <gr...@gmail.com> on 2016/07/01 12:35:27 UTC

Qpid JMS: client inactivity problem

Hi people,

I have another problem with Service Bus and Qpid JMS 0.9.0.

The broker uses 4 minutes idle timeout. With the sample code in the
examples, I verified that the client sends the first empty frame exactly 4
minutes after the consumer has connected. After that, Qpid sends an empty
frame every 2 minutes.

However, in the application I am working on (existing code), Service Bus
drops the connections too often (when no messages are being sent) because
of inactivity. I think the problem is that Qpid waits the whole 4 minutes
before sending the first empty frame. Moreover, whenever the problem
occurs, I see a TCP retransmission for exactly this frame, which may delay
it by 100-200 ms. Therefore, dropping the connection is perhaps correct.

I am appending some Qpid traces below. My questions are:
- Is the broker's behavior correct?
- How can I tell Qpid to send the heartbeats more often, which would
probably solve this problem?

Thanks!
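
For context, the half-interval heartbeat arithmetic described above (first empty frame after the remote's full 4-minute idle timeout, then one every 2 minutes) can be sketched as follows. This is a hypothetical helper, not actual Qpid JMS or Proton code; the 240000 ms value is the idleTimeOut Service Bus advertises in the Open frame in the trace below:

```java
// Sketch of AMQP 1.0 idle-timeout heartbeat scheduling: a peer should send
// an empty frame before the remote's advertised idle-timeout expires, and
// clients commonly aim for half that interval to leave a safety margin.
// Hypothetical helper, not Qpid JMS internals.
public class HeartbeatMath {

    // Milliseconds until the next empty frame is due, given the remote's
    // advertised idle-timeout and the time since we last sent anything.
    static long nextHeartbeatDelay(long remoteIdleTimeoutMs, long msSinceLastSend) {
        long target = remoteIdleTimeoutMs / 2; // pessimistic half-interval
        return Math.max(0, target - msSinceLastSend);
    }

    public static void main(String[] args) {
        // Service Bus advertises idleTimeOut=240000 (4 minutes).
        System.out.println(nextHeartbeatDelay(240_000, 0));      // 120000 -> 2 min
        System.out.println(nextHeartbeatDelay(240_000, 90_000)); // 30000  -> 30 s
        // Waiting the full 4 minutes instead leaves no margin for a delayed
        // (e.g. retransmitted) frame, which matches the drops described above.
    }
}
```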

11:48:38,001 TRACE [FRAMES] SENT: Open{
containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
offeredCapabilities=null,
desiredCapabilities=[sole-connection-for-container],
properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
11:48:38,006 TRACE [FRAMES] RECV: Open{
containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
desiredCapabilities=null, properties=null}
11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0, nextOutgoingId=1,
incomingWindow=5000, outgoingWindow=2047, handleMax=7,
offeredCapabilities=null, desiredCapabilities=null, properties=null}
11:48:38,144 TRACE [FRAMES] SENT:
Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
dynamicNodeProperties=null, distributionMode=null, filter=null,
defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
amqp:released:list, amqp:modified:list], capabilities=[queue]},
target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
11:48:38,179 TRACE [FRAMES] RECV:
Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
dynamicNodeProperties=null, distributionMode=null, filter=null,
defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
amqp:released:list, amqp:modified:list], capabilities=[queue]},
target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
maxMessageSize=262144, offeredCapabilities=null, desiredCapabilities=null,
properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647, handle=0,
deliveryCount=0, linkCredit=1000, available=null, drain=false, echo=false,
properties=null}
...
11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
...
11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
...
11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
...
11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
...
11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
(TCP Retransmission occurs within 100-200ms)
...
11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
...
11:53:38,009 TRACE [FRAMES] RECV:
Close{error=Error{condition=amqp:connection:forced, description='Die
Verbindung war länger als zulässig
inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
11:53:27', info=null}}
(The German description translates to: "The connection was inactive for
longer than permitted.")

Re: Qpid JMS: client inactivity problem

Posted by Robbie Gemmell <ro...@gmail.com>.
I'm going to assume you just meant to say producers instead of
consumers everywhere in the previous mail? Not that it necessarily
matters here.

It isn't invalid from a client or protocol perspective to create a
session and then not create a producer or consumer on it, though it
wouldn't be particularly typical behaviour. I could see a
server/service provider considering a connection with no producers or
consumers on it for an extended period to be undesirable behaviour,
perhaps indicative of a problem application. You'd have to check with
Microsoft about such behaviour.

Robbie

On 5 July 2016 at 10:24, Dale Green <gr...@gmail.com> wrote:
> Of course, producers are needed, not consumers.
>
> On Tue, Jul 5, 2016 at 11:13 AM, Dale Green <gr...@gmail.com>
> wrote:
>
>> Hi Robbie,
>>
>> It seems that I made multiple mistakes in evaluating the problems, but I
>> finally managed to reproduce it with some very simple code.
>>
>> It looks like this timeout after 5 minutes always happens when there is no
>> consumer using the session. That is, we have only Open, Begin and Empty
>> frames, no Attach.
>>
>> Can you tell me whether this is correct broker behavior, or whether I am
>> using the client in the wrong way? This is existing code, and such
>> connections are used only for sending messages, so the consumers are
>> created on demand.
>>
>> Thanks!
>>
>> 2016-07-05 10:50:48,978 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Getting SSLContext instance using
>> protocol: TLS
>> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Default protocols from the SSLEngine:
>> [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
>> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
>> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
>> TLSv1.2]
>> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Default cipher suites from the SSLEngine:
>> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
>> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> 2016-07-05 10:50:49,196 [ntLoopGroup-2-1] - TRACE
>> TransportSupport               - Enabled cipher suites:
>> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
>> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> 2016-07-05 10:50:49,229 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - Channel has become active! Channel is [id:
>> 0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
>> 2016-07-05 10:50:49,356 [ntLoopGroup-2-1] - TRACE
>> NettySslTransport              - SSL Handshake has completed: [id:
>> 0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
>> 2016-07-05 10:50:49,401 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 8 bytes
>> 2016-07-05 10:50:49,404 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 8 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
>> 2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: CONNECTION_INIT
>> 2016-07-05 10:50:49,405 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 39 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
>> 2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
>> 2016-07-05 10:50:49,411 [rifyHost=false]] - DEBUG
>> SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
>> the available credentials are not sufficient
>> 2016-07-05 10:50:49,411 [rifyHost=false]] - INFO
>> SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
>> 2016-07-05 10:50:49,412 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 50 bytes
>> 2016-07-05 10:50:49,415 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 26 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
>> 2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Open{
>> containerId='ID:d572f504-5ef7-4d72-9022-ee0400dbd194:1',
>> hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
>> offeredCapabilities=null,
>> desiredCapabilities=[sole-connection-for-container],
>> properties={product=QpidJMS, version=0.11.0-SNAPSHOT, platform=JVM:
>> 1.7.0_101, 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> 2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 265 bytes
>> 2016-07-05 10:50:49,419 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 8 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> 2016-07-05 10:50:49,421 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 72 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
>> 2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV: Open{
>> containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>> maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>> desiredCapabilities=null, properties=null}
>> 2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
>> 2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_INIT
>> 2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> 2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Begin{remoteChannel=null,
>> nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> properties=null}
>> 2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 32 bytes
>> 2016-07-05 10:50:49,427 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 34 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV: Begin{remoteChannel=0,
>> nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>> 2016-07-05 10:50:49,427 [rifyHost=false]] - DEBUG
>> AmqpConnectionBuilder          - AmqpConnection {
>> ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 } is now open:
>> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck being initiated, initial
>> delay: 30000
>> 2016-07-05 10:50:49,428 [rifyHost=false]] - INFO
>> JmsConnection                  - Connection
>> ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 connected to remote Broker:
>> amqps://10.10.0.62:5671/?transport.verifyHost=false
>> 2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Begin{remoteChannel=null,
>> nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> properties=null}
>> 2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 32 bytes
>> 2016-07-05 10:50:49,438 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 34 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> 2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV: Begin{remoteChannel=1,
>> nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> 2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_INIT
>> 2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> 2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>> 2016-07-05 10:51:19,428 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:51:49,429 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:52:19,429 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:52:49,429 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Empty Frame
>> 2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 8 bytes
>> 2016-07-05 10:53:19,431 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:53:49,431 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:54:19,431 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:54:49,432 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Empty Frame
>> 2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 8 bytes
>> 2016-07-05 10:55:19,433 [rifyHost=false]] - TRACE
>> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
>> 30000
>> 2016-07-05 10:55:49,410 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 15 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 15, cap: 53)
>> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV: End{error=null}
>> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
>> 2016-07-05 10:55:49,411 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - New data read: 190 bytes incoming:
>> UnpooledHeapByteBuf(ridx: 0, widx: 190, cap: 229)
>> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
>> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: End{error=null}
>> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 12 bytes
>> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV: End{error=null}
>> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
>> FRAMES                         - RECV:
>> Close{error=Error{condition=amqp:connection:forced, description='Die
>> Verbindung war länger als zulässig
>> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
>> 08:55:41', info=null}}
>> 2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
>> 2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_CLOSE
>> Connection ExceptionListener fired, exiting.
>> javax.jms.JMSException: Die Verbindung war länger als zulässig
>> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
>> 08:55:41 [condition = amqp:connection:forced]
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:148)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:103)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.remotelyClosed(AmqpAbstractResource.java:147)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.processRemoteClose(AmqpAbstractResource.java:242)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:771)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:92)
>>     at
>> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:699)
>>     at
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>     at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>>     at
>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>>     at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>     at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>     at java.lang.Thread.run(Thread.java:745)
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: TRANSPORT_TAIL_CLOSED
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_CLOSE
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: End{error=null}
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> FRAMES                         - SENT: Close{error=null}
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> NettyTcpTransport              - Attempted write of: 24 bytes
>> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
>> AmqpProvider                   - Cancelling scheduled IdleTimeoutCheck
>> 2016-07-05 10:55:49,419 [ntLoopGroup-2-1] - TRACE
>> NettyTcpTransport              - Channel has gone inactive! Channel is [id:
>> 0x5d81e8cf, /10.10.0.180:38039 :> /10.10.0.62:5671]
>>
>> On Mon, Jul 4, 2016 at 5:01 PM, Robbie Gemmell <ro...@gmail.com>
>> wrote:
>>
>>> Hi Dale,
>>>
>>> There doesn't seem to be much more to try on the client side itself.
>>> The only other thing you could do would be to use a modified local
>>> copy of Proton to ignore the server's advertised 4-minute timeout value
>>> and use something else for remoteIdleTimeout in its 'tick'
>>> calculations, to try to force it to send even more often than it
>>> does, and see if that makes any difference. It shouldn't.
>>>
>>> I think you'll need to seek support from Microsoft in either case. The
>>> client received an advertised server idle-timeout of 4 minutes, and
>>> generated heartbeats or real traffic at gaps of no more than 2.5
>>> minutes with that change in place, and the server still eventually
>>> closed the connection for being idle even though the timeout was never
>>> breached, regardless of whether the advertised value was half the real
>>> timeout or not. That it appears to behave differently if there are
>>> multiple connections is further strangeness, since this is a
>>> per-connection mechanism.
>>>
>>> Robbie
>>>
>>> On 4 July 2016 at 15:21, Dale Green <gr...@gmail.com> wrote:
>>> > Hi Robbie,
>>> >
>>> > Thank you for your efforts!
>>> >
>>> > I hardcoded this scheduling to be 30 sec. Then the behavior is as
>>> > follows:
>>> > - the first empty packet is sent after 2.5 min;
>>> > - after that, the heartbeats are sent every 2 min.
>>> >
>>> > However, this doesn't help, as the broker closes the connections (all
>>> > but the first one, when there are multiple connections!) 5 min after
>>> > "SENT flow". Exactly like before.
>>> >
>>> > So, I am still trying to find out what makes Service Bus unhappy.
>>> >
>>> >
>>> >
>>> > On Mon, Jul 4, 2016 at 1:45 PM, Robbie Gemmell <robbie.gemmell@gmail.com>
>>> > wrote:
>>> >
>>> >> Hi Dale,
>>> >>
>>> >> I tried to reproduce this on Friday using the existing idle-timeout
>>> >> tests for the client and wasn't able to; the first heartbeat was sent
>>> >> after 2 minutes. Upon a further look today I see why, and after
>>> >> tweaking the test it now goes out after 4 minutes, so I can see what
>>> >> is going on.
>>> >>
>>> >> The way the client drives the idleTimeout processing in Proton results
>>> >> in a corner case that makes it possible for it to take up to the full
>>> >> timeout period advertised by the server before the first heartbeat is
>>> >> sent after prior non-idle activity, rather than pessimistically
>>> >> sending at a faster rate as it otherwise does. As ServiceBus seems
>>> >> (your latest mail makes this less clear) not to have followed the path
>>> >> of advertising half its real timeout, as recommended (but annoyingly
>>> >> not absolutely required) by the AMQP spec specifically to avoid this
>>> >> scenario, there is a race between that heartbeat being sent and
>>> >> whether it arrives in time before ServiceBus decides to kill the
>>> >> connection. The retransmission you mentioned seeing could be enough
>>> >> of a delay to ensure that it didn't.
>>> >>
>>> >> I need to have a closer look at Proton to see if we can update it to
>>> >> better accommodate how the client drives the timeout processing, as
>>> >> well as the other usage pattern, without introducing undesirable
>>> >> behaviour for either or changing the API.
>>> >>
>>> >> I do however think I see a way to alter the client to avoid the issue
>>> >> in the meantime. Try the change at https://paste.apache.org/1oCe
>>> >> (which applies to master or 0.10.0 of
>>> >> https://github.com/apache/qpid-jms, and shows the code areas to look
>>> >> at, as you were asking). It makes the client 'tick' Proton to process
>>> >> the timeout handling more often than it normally would / should be
>>> >> necessary, which in this case ensures the heartbeat is actually sent
>>> >> before the peer's full advertised timeout is reached. In your case the
>>> >> patch should make the outlier get sent after at most 3 minutes.
>>> >>
>>> >> We have previously discussed stopping Proton from sending heartbeats
>>> >> much faster than necessary as it currently does, because it results in
>>> >> a significant amount of unnecessary traffic/processing on servers with
>>> >> lots of idle clients and/or servers advertising smaller timeouts. If
>>> >> we do that, I think we would still try to send them a little faster
>>> >> than necessary, to avoid spurious timeouts against servers that don't
>>> >> advertise half their actual timeout, but I think it would still make
>>> >> sense for us to add an override to allow toggling it to send more
>>> >> often than it otherwise would, since it would help in situations such
>>> >> as this plus another that came up previously.
>>> >>
>>> >> Robbie
>>> >>
>>> >> On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
>>> >> > Hi Robbie,
>>> >> >
>>> >> > I was debugging a bit and saw that the broker behaves differently if
>>> >> > there is a single connection or multiple connections. That is, when
>>> >> > I have only one connection, it is not closed even after 5 minutes
>>> >> > without heartbeats. So, I'll try to observe the behavior with 2 or
>>> >> > more connections now.
>>> >> >
>>> >> > However, I am now appending the Qpid logs of a simple consumer, so
>>> >> > you can see the client behavior. For me, it is still not clear why
>>> >> > we don't have a heartbeat at 09:46:56,838. Can you tell me where in
>>> >> > the code this functionality could be changed, so I may try a quick
>>> >> > fix to see what the broker would do then?
>>> >> >
>>> >> > 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Getting SSLContext instance using
>>> >> > protocol: TLS
>>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Default protocols from the
>>> SSLEngine:
>>> >> > [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
>>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Disabled protocols: [SSLv2Hello,
>>> SSLv3]
>>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
>>> >> > TLSv1.2]
>>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Default cipher suites from the
>>> >> SSLEngine:
>>> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
>>> TLS_RSA_WITH_AES_256_CBC_SHA256,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>>> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>>> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>>> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>>> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
>>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
>>> TLS_RSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>>> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>>> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
>>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>>> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
>>> SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>>> >> > TransportSupport               - Enabled cipher suites:
>>> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
>>> TLS_RSA_WITH_AES_256_CBC_SHA256,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>>> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>>> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>>> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>>> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
>>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>>> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>>> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
>>> TLS_RSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>>> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>>> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>>> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
>>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>>> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>>> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>>> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
>>> SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>>> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>>> >> > 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - Channel has become active! Channel
>>> is
>>> >> [id:
>>> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>>> >> > 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
>>> >> > NettySslTransport              - SSL Handshake has completed: [id:
>>> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>>> >> > 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
>>> >> > 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: CONNECTION_INIT
>>> >> > 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 39 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
>>> >> > 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
>>> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
>>> >> > SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
>>> >> > the available credentials are not sufficient
>>> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
>>> >> > SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
>>> >> > 2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 50 bytes
>>> >> > 2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 26 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
>>> >> > 2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Open{
>>> >> > containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
>>> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>>> >> > idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
>>> >> > offeredCapabilities=null,
>>> >> > desiredCapabilities=[sole-connection-for-container],
>>> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
>>> >> > 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>>> >> > 2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 256 bytes
>>> >> > 2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>>> >> > 2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 72 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
>>> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV: Open{
>>> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>>> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>>> >> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>>> >> > desiredCapabilities=null, properties=null}
>>> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
>>> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
>>> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>>> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
>>> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>>> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>>> >> > properties=null}
>>> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>>> >> > 2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV: Begin{remoteChannel=0,
>>> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>>> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
>>> >> > AmqpConnectionBuilder          - AmqpConnection {
>>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
>>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck being initiated, initial
>>> >> > delay: 120000
>>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
>>> >> > JmsConnection                  - Connection
>>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote Broker:
>>> >> > amqps://10.10.0.62:5671/?transport.verifyHost=false
>>> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
>>> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>>> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>>> >> > properties=null}
>>> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>>> >> > 2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV: Begin{remoteChannel=1,
>>> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>>> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
>>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>>> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT:
>>> >> > Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>>> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>>> >> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
>>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
>>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
>>> >> > maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
>>> >> > properties=null}
>>> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 285 bytes
>>> >> > 2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 413 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
>>> >> > 2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV:
>>> >> > Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>>> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>>> >> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
>>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
>>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>>> >> > maxMessageSize=262144, offeredCapabilities=null, desiredCapabilities=null,
>>> >> > properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
>>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: LINK_INIT
>>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
>>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
>>> >> > 2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
>>> >> > AmqpConsumer                   - Consumer
>>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional credit: 10
>>> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Flow{nextIncomingId=1,
>>> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647, handle=0,
>>> >> > deliveryCount=0, linkCredit=10, available=null, drain=false, echo=false,
>>> >> > properties=null}
>>> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>>> >> > 2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>>> >> > 2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV: Empty Frame
>>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
>>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>>> >> > 2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - RECV: Empty Frame
>>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>>> >> > FRAMES                         - SENT: Empty Frame
>>> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>>> >> > 2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
>>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with delay: 120000
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <
>>> robbie.gemmell@gmail.com
>>> >> >
>>> >> > wrote:
>>> >> >
>>> >> >> Hi Dale,
>>> >> >>
>>> >> >> It looks like you might have snipped some of the logs. If you removed
>>> >> >> any frame logging we will need to see that too (well, at least what
>>> >> >> frames and when) to better reason about what the client is (or is not)
>>> >> >> doing, as heartbeat frames are only sent in the absence of other
>>> >> >> frames. It would also be good to enable trace on the logger
>>> >> >> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
>>> >> >> from the idle timeout processing task.
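
(For reference, enabling that trace logger with a Log4j 1.x properties file might look like the fragment below; this assumes Log4j is the SLF4J backend in use, so adjust to whatever logging configuration your application actually has.)

```properties
# Assumed Log4j 1.x configuration fragment; logger name is from the email above.
log4j.logger.org.apache.qpid.jms.provider.amqp.AmqpProvider=TRACE
```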
>>> >> >>
>>> >> >> The decision of whether to send a heartbeat frame (or close the
>>> >> >> connection due to not receiving any frames) is taken by the Qpid
>>> >> >> Proton engine the client uses, with the client scheduling the
>>> >> >> processing task needed for it to do that. There isn't currently a way
>>> >> >> to make it send heartbeats more often than Proton decides it needs to,
>>> >> >> based on what the server advertises and what other activity is
>>> >> >> occurring.
>>> >> >>
>>> >> >> The AMQP spec
>>> >> >> (http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out)
>>> >> >> indicates that peers SHOULD advertise half their actual timeout to
>>> >> >> avoid spurious timeouts. It would seem that Service Bus isn't doing
>>> >> >> that, since it advertised 4 minutes and the logs indicate the
>>> >> >> connection was closed in less than 5 minutes, as opposed to at least 8.
>>> >> >>
>>> >> >> That said, Proton actually pessimistically assumes that the peer
>>> >> >> hasn't advertised half its timeout and so uses half the peer's
>>> >> >> advertised value, meaning it should send heartbeats at either twice
>>> >> >> or four times the frequency of the peer's actual timeout if no other
>>> >> >> frames are being sent. I see you have a 121-second timeout specified
>>> >> >> client side (given the client's advertised idle-timeout of 60.5s)
>>> >> >> too, and that should mean the client should be doing the idle-timeout
>>> >> >> processing at least that often, with the absence of any heartbeat
>>> >> >> frames being sent suggesting Proton thinks it doesn't need to.
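
To make the timing arithmetic above concrete, here is a small illustrative sketch (not Proton's actual code; the method name is invented for illustration) of how the pessimistic "halve the peer's advertised idle-timeout" rule works out for the values seen in the logs:

```java
public class HeartbeatMath {
    // Illustrative only: Proton pessimistically treats the peer's advertised
    // idle-timeout as its full timeout and halves it when deciding how often
    // heartbeats must go out in the absence of other frames.
    static long heartbeatIntervalMillis(long remoteAdvertisedIdleTimeout) {
        return remoteAdvertisedIdleTimeout / 2;
    }

    public static void main(String[] args) {
        // Service Bus advertises idleTimeOut=240000 (4 minutes) in its Open
        // frame, so heartbeats should go out at least every 120000 ms.
        System.out.println(heartbeatIntervalMillis(240000));
    }
}
```

With the broker's 240000 ms advertisement this gives the 2-minute heartbeat cadence visible in the IdleTimeoutCheck log lines above.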
>>> >> >>
>>> >> >> I'll take a look at the heartbeat generation stuff in Proton, but if
>>> >> >> you could send the updated logging that would be good.
>>> >> >>
>>> >> >> Robbie
>>> >> >>
>>> >> >> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com> wrote:
>>> >> >> > Hi people,
>>> >> >> >
>>> >> >> > I have another problem with Service Bus and Qpid JMS 0.9.0.
>>> >> >> >
>>> >> >> > The broker uses a 4-minute idle timeout. With the sample code in the
>>> >> >> > examples, I verified that the client sends the first empty frame
>>> >> >> > exactly 4 minutes after the consumer has connected. After that, Qpid
>>> >> >> > sends an empty frame every 2 minutes.
>>> >> >> >
>>> >> >> > However, in the application I am working on (existing code), Service
>>> >> >> > Bus drops the connections too often (when there are no messages
>>> >> >> > sent) because of inactivity. I think the problem is that Qpid waits
>>> >> >> > the whole 4 minutes before sending the first empty frame. Moreover,
>>> >> >> > whenever the problem occurs I see a TCP retransmission for exactly
>>> >> >> > this frame, which may delay it by 100-200ms. Therefore, dropping the
>>> >> >> > connection is perhaps correct.
>>> >> >> >
>>> >> >> > I am appending some Qpid traces down here. My questions are:
>>> >> >> > - Is the broker's behavior correct?
>>> >> >> > - How can I tell Qpid to send the heartbeats more often, to probably
>>> >> >> > solve this problem?
>>> >> >> >
>>> >> >> > Thanks!
>>> >> >> >
>>> >> >> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
>>> >> >> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
>>> >> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>>> >> >> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
>>> >> >> > offeredCapabilities=null,
>>> >> >> > desiredCapabilities=[sole-connection-for-container],
>>> >> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM:
>>> 1.7.0_80,
>>> >> >> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>>> >> >> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
>>> >> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7',
>>> hostname='null',
>>> >> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>>> >> >> > outgoingLocales=null, incomingLocales=null,
>>> offeredCapabilities=null,
>>> >> >> > desiredCapabilities=null, properties=null}
>>> >> >> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
>>> >> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>>> >> >> > handleMax=65535, offeredCapabilities=null,
>>> desiredCapabilities=null,
>>> >> >> > properties=null}
>>> >> >> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
>>> >> >> nextOutgoingId=1,
>>> >> >> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>>> >> >> > offeredCapabilities=null, desiredCapabilities=null,
>>> properties=null}
>>> >> >> > 11:48:38,144 TRACE [FRAMES] SENT:
>>> >> >> >
>>> >> >>
>>> >>
>>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>>> >> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED,
>>> rcvSettleMode=FIRST,
>>> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
>>> durable=NONE,
>>> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>>> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>>> >> >> > defaultOutcome=Modified{deliveryFailed=true,
>>> undeliverableHere=null,
>>> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>>> >> >> amqp:rejected:list,
>>> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>>> >> >> > target=Target{address='null', durable=NONE,
>>> expiryPolicy=SESSION_END,
>>> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>>> >> capabilities=null},
>>> >> >> > unsettled=null, incompleteUnsettled=false,
>>> initialDeliveryCount=null,
>>> >> >> > maxMessageSize=null, offeredCapabilities=null,
>>> >> desiredCapabilities=null,
>>> >> >> > properties=null}
>>> >> >> > 11:48:38,179 TRACE [FRAMES] RECV:
>>> >> >> >
>>> >> >>
>>> >>
>>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>>> >> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED,
>>> rcvSettleMode=FIRST,
>>> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
>>> durable=NONE,
>>> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>>> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>>> >> >> > defaultOutcome=Modified{deliveryFailed=true,
>>> undeliverableHere=null,
>>> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>>> >> >> amqp:rejected:list,
>>> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>>> >> >> > target=Target{address='null', durable=NONE,
>>> expiryPolicy=SESSION_END,
>>> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>>> >> capabilities=null},
>>> >> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>>> >> >> > maxMessageSize=262144, offeredCapabilities=null,
>>> >> >> desiredCapabilities=null,
>>> >> >> >
>>> >> >>
>>> >>
>>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
>>> >> >> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
>>> >> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>>> >> >> handle=0,
>>> >> >> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
>>> >> >> echo=false,
>>> >> >> > properties=null}
>>> >> >> > ...
>>> >> >> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
>>> >> >> > ...
>>> >> >> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
>>> >> >> > ...
>>> >> >> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
>>> >> >> > ...
>>> >> >> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
>>> >> >> > ...
>>> >> >> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
>>> >> >> > (TCP Retransmission occurs within 100-200ms)
>>> >> >> > ...
>>> >> >> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
>>> >> >> > ...
>>> >> >> > 11:53:38,009 TRACE [FRAMES] RECV:
>>> >> >> > Close{error=Error{condition=amqp:connection:forced, description='Die
>>> >> >> > Verbindung war länger als zulässig
>>> >> >> > inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
>>> >> >> > 11:53:27', info=null}}
>>> >> >> > (The German description translates to: "The connection was inactive
>>> >> >> > for longer than permitted.")
>>> >> >>
>>> >> >>
>>> ---------------------------------------------------------------------
>>> >> >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>>> >> >> For additional commands, e-mail: users-help@qpid.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org


Re: Qpid JMS: client inactivity problem

Posted by Dale Green <gr...@gmail.com>.
Of course, I meant producers, not consumers.

On Tue, Jul 5, 2016 at 11:13 AM, Dale Green <gr...@gmail.com>
wrote:

> Hi Robbie,
>
> It seems I made several mistakes while evaluating the problem, but I
> finally managed to reproduce it with some very simple code.
>
> It looks like this timeout after 5 minutes always happens when no
> consumer is using the session. That is, there are only Open, Begin, and
> Empty frames, no Attach.
>
> Can you tell me whether this is the correct broker behavior, or whether I
> am using the client in the wrong way? This is existing code and such
> connections are used only for sending messages, so the consumers are
> created on demand.
>
> Thanks!
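
One knob worth noting alongside this question: the Qpid JMS client exposes an `amqp.idleTimeout` connection-URI option (milliseconds) that controls the idle-timeout the client requests; the exact semantics should be checked against the client's configuration documentation for your version. A minimal sketch of building such a URI, reusing the broker address from the logs:

```java
public class BrokerUri {
    // Build a connection URI suitable for passing to JmsConnectionFactory.
    // amqp.idleTimeout is the Qpid JMS URI option for the client's requested
    // idle timeout in milliseconds; verify behaviour against the docs.
    static String brokerUri(String hostPort, long idleTimeoutMillis) {
        return "amqps://" + hostPort
                + "?transport.verifyHost=false"
                + "&amqp.idleTimeout=" + idleTimeoutMillis;
    }

    public static void main(String[] args) {
        System.out.println(brokerUri("10.10.0.62:5671", 60000));
    }
}
```

The resulting string would then be passed to `new JmsConnectionFactory(uri)` (assuming the qpid-jms-client dependency is on the classpath).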
>
> 2016-07-05 10:50:48,978 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Getting SSLContext instance using
> protocol: TLS
> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Default protocols from the SSLEngine:
> [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
> TLSv1.2]
> 2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Default cipher suites from the SSLEngine:
> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> 2016-07-05 10:50:49,196 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Enabled cipher suites:
> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> 2016-07-05 10:50:49,229 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - Channel has become active! Channel is [id:
> 0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
> 2016-07-05 10:50:49,356 [ntLoopGroup-2-1] - TRACE
> NettySslTransport              - SSL Handshake has completed: [id:
> 0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
> 2016-07-05 10:50:49,401 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 8 bytes
> 2016-07-05 10:50:49,404 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 8 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
> 2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_INIT
> 2016-07-05 10:50:49,405 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 39 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
> 2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
> 2016-07-05 10:50:49,411 [rifyHost=false]] - DEBUG
> SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
> the available credentials are not sufficient
> 2016-07-05 10:50:49,411 [rifyHost=false]] - INFO
> SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
> 2016-07-05 10:50:49,412 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 50 bytes
> 2016-07-05 10:50:49,415 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 26 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
> 2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Open{
> containerId='ID:d572f504-5ef7-4d72-9022-ee0400dbd194:1',
> hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
> offeredCapabilities=null,
> desiredCapabilities=[sole-connection-for-container],
> properties={product=QpidJMS, version=0.11.0-SNAPSHOT, platform=JVM:
> 1.7.0_101, 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> 2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 265 bytes
> 2016-07-05 10:50:49,419 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 8 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> 2016-07-05 10:50:49,421 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 72 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
> 2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
> FRAMES                         - RECV: Open{
> containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
> maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
> desiredCapabilities=null, properties=null}
> 2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
> 2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_INIT
> 2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> 2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Begin{remoteChannel=null,
> nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> properties=null}
> 2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 32 bytes
> 2016-07-05 10:50:49,427 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 34 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
> FRAMES                         - RECV: Begin{remoteChannel=0,
> nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> offeredCapabilities=null, desiredCapabilities=null, properties=null}
> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> 2016-07-05 10:50:49,427 [rifyHost=false]] - DEBUG
> AmqpConnectionBuilder          - AmqpConnection {
> ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 } is now open:
> 2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck being initiated, initial
> delay: 30000
> 2016-07-05 10:50:49,428 [rifyHost=false]] - INFO
> JmsConnection                  - Connection
> ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 connected to remote Broker:
> amqps://10.10.0.62:5671/?transport.verifyHost=false
> 2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Begin{remoteChannel=null,
> nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> properties=null}
> 2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 32 bytes
> 2016-07-05 10:50:49,438 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 34 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> 2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
> FRAMES                         - RECV: Begin{remoteChannel=1,
> nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> offeredCapabilities=null, desiredCapabilities=null, properties=null}
> 2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_INIT
> 2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> 2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> 2016-07-05 10:51:19,428 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:51:49,429 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:52:19,429 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:52:49,429 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Empty Frame
> 2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 8 bytes
> 2016-07-05 10:53:19,431 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:53:49,431 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:54:19,431 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:54:49,432 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Empty Frame
> 2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 8 bytes
> 2016-07-05 10:55:19,433 [rifyHost=false]] - TRACE
> AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
> 30000
> 2016-07-05 10:55:49,410 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 15 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 15, cap: 53)
> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
> FRAMES                         - RECV: End{error=null}
> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
> 2016-07-05 10:55:49,411 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 190 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 190, cap: 229)
> 2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: End{error=null}
> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 12 bytes
> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
> FRAMES                         - RECV: End{error=null}
> 2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
> FRAMES                         - RECV:
> Close{error=Error{condition=amqp:connection:forced, description='Die
> Verbindung war länger als zulässig
> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
> 08:55:41', info=null}}
> 2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
> 2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_CLOSE
> Connection ExceptionListener fired, exiting.
> javax.jms.JMSException: Die Verbindung war länger als zulässig
> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
> 08:55:41 [condition = amqp:connection:forced]
>     at
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:148)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:103)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.remotelyClosed(AmqpAbstractResource.java:147)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.processRemoteClose(AmqpAbstractResource.java:242)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:771)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:92)
>     at
> org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:699)
>     at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
>     at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: TRANSPORT_TAIL_CLOSED
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_CLOSE
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: End{error=null}
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> FRAMES                         - SENT: Close{error=null}
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 24 bytes
> 2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
> AmqpProvider                   - Cancelling scheduled IdleTimeoutCheck
> 2016-07-05 10:55:49,419 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - Channel has gone inactive! Channel is [id:
> 0x5d81e8cf, /10.10.0.180:38039 :> /10.10.0.62:5671]
>
> On Mon, Jul 4, 2016 at 5:01 PM, Robbie Gemmell <ro...@gmail.com>
> wrote:
>
>> Hi Dale,
>>
>> There doesn't seem to be much more to try on the client side itself.
>> The only other thing you could do would be to use a modified local
>> copy of Proton that ignores the server's advertised 4-minute timeout
>> value and uses something else for remoteIdleTimeout in its 'tick'
>> calculations, to try to force it to send even more often than it does,
>> and see if that makes any difference. It shouldn't.
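
[Editor's note: the remoteIdleTimeout 'tick' arithmetic referred to above can
be sketched in isolation. This is a minimal illustration of the pessimistic
halving behaviour described in this thread, not the actual proton-j
internals; all names here are invented for the example.]

```java
// Sketch of the heartbeat-deadline arithmetic discussed in the thread.
// Proton pessimistically assumes the peer did NOT halve its real timeout
// when advertising it, so it aims to send heartbeats within half the
// advertised value. Names are illustrative, not proton-j's actual API.
public class HeartbeatDeadline {

    /** Milliseconds until the next heartbeat must be sent, given the
     *  peer's advertised idle-timeout and how long we have been idle. */
    static long nextHeartbeatDelay(long remoteIdleTimeoutMillis, long idleMillis) {
        long sendInterval = remoteIdleTimeoutMillis / 2; // pessimistic halving
        return Math.max(0L, sendInterval - idleMillis);
    }

    public static void main(String[] args) {
        // Service Bus advertises 240000 ms (4 minutes); with no other
        // traffic, a heartbeat is due within 120000 ms (2 minutes).
        System.out.println(nextHeartbeatDelay(240_000L, 0L));
        // Already idle longer than the send interval: send immediately.
        System.out.println(nextHeartbeatDelay(240_000L, 150_000L));
    }
}
```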
>>
>> I think you'll need to seek support from Microsoft in either case. The
>> client received an advertised server idle-timeout of 4 minutes, and
>> generated heartbeats or real traffic at a gap of no more than 2.5
>> minutes with that change in place, yet the server still eventually
>> closed the connection for being idle even though the timeout was never
>> breached, regardless of whether the advertised value was half the real
>> timeout or not. That it appears to behave differently when there are
>> multiple connections is further strangeness, since this is a
>> per-connection mechanism.
>>
>> Robbie
>>
>> On 4 July 2016 at 15:21, Dale Green <gr...@gmail.com> wrote:
>> > Hi Robbie,
>> >
>> > Thank you for your efforts!
>> >
>> > I hardcoded this scheduling to be 30 sec. Then the behavior is as
>> follows:
>> > - the first empty packet is sent after 2.5 min;
>> > - after that, the heartbeats are sent every 2 min.
>> >
>> > However, this doesn't help, as the broker closes the connections (all
>> but
>> > the first one, when there are multiple connections!) 5 min after "SENT
>> > flow". Exactly like before.
>> >
>> > So, I am still trying to find out what makes Service Bus unhappy.
>> >
>> >
>> >
>> > On Mon, Jul 4, 2016 at 1:45 PM, Robbie Gemmell <
>> robbie.gemmell@gmail.com>
>> > wrote:
>> >
>> >> Hi Dale,
>> >>
>> >> I tried to reproduce this on Friday using the existing idle-timeout
>> >> tests for the client and wasn't able to, the first heartbeat was sent
>> >> after 2 minutes, but upon a further look today I see why and tweaking
>> >> the test it now goes out after 4 and I can see what is going on.
>> >>
>> >> The way the client drives the idleTimeout processing in Proton results
>> >> in a corner case where it can take up to the full timeout period
>> >> advertised by the server before the first heartbeat is sent after
>> >> prior non-idle activity, rather than pessimistically sending at a
>> >> faster rate as it otherwise does. As Service Bus seems (your latest
>> >> mail makes this less clear) not to have followed the path of
>> >> advertising half its real timeout, as recommended (but annoyingly not
>> >> absolutely required) by the AMQP spec specifically to avoid this
>> >> scenario, there is a race between that heartbeat being sent and
>> >> whether it arrives in time before Service Bus decides to kill the
>> >> connection. The retransmission you mentioned seeing could be enough
>> >> of a delay to ensure that it didn't.
>> >>
>> >> I need to have a closer look at Proton to see if we can update it to
>> >> better accommodate how the client drives the timeout processing, as
>> >> well as the other usage pattern, without introducing undesirable
>> >> behaviour for either or changing the API.
>> >>
>> >> I do however think I see a way to alter the client to avoid the issue
>> >> in the meantime. Try the change at https://paste.apache.org/1oCe
>> >> (which applies to master or 0.10.0 of
>> >> https://github.com/apache/qpid-jms, and shows the code areas to look
>> >> at, as you were asking). It makes the client 'tick' Proton to process
>> >> the timeout handling more often than it normally would or should be
>> >> necessary, which in this case ensures the heartbeat is actually sent
>> >> before the peer's full advertised timeout is reached. In your case the
>> >> patch should make the outlier get sent after at most 3 minutes.
>> >>
>> >> We have previously discussed stopping Proton from sending heartbeats
>> >> much faster than necessary, as it currently does, because it results in
>> >> a significant amount of unnecessary traffic/processing on servers with
>> >> lots of idle clients and/or servers advertising smaller timeouts. If
>> >> we do that, I think we would still try to send them a little faster
>> >> than necessary to avoid spurious timeouts against servers that don't
>> >> advertise half their actual timeout, but I think it would still make
>> >> sense to add an override allowing it to be toggled to send more often
>> >> than it otherwise would, since that would help in situations such as
>> >> this, plus another that came up previously.
>> >>
>> >> Robbie
>> >>
>> >> On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
>> >> > Hi Robbie,
>> >> >
>> >> > I was debugging a bit and saw that the broker behaves differently if
>> >> there
>> >> > is a single connection or multiple connections. That is, when I have
>> only
>> >> > one connection, it is not closed even after 5 minutes without
>> heartbeats.
>> >> > So, I'll try to observe the behavior with 2 or more connections now.
>> >> >
>> >> > However, I am appending now the Qpid logs of a simple consumer, so
>> you
>> >> can
>> >> > see the client behavior. For me, it is still not clear why we don't
>> have
>> >> a
>> >> > heartbeat at 09:46:56,838. Can you tell me where in the code this
>> >> > functionality could be changed, so I may try a quick fix, to see
>> what the
>> >> > broker would do then?
>> >> >
>> >> > 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Getting SSLContext instance using
>> >> > protocol: TLS
>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Default protocols from the
>> SSLEngine:
>> >> > [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Disabled protocols: [SSLv2Hello,
>> SSLv3]
>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
>> >> > TLSv1.2]
>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Default cipher suites from the
>> >> SSLEngine:
>> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
>> TLS_RSA_WITH_AES_256_CBC_SHA256,
>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_RSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> >> > TransportSupport               - Enabled cipher suites:
>> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
>> TLS_RSA_WITH_AES_256_CBC_SHA256,
>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
>> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_RSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
>> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,
>> SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> >> > 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - Channel has become active! Channel
>> is
>> >> [id:
>> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>> >> > 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
>> >> > NettySslTransport              - SSL Handshake has completed: [id:
>> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>> >> > 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
>> >> > 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: CONNECTION_INIT
>> >> > 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 39 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
>> >> > 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event:
>> CONNECTION_LOCAL_OPEN
>> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
>> >> > SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism
>> because
>> >> > the available credentials are not sufficient
>> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
>> >> > SaslMechanismFinder            - Best match for SASL auth was:
>> SASL-PLAIN
>> >> > 2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 50 bytes
>> >> > 2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 26 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
>> >> > 2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Open{
>> >> > containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
>> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> >> > idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
>> >> > offeredCapabilities=null,
>> >> > desiredCapabilities=[sole-connection-for-container],
>> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
>> >> > 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> >> > 2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 256 bytes
>> >> > 2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> >> > 2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 72 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
>> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV: Open{
>> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> >> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>> >> > desiredCapabilities=null, properties=null}
>> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event:
>> CONNECTION_REMOTE_OPEN
>> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
>> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
>> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> >> > properties=null}
>> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>> >> > 2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV: Begin{remoteChannel=0,
>> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047,
>> handleMax=7,
>> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event:
>> SESSION_REMOTE_OPEN
>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
>> >> > AmqpConnectionBuilder          - AmqpConnection {
>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck being initiated,
>> >> initial
>> >> > delay: 120000
>> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
>> >> > JmsConnection                  - Connection
>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote
>> Broker:
>> >> > amqps://10.10.0.62:5671/?transport.verifyHost=false
>> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
>> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> >> > properties=null}
>> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>> >> > 2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV: Begin{remoteChannel=1,
>> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047,
>> handleMax=7,
>> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event:
>> SESSION_REMOTE_OPEN
>> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT:
>> >> >
>> >>
>> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED,
>> rcvSettleMode=FIRST,
>> >> > source=Source{address='ServiceBusDefaultNamespace/test2',
>> durable=NONE,
>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> amqp:rejected:list,
>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> capabilities=null},
>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
>> >> > maxMessageSize=null, offeredCapabilities=null,
>> desiredCapabilities=null,
>> >> > properties=null}
>> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 285 bytes
>> >> > 2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 413 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
>> >> > 2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV:
>> >> >
>> >>
>> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> >> > source=Source{address='ServiceBusDefaultNamespace/test2',
>> durable=NONE,
>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> amqp:rejected:list,
>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> capabilities=null},
>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>> >> > maxMessageSize=262144, offeredCapabilities=null,
>> >> desiredCapabilities=null,
>> >> >
>> >>
>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: LINK_INIT
>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
>> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
>> >> > 2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
>> >> > AmqpConsumer                   - Consumer
>> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional
>> >> credit:
>> >> > 10
>> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Flow{nextIncomingId=1,
>> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>> >> handle=0,
>> >> > deliveryCount=0, linkCredit=10, available=null, drain=false,
>> echo=false,
>> >> > properties=null}
>> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 32 bytes
>> >> > 2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> >> > 2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV: Empty Frame
>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
>> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> >> > 2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - RECV: Empty Frame
>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>> >> > FRAMES                         - SENT: Empty Frame
>> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>> >> > NettyTcpTransport              - Attempted write of: 8 bytes
>> >> > 2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
>> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> >> delay:
>> >> > 120000
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <
>> robbie.gemmell@gmail.com
>> >> >
>> >> > wrote:
>> >> >
>> >> >> Hi Dale,
>> >> >>
>> >> >> It looks like you might have snipped some of the logs. If you
>> removed
>> >> >> any frame logging we will need to see that too (well, at least what
>> >> >> frames and when) to better reason about what the client is (or is
>> not)
>> >> >> doing, as heartbeat frames are only sent in the absence of other
>> >> >> frames. It would also be good to enable trace on logger
>> >> >> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
>> >> >> from the idle timeout processing task.
>> >> >>
>> >> >> The decision of whether to send a heartbeat frame (or close the
>> >> >> connection due to not receiving any frames) is taken by the Qpid
>> >> >> Proton engine the client uses, with the client scheduling the
>> >> >> processing task needed for it to do that. There isn't currently a
>> way
>> >> >> to make it send heartbeats more often than Proton decides it needs
>> to
>> >> >> based on what the server advertises and what other activity is
>> >> >> occurring.
>> >> >>
>> >> >> The AMQP spec indicates
>> >> >> (
>> >> >>
>> >>
>> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out
>> >> >> )
>> >> >> that peers SHOULD advertise half their actual timeout to avoid
>> >> >> spurious timeouts. It would seem like perhaps Service Bus isn't doing
>> >> >> that, since it advertised 4 minutes and the logs indicate the
>> >> >> connection was closed in less than 5 minutes, as opposed to at least
>> >> >> 8.
>> >> >>
>> >> >> That said, Proton actually pessimistically assumes that the peer
>> hasn't
>> >> >> advertised half its timeout and so uses half the peer's advertised
>> >> >> value, meaning it should send heartbeats at either twice or four
>> times
>> >> >> the frequency of the peers actual timeout if no other frames are
>> being
>> >> >> sent. I see you have a 121sec timeout specified client side (given
>> the
>> >> >> client advertise idle-timeout of 60.5s) too, and that should mean
>> the
>> >> >> client should be doing the idle-timeout processing at least that
>> >> >> often, with the absence of any heartbeat frames being sent
>> suggesting
>> >> >> Proton thinks it doesn't need to.
>> >> >>
>> >> >> I'll take a look at the heartbeat generation stuff in Proton, but if
>> >> >> you could send the updated logging that would be good.
>> >> >>
>> >> >> Robbie
>> >> >>
>> >> >> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com>
>> wrote:
>> >> >> > Hi people,
>> >> >> >
>> >> >> > I have another problem with Service Bus and Qpid JMS 0.9.0.
>> >> >> >
>> >> >> > The broker uses 4 minutes idle timeout. With the sample code in
>> the
>> >> >> > examples, I verified that the client sends the first empty frame
>> >> exactly
>> >> >> 4
>> >> >> > minutes after the consumer has connected. After that, Qpid sends
>> an
>> >> empty
>> >> >> > frame every 2 minutes.
>> >> >> >
>> >> >> > However in the application, I am working on (existing code),
>> Service
>> >> Bus
>> >> >> > drops the connections too often (when there are no messages sent)
>> >> because
>> >> >> > of inactivity. I think the problem is that Qpid waits for the
>> whole 4
>> >> >> > minutes before sending the first empty frame. Moreover, I see
>> always
>> >> >> (when
>> >> >> > the problem occurs) TCP retransmission for exactly this frame
>> which
>> >> maybe
>> >> >> > delays it for 100-200ms. Therefore, dropping the connection is
>> perhaps
>> >> >> > correct.
>> >> >> >
>> >> >> > I am appending some Qpid traces down here. My questions are:
>> >> >> > - Is the broker's behavior correct?
>> >> >> > - How can I tell Qpid to send the heartbeats more often to
>> probably
>> >> solve
>> >> >> > this problem?
>> >> >> >
>> >> >> > Thanks!
>> >> >> >
>> >> >> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
>> >> >> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
>> >> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> >> >> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
>> >> >> > offeredCapabilities=null,
>> >> >> > desiredCapabilities=[sole-connection-for-container],
>> >> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM:
>> 1.7.0_80,
>> >> >> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> >> >> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
>> >> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7',
>> hostname='null',
>> >> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> >> >> > outgoingLocales=null, incomingLocales=null,
>> offeredCapabilities=null,
>> >> >> > desiredCapabilities=null, properties=null}
>> >> >> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
>> >> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> >> >> > handleMax=65535, offeredCapabilities=null,
>> desiredCapabilities=null,
>> >> >> > properties=null}
>> >> >> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
>> >> >> nextOutgoingId=1,
>> >> >> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> >> >> > offeredCapabilities=null, desiredCapabilities=null,
>> properties=null}
>> >> >> > 11:48:38,144 TRACE [FRAMES] SENT:
>> >> >> >
>> >> >>
>> >>
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> >> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED,
>> rcvSettleMode=FIRST,
>> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
>> durable=NONE,
>> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> >> > defaultOutcome=Modified{deliveryFailed=true,
>> undeliverableHere=null,
>> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> >> amqp:rejected:list,
>> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> >> > target=Target{address='null', durable=NONE,
>> expiryPolicy=SESSION_END,
>> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> >> capabilities=null},
>> >> >> > unsettled=null, incompleteUnsettled=false,
>> initialDeliveryCount=null,
>> >> >> > maxMessageSize=null, offeredCapabilities=null,
>> >> desiredCapabilities=null,
>> >> >> > properties=null}
>> >> >> > 11:48:38,179 TRACE [FRAMES] RECV:
>> >> >> >
>> >> >>
>> >>
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> >> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED,
>> rcvSettleMode=FIRST,
>> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
>> durable=NONE,
>> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> >> > defaultOutcome=Modified{deliveryFailed=true,
>> undeliverableHere=null,
>> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> >> amqp:rejected:list,
>> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> >> > target=Target{address='null', durable=NONE,
>> expiryPolicy=SESSION_END,
>> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> >> capabilities=null},
>> >> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>> >> >> > maxMessageSize=262144, offeredCapabilities=null,
>> >> >> desiredCapabilities=null,
>> >> >> >
>> >> >>
>> >>
>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
>> >> >> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
>> >> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>> >> >> handle=0,
>> >> >> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
>> >> >> echo=false,
>> >> >> > properties=null}
>> >> >> > ...
>> >> >> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
>> >> >> > ...
>> >> >> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
>> >> >> > ...
>> >> >> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
>> >> >> > ...
>> >> >> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
>> >> >> > ...
>> >> >> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
>> >> >> > (TCP Retransmission occurs within 100-200ms)
>> >> >> > ...
>> >> >> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
>> >> >> > ...
>> >> >> > 11:53:38,009 TRACE [FRAMES] RECV:
>> >> >> > Close{error=Error{condition=amqp:connection:forced,
>> description='Die
>> >> >> > Verbindung war länger als zulässig
>> >> >> >
>> >> >>
>> >>
>> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
>> >> >> > 11:53:27', info=null}}
>> >> >>
>> >> >>
>> ---------------------------------------------------------------------
>> >> >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>> >> >> For additional commands, e-mail: users-help@qpid.apache.org
>> >> >>
>> >> >>
>

Re: Qpid JMS: client inactivity problem

Posted by Dale Green <gr...@gmail.com>.
Hi Robbie,

It seems that I made multiple mistakes in evaluating the problems, but I
finally managed to reproduce it with some very simple code.

It looks like this timeout after 5 minutes always happens when there is no
consumer using the session. That is, we see only Open, Begin, and Empty
frames, no Attach.

Can you tell me whether this is correct broker behavior, or whether I am
using the client in the wrong way? This is existing code, and such
connections are used only for sending messages, so consumers are created on
demand.

Thanks!
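The reproducer boils down to something like the following sketch (the broker
address, credentials, and class name here are placeholders mirroring the
trace below, not the exact code that was run; it needs a live AMQP 1.0
broker to do anything):

```java
import javax.jms.Connection;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Session;

import org.apache.qpid.jms.JmsConnectionFactory;

public class IdleTimeoutRepro {
    public static void main(String[] args) throws Exception {
        // Placeholder URI/credentials mirroring the trace below.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "amqps://10.10.0.62:5671?transport.verifyHost=false");
        Connection connection = factory.createConnection("user", "password");
        connection.setExceptionListener(new ExceptionListener() {
            @Override
            public void onException(JMSException e) {
                System.out.println("Connection ExceptionListener fired, exiting.");
                e.printStackTrace();
                System.exit(1);
            }
        });
        connection.start();
        // Create a session but never attach a consumer or producer, so only
        // Open/Begin and heartbeat frames cross the wire.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Thread.sleep(10 * 60 * 1000); // wait past the broker's idle window
        session.close();
        connection.close();
    }
}
```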

2016-07-05 10:50:48,978 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Getting SSLContext instance using
protocol: TLS
2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Default protocols from the SSLEngine:
[SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
TLSv1.2]
2016-07-05 10:50:49,195 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Default cipher suites from the SSLEngine:
[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2016-07-05 10:50:49,196 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Enabled cipher suites:
[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2016-07-05 10:50:49,229 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - Channel has become active! Channel is [id:
0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
2016-07-05 10:50:49,356 [ntLoopGroup-2-1] - TRACE
NettySslTransport              - SSL Handshake has completed: [id:
0x5d81e8cf, /10.10.0.180:38039 => /10.10.0.62:5671]
2016-07-05 10:50:49,401 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-05 10:50:49,404 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_INIT
2016-07-05 10:50:49,405 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 39 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
2016-07-05 10:50:49,405 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
2016-07-05 10:50:49,411 [rifyHost=false]] - DEBUG
SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
the available credentials are not sufficient
2016-07-05 10:50:49,411 [rifyHost=false]] - INFO
SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
2016-07-05 10:50:49,412 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 50 bytes
2016-07-05 10:50:49,415 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 26 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Open{
containerId='ID:d572f504-5ef7-4d72-9022-ee0400dbd194:1',
hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
offeredCapabilities=null,
desiredCapabilities=[sole-connection-for-container],
properties={product=QpidJMS, version=0.11.0-SNAPSHOT, platform=JVM:
1.7.0_101, 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
2016-07-05 10:50:49,418 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 265 bytes
2016-07-05 10:50:49,419 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
2016-07-05 10:50:49,421 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 72 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Open{
containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
desiredCapabilities=null, properties=null}
2016-07-05 10:50:49,421 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_INIT
2016-07-05 10:50:49,424 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Begin{remoteChannel=null,
nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
2016-07-05 10:50:49,426 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 32 bytes
2016-07-05 10:50:49,427 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 34 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Begin{remoteChannel=0,
nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
offeredCapabilities=null, desiredCapabilities=null, properties=null}
2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
2016-07-05 10:50:49,427 [rifyHost=false]] - DEBUG
AmqpConnectionBuilder          - AmqpConnection {
ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 } is now open:
2016-07-05 10:50:49,427 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck being initiated, initial
delay: 30000
2016-07-05 10:50:49,428 [rifyHost=false]] - INFO
JmsConnection                  - Connection
ID:f5f875e9-3e36-484f-8480-5d9b1305eb56:1 connected to remote Broker:
amqps://10.10.0.62:5671/?transport.verifyHost=false
2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Begin{remoteChannel=null,
nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
2016-07-05 10:50:49,437 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 32 bytes
2016-07-05 10:50:49,438 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 34 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Begin{remoteChannel=1,
nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
offeredCapabilities=null, desiredCapabilities=null, properties=null}
2016-07-05 10:50:49,438 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_INIT
2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
2016-07-05 10:50:49,439 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
2016-07-05 10:51:19,428 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:51:49,429 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:52:19,429 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:52:49,429 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-05 10:53:19,430 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-05 10:53:19,431 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:53:49,431 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:54:19,431 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:54:49,432 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-05 10:55:19,432 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-05 10:55:19,433 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
30000
2016-07-05 10:55:49,410 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 15 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 15, cap: 53)
2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
FRAMES                         - RECV: End{error=null}
2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
2016-07-05 10:55:49,411 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 190 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 190, cap: 229)
2016-07-05 10:55:49,411 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
FRAMES                         - SENT: End{error=null}
2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 12 bytes
2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
FRAMES                         - RECV: End{error=null}
2016-07-05 10:55:49,412 [rifyHost=false]] - TRACE
FRAMES                         - RECV:
Close{error=Error{condition=amqp:connection:forced, description='Die
Verbindung war länger als zulässig
inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
08:55:41', info=null}}
2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_CLOSE
2016-07-05 10:55:49,413 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_CLOSE
Connection ExceptionListener fired, exiting.
javax.jms.JMSException: Die Verbindung war länger als zulässig
inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:05.07.2016
08:55:41 [condition = amqp:connection:forced]
    at
org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:148)
    at
org.apache.qpid.jms.provider.amqp.AmqpSupport.convertToException(AmqpSupport.java:103)
    at
org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.remotelyClosed(AmqpAbstractResource.java:147)
    at
org.apache.qpid.jms.provider.amqp.AmqpAbstractResource.processRemoteClose(AmqpAbstractResource.java:242)
    at
org.apache.qpid.jms.provider.amqp.AmqpProvider.processUpdates(AmqpProvider.java:771)
    at
org.apache.qpid.jms.provider.amqp.AmqpProvider.access$1900(AmqpProvider.java:92)
    at
org.apache.qpid.jms.provider.amqp.AmqpProvider$17.run(AmqpProvider.java:699)
    at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: TRANSPORT_TAIL_CLOSED
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_CLOSE
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_CLOSE
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
FRAMES                         - SENT: End{error=null}
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Close{error=null}
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 24 bytes
2016-07-05 10:55:49,416 [rifyHost=false]] - TRACE
AmqpProvider                   - Cancelling scheduled IdleTimeoutCheck
2016-07-05 10:55:49,419 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - Channel has gone inactive! Channel is [id:
0x5d81e8cf, /10.10.0.180:38039 :> /10.10.0.62:5671]
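For completeness: the idleTimeOut=500000 the client advertises in its Open
frame above can be configured through the connection URI's amqp.idleTimeout
option (in milliseconds). A sketch, with the host from the trace as a
placeholder; note this option only sets the timeout the client itself
advertises and enforces on incoming traffic, while the heartbeat send rate
is derived by Proton from the server's advertised 240000 ms value:

```java
public class ClientIdleTimeoutUri {
    public static void main(String[] args) {
        // Placeholder host; amqp.idleTimeout is the Qpid JMS URI option (in
        // milliseconds) controlling the idle-timeout the client advertises
        // in its Open frame. Pass the resulting URI to JmsConnectionFactory.
        String uri = "amqps://10.10.0.62:5671"
                + "?transport.verifyHost=false"
                + "&amqp.idleTimeout=500000";
        System.out.println(uri);
    }
}
```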






On Mon, Jul 4, 2016 at 5:01 PM, Robbie Gemmell <ro...@gmail.com>
wrote:

> Hi Dale,
>
> There doesn't seem to be much more to try on the client side itself.
> The only other thing you could do would be to use a modified local
> copy of Proton to ignore the server's advertised 4-minute timeout value
> and use something else for remoteIdleTimeout in its 'tick'
> calculations to try and force it to send even more often than it
> does, and see if that makes any difference. It shouldn't.
>
> I think you'll need to seek support from Microsoft in either case. The
> client received an advertised server idle-timeout of 4
> minutes, and generated heartbeats or real traffic at gaps of no more
> than 2.5 minutes with that change in place, and the server still
> eventually closed the connection for being idle even though it never
> breached the timeout, regardless of whether the advertised value was half
> the real timeout or not. That it appears to behave differently when
> there are multiple connections is further strangeness, since this is a
> per-connection mechanism.
>
> Robbie
>
> On 4 July 2016 at 15:21, Dale Green <gr...@gmail.com> wrote:
> > Hi Robbie,
> >
> > Thank you for your efforts!
> >
> > I hardcoded this scheduling to be 30 sec. Then the behavior is as
> follows:
> > - the first empty packet is sent after 2.5 min;
> > - after that, the heartbeats are sent every 2 min.
> >
> > However, this doesn't help, as the broker closes the connections (all but
> > the first one, when there are multiple connections!) 5 min after "SENT
> > flow". Exactly like before.
> >
> > So, I am still trying to find out what makes Service Bus unhappy.
> >
> >
> >
> > On Mon, Jul 4, 2016 at 1:45 PM, Robbie Gemmell <robbie.gemmell@gmail.com
> >
> > wrote:
> >
> >> Hi Dale,
> >>
> >> I tried to reproduce this on Friday using the existing idle-timeout
> >> tests for the client and wasn't able to, the first heartbeat was sent
> >> after 2 minutes, but on a further look today I saw why; after tweaking
> >> the test it now goes out after 4, and I can see what is going on.
> >>
> >> The way the client drives the idleTimeout processing in Proton results
> >> in a corner case that makes it possible for it to take up to the full
> >> timeout period advertised by the server before the first heartbeat is
> >> sent after prior non-idle activity, rather than pessimistically
> >> sending at a faster rate as it otherwise does. As ServerBus seems
> >> (your latest mail maks this less clear) to have not followed the path
> >> of advertising half its real timeout as recommended (but annoyingly
> >> not absolutely required) by the AMQP spec specifically to avoid this
> >> scenario, this creates a situation where there is a race between that
> >> heartbeat being sent and whether it arrives in time before ServiceBus
> >> decides to kill the connection. The retransmission you mentioned
> >> seeing could be enough of a delay to ensure that it didn't.
> >>
> >> I need to have a closer look at Proton to see if we can update it to
> >> better accommodate how the client drives the timeout processing as well
> >> as the other usage pattern, without introducing undesirable behaviour
> >> for either or changing the API.
> >>
> >> I do however think I see a way to alter the client to avoid the issue
> >> in the meantime. Try the change at https://paste.apache.org/1oCe
> >> (which applies to master or 0.10.0 of
> >> https://github.com/apache/qpid-jms, and shows the code areas to look
> >> at as you were asking), it makes the client 'tick' Proton to process
> >> the timeout handling more often than it normally would / should be
> >> necessary, which in this case ensures the heartbeat is actually sent
> >> before the peers full advertised timeout is reached. In your case the
> >> patch should make the outlier get sent after at most 3 minutes.
> >>
> >> We have previously discussed stopping Proton from sending heartbeats
> >> much faster than necessary as it currently does, because it results in
> >> a significant amount of unnecessary traffic/processing on servers with
> >> lots of idle clients and/or advertising smaller timeouts. If we do
> >> that, I think we would still try to send them a little faster than
> >> necessary to avoid spurious timeouts against servers that don't
> >> advertise half their actual timeout, but I think it would still make
> >> sense for us to add an override to allow toggling it to send more
> >> often than it otherwise would, since it would help in situations such
> >> as this plus another that came up previously.
> >>
> >> Robbie
> >>
> >> On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
> >> > Hi Robbie,
> >> >
> >> > I was debugging a bit and saw that the broker behaves differently if
> >> there
> >> > is a single connection or multiple connections. That is, when I have
> only
> >> > one connection, it is not closed even after 5 minutes without
> heartbeats.
> >> > So, I'll try to observe the behavior with 2 or more connections now.
> >> >
> >> > However, I am appending now the Qpid logs of a simple consumer, so you
> >> can
> >> > see the client behavior. For me, it is still not clear why we don't
> have
> >> a
> >> > heartbeat at 09:46:56,838. Can you tell me where in the code this
> >> > functionality could be changed, so I may try a quick fix, to see what
> the
> >> > broker would do then?
> >> >
> >> > 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Getting SSLContext instance using
> >> > protocol: TLS
> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Default protocols from the SSLEngine:
> >> > [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Disabled protocols: [SSLv2Hello,
> SSLv3]
> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
> >> > TLSv1.2]
> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Default cipher suites from the
> >> SSLEngine:
> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
> TLS_RSA_WITH_AES_256_CBC_SHA256,
> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_RSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> >> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> >> > TransportSupport               - Enabled cipher suites:
> >> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384,
> TLS_RSA_WITH_AES_256_CBC_SHA256,
> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> >> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
> >> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> >> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
> >> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> >> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> >> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_RSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
> >> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> >> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
> >> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> >> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> >> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
> >> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> >> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> >> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> >> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> >> > 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - Channel has become active! Channel is
> >> [id:
> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> >> > 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
> >> > NettySslTransport              - SSL Handshake has completed: [id:
> >> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> >> > 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
> >> > 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: CONNECTION_INIT
> >> > 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 39 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
> >> > 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event:
> CONNECTION_LOCAL_OPEN
> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
> >> > SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism
> because
> >> > the available credentials are not sufficient
> >> > 2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
> >> > SaslMechanismFinder            - Best match for SASL auth was:
> SASL-PLAIN
> >> > 2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 50 bytes
> >> > 2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 26 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
> >> > 2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Open{
> >> > containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> >> > idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
> >> > offeredCapabilities=null,
> >> > desiredCapabilities=[sole-connection-for-container],
> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
> >> > 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> >> > 2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 256 bytes
> >> > 2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> >> > 2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 72 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV: Open{
> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> >> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
> >> > desiredCapabilities=null, properties=null}
> >> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event:
> CONNECTION_REMOTE_OPEN
> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
> >> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> >> > properties=null}
> >> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 32 bytes
> >> > 2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV: Begin{remoteChannel=0,
> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047,
> handleMax=7,
> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
> >> > AmqpConnectionBuilder          - AmqpConnection {
> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck being initiated,
> >> initial
> >> > delay: 120000
> >> > 2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
> >> > JmsConnection                  - Connection
> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote Broker:
> >> > amqps://10.10.0.62:5671/?transport.verifyHost=false
> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Begin{remoteChannel=null,
> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> >> > properties=null}
> >> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 32 bytes
> >> > 2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 34 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV: Begin{remoteChannel=1,
> >> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047,
> handleMax=7,
> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_INIT
> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> >> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT:
> >> >
> >>
> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> >> > source=Source{address='ServiceBusDefaultNamespace/test2',
> durable=NONE,
> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> amqp:rejected:list,
> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> capabilities=null},
> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
> >> > maxMessageSize=null, offeredCapabilities=null,
> desiredCapabilities=null,
> >> > properties=null}
> >> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 285 bytes
> >> > 2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 413 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
> >> > 2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV:
> >> >
> >>
> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> >> > source=Source{address='ServiceBusDefaultNamespace/test2',
> durable=NONE,
> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> amqp:rejected:list,
> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> capabilities=null},
> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
> >> > maxMessageSize=262144, offeredCapabilities=null,
> >> desiredCapabilities=null,
> >> >
> >>
> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: LINK_INIT
> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
> >> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
> >> > 2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
> >> > AmqpConsumer                   - Consumer
> >> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional
> >> credit:
> >> > 10
> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Flow{nextIncomingId=1,
> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
> >> handle=0,
> >> > deliveryCount=0, linkCredit=10, available=null, drain=false,
> echo=false,
> >> > properties=null}
> >> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 32 bytes
> >> > 2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> >> > 2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV: Empty Frame
> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
> >> > NettyTcpTransport              - New data read: 8 bytes incoming:
> >> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> >> > 2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
> >> > FRAMES                         - RECV: Empty Frame
> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
> >> > FRAMES                         - SENT: Empty Frame
> >> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
> >> > NettyTcpTransport              - Attempted write of: 8 bytes
> >> > 2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
> >> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> >> delay:
> >> > 120000
> >> >
> >> >
> >> >
> >> >
> >> > On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell
> >> > <robbie.gemmell@gmail.com> wrote:
> >> >
> >> >> Hi Dale,
> >> >>
> >> >> It looks like you might have snipped some of the logs. If you removed
> >> >> any frame logging we will need to see that too (well, at least what
> >> >> frames and when) to better reason about what the client is (or is
> >> >> not) doing, as heartbeat frames are only sent in the absence of other
> >> >> frames. It would also be good to enable trace on logger
> >> >> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
> >> >> from the idle timeout processing task.
> >> >>
> >> >> The decision of whether to send a heartbeat frame (or close the
> >> >> connection due to not receiving any frames) is taken by the Qpid
> >> >> Proton engine the client uses, with the client scheduling the
> >> >> processing task needed for it to do that. There isn't currently a way
> >> >> to make it send heartbeats more often than Proton decides it needs
> >> >> to, based on what the server advertises and what other activity is
> >> >> occurring.
> >> >>
> >> >> The AMQP spec indicates
> >> >> (
> >> >>
> >>
> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out
> >> >> )
> >> >> that peers SHOULD advertise half their actual timeout to avoid
> >> >> spurious timeouts. It would seem that perhaps Service Bus isn't doing
> >> >> that, since it advertised 4 minutes and the logs indicate the
> >> >> connection was closed in less than 5 minutes, as opposed to at least
> >> >> 8.
> >> >>
> >> >> That said, Proton actually pessimistically assumes that the peer
> >> >> hasn't advertised half its timeout and so uses half the peer's
> >> >> advertised value, meaning it should send heartbeats at either twice
> >> >> or four times the frequency of the peer's actual timeout if no other
> >> >> frames are being sent. I see you have a 121 sec timeout specified
> >> >> client side (given the client's advertised idle-timeout of 60.5s)
> >> >> too, and that should mean the client should be doing the idle-timeout
> >> >> processing at least that often, with the absence of any heartbeat
> >> >> frames being sent suggesting Proton thinks it doesn't need to.
> >> >>
> >> >> I'll take a look at the heartbeat generation stuff in Proton, but if
> >> >> you could send the updated logging that would be good.
> >> >>
> >> >> Robbie
> >> >>
> >> >> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com>
> wrote:
> >> >> > Hi people,
> >> >> >
> >> >> > I have another problem with Service Bus and Qpid JMS 0.9.0.
> >> >> >
> >> >> > The broker uses a 4 minute idle timeout. With the sample code in
> >> >> > the examples, I verified that the client sends the first empty
> >> >> > frame exactly 4 minutes after the consumer has connected. After
> >> >> > that, Qpid sends an empty frame every 2 minutes.
> >> >> >
> >> >> > However, in the application I am working on (existing code),
> >> >> > Service Bus drops the connections too often (when there are no
> >> >> > messages sent) because of inactivity. I think the problem is that
> >> >> > Qpid waits for the whole 4 minutes before sending the first empty
> >> >> > frame. Moreover, whenever the problem occurs I see a TCP
> >> >> > retransmission for exactly this frame, which may delay it by
> >> >> > 100-200ms. Therefore, dropping the connection is perhaps correct.
> >> >> >
> >> >> > I am appending some Qpid traces down here. My questions are:
> >> >> > - Is the broker's behavior correct?
> >> >> > - How can I tell Qpid to send the heartbeats more often, to
> >> >> >   probably solve this problem?
> >> >> >
> >> >> > Thanks!
> >> >> >
> >> >> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
> >> >> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
> >> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> >> >> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
> >> >> > offeredCapabilities=null,
> >> >> > desiredCapabilities=[sole-connection-for-container],
> >> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
> >> >> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> >> >> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
> >> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7',
> hostname='null',
> >> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> >> >> > outgoingLocales=null, incomingLocales=null,
> offeredCapabilities=null,
> >> >> > desiredCapabilities=null, properties=null}
> >> >> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
> >> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> >> >> > handleMax=65535, offeredCapabilities=null,
> desiredCapabilities=null,
> >> >> > properties=null}
> >> >> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
> >> >> nextOutgoingId=1,
> >> >> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> >> >> > offeredCapabilities=null, desiredCapabilities=null,
> properties=null}
> >> >> > 11:48:38,144 TRACE [FRAMES] SENT:
> >> >> >
> >> >>
> >>
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> >> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED,
> rcvSettleMode=FIRST,
> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
> durable=NONE,
> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> >> > defaultOutcome=Modified{deliveryFailed=true,
> undeliverableHere=null,
> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> >> amqp:rejected:list,
> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> >> > target=Target{address='null', durable=NONE,
> expiryPolicy=SESSION_END,
> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> >> capabilities=null},
> >> >> > unsettled=null, incompleteUnsettled=false,
> initialDeliveryCount=null,
> >> >> > maxMessageSize=null, offeredCapabilities=null,
> >> desiredCapabilities=null,
> >> >> > properties=null}
> >> >> > 11:48:38,179 TRACE [FRAMES] RECV:
> >> >> >
> >> >>
> >>
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> >> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED,
> rcvSettleMode=FIRST,
> >> >> > source=Source{address='ServiceBusDefaultNamespace/XXX',
> durable=NONE,
> >> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> >> > defaultOutcome=Modified{deliveryFailed=true,
> undeliverableHere=null,
> >> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> >> amqp:rejected:list,
> >> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> >> > target=Target{address='null', durable=NONE,
> expiryPolicy=SESSION_END,
> >> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> >> capabilities=null},
> >> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
> >> >> > maxMessageSize=262144, offeredCapabilities=null,
> >> >> desiredCapabilities=null,
> >> >> >
> >> >>
> >>
> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
> >> >> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
> >> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
> >> >> handle=0,
> >> >> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
> >> >> echo=false,
> >> >> > properties=null}
> >> >> > ...
> >> >> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
> >> >> > ...
> >> >> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
> >> >> > ...
> >> >> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
> >> >> > ...
> >> >> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
> >> >> > ...
> >> >> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
> >> >> > (TCP Retransmission occurs within 100-200ms)
> >> >> > ...
> >> >> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
> >> >> > ...
> >> >> > 11:53:38,009 TRACE [FRAMES] RECV:
> >> >> > Close{error=Error{condition=amqp:connection:forced,
> description='Die
> >> >> > Verbindung war länger als zulässig
> >> >> >
> >> >>
> >>
> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
> >> >> > 11:53:27', info=null}}
> >> >> > (The German description reads, in English: "The connection was
> >> >> > inactive for longer than permitted.")
> >> >>
> >> >> ---------------------------------------------------------------------
> >> >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> >> >> For additional commands, e-mail: users-help@qpid.apache.org
> >> >>
> >> >>
> >>
> >>
>
>
>

Re: Qpid JMS: client inactivity problem

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Dale,

There doesn't seem to be much more to try on the client side itself.
The only other thing you could do would be to use a modified local
copy of Proton that ignores the server's advertised 4 minute timeout
value and uses something else for remoteIdleTimeout in its 'tick'
calculations, to try to force it to send even more often than it
already does, and see if that makes any difference. It shouldn't.
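
To make the arithmetic in this thread concrete, here is a minimal sketch
of the half-remote-timeout heuristic being discussed. It is illustrative
only: the class and method names are hypothetical and do not reflect
Proton's actual API.

```java
// Sketch of the heartbeat-deadline arithmetic discussed in this thread.
// Proton pessimistically assumes the peer did NOT advertise half its
// real timeout, so it aims to emit an empty frame within half of the
// advertised value. Names here are illustrative, not Proton's API.
public final class HeartbeatMath {

    /**
     * @param lastFrameSentMillis     when the client last wrote any frame
     * @param remoteIdleTimeoutMillis idle-timeout the peer advertised in Open
     * @return the latest time a heartbeat should be sent to stay safe
     */
    public static long nextHeartbeatDeadline(long lastFrameSentMillis,
                                             long remoteIdleTimeoutMillis) {
        // Send within half the advertised window to absorb scheduling
        // jitter and TCP retransmission delays.
        return lastFrameSentMillis + remoteIdleTimeoutMillis / 2;
    }

    public static void main(String[] args) {
        // Service Bus advertised 240000 ms (4 minutes) in the logs above,
        // so a heartbeat is due 120000 ms after the last frame.
        System.out.println(nextHeartbeatDeadline(0L, 240_000L));
    }
}
```

With the 240000 ms advertised in the logs, this yields the 2-minute
heartbeat cadence seen once the client settles into a steady state.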

I think you'll need to seek support from Microsoft in either case. The
client received an advertised server idle-timeout of 4 minutes, and
with that change in place it generated heartbeats or real traffic at
gaps of no more than 2.5 minutes, yet the server still eventually
closed the connection for being idle, even though the client never
breached the timeout, regardless of whether the advertised value was
half the real timeout or not. That the server appears to behave
differently when there are multiple connections is further
strangeness, since this is a per-connection mechanism.
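
The race can also be checked with a toy timeline model. This sketch
assumes, as the logs suggest, that the broker enforces roughly the same
240 s it advertises rather than twice that; all names are illustrative.

```java
// Toy timeline check for the idle-timeout race described above: the
// broker advertises 240s but apparently also enforces ~240s, so a first
// heartbeat that only goes out at the full 240s mark, plus a ~200ms TCP
// retransmission delay, arrives too late and the connection is closed.
public final class IdleRaceSketch {

    /** Returns true if every inter-frame gap stays within the broker's timeout. */
    public static boolean survives(long[] frameSendTimesMillis,
                                   long brokerTimeoutMillis) {
        for (int i = 1; i < frameSendTimesMillis.length; i++) {
            if (frameSendTimesMillis[i] - frameSendTimesMillis[i - 1]
                    > brokerTimeoutMillis) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        long brokerTimeout = 240_000L;
        // First heartbeat only after the full advertised period, plus a
        // 200ms retransmission delay: the gap breaches the timeout.
        long[] late = {0L, 240_200L};
        // Heartbeats at half the advertised period, as Proton intends:
        long[] safe = {0L, 120_000L, 240_000L};
        System.out.println(survives(late, brokerTimeout));
        System.out.println(survives(safe, brokerTimeout));
    }
}
```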

Robbie

On 4 July 2016 at 15:21, Dale Green <gr...@gmail.com> wrote:
> Hi Robbie,
>
> Thank you for your efforts!
>
> I hardcoded this scheduling to be 30 sec. Then the behavior is as follows:
> - the first empty packet is sent after 2.5 min;
> - after that, the heartbeats are sent every 2 min.
>
> However, this doesn't help, as the broker closes the connections (all but
> the first one, when there are multiple connections!) 5 min after "SENT
> flow". Exactly like before.
>
> So, I am still trying to find out what makes Service Bus unhappy.
>
>
>
> On Mon, Jul 4, 2016 at 1:45 PM, Robbie Gemmell <ro...@gmail.com>
> wrote:
>
>> Hi Dale,
>>
>> I tried to reproduce this on Friday using the existing idle-timeout
>> tests for the client and wasn't able to; the first heartbeat was sent
>> after 2 minutes. On a further look today I see why, and after
>> tweaking the test it now goes out after 4, so I can see what is
>> going on.
>>
>> The way the client drives the idleTimeout processing in Proton results
>> in a corner case that makes it possible for it to take up to the full
>> timeout period advertised by the server before the first heartbeat is
>> sent after prior non-idle activity, rather than pessimistically
>> sending at a faster rate as it otherwise does. As Service Bus seems
>> (your latest mail makes this less clear) not to have followed the
>> path of advertising half its real timeout, as recommended (but
>> annoyingly not absolutely required) by the AMQP spec specifically to
>> avoid this scenario, there is a race between that heartbeat being
>> sent and whether it arrives in time before Service Bus decides to
>> kill the connection. The retransmission you mentioned seeing could be
>> enough of a delay to ensure that it didn't.
>>
>> I need to have a closer look at Proton to see if we can update it to
>> better accommodate how the client drives the timeout processing, as
>> well as the other usage pattern, without introducing undesirable
>> behaviour for either or changing the API.
>>
>> I do however think I see a way to alter the client to avoid the issue
>> in the meantime. Try the change at https://paste.apache.org/1oCe
>> (which applies to master or 0.10.0 of
>> https://github.com/apache/qpid-jms, and shows the code areas to look
>> at, as you were asking); it makes the client 'tick' Proton to process
>> the timeout handling more often than would normally be necessary,
>> which in this case ensures the heartbeat is actually sent before the
>> peer's full advertised timeout is reached. In your case the patch
>> should make the outlier get sent after at most 3 minutes.
>>
>> We have previously discussed stopping Proton from sending heartbeats
>> much faster than necessary as it currently does, because it results in
>> a significant amount of unnecessary traffic/processing on servers with
>> lots of idle clients and/or servers advertising smaller timeouts. If
>> we do that, I think we would still try to send them a little faster
>> than necessary to avoid spurious timeouts against servers that don't
>> advertise half their actual timeout, but I think it would still make
>> sense to add an override allowing it to send more often than it
>> otherwise would, since that would help in situations such as this,
>> plus another that came up previously.
>>
>> Robbie
>>
>> On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
>> > Hi Robbie,
>> >
>> > I was debugging a bit and saw that the broker behaves differently
>> > if there is a single connection or multiple connections. That is,
>> > when I have only one connection, it is not closed even after 5
>> > minutes without heartbeats. So, I'll try to observe the behavior
>> > with 2 or more connections now.
>> >
>> > However, I am appending now the Qpid logs of a simple consumer, so
>> > you can see the client behavior. For me, it is still not clear why
>> > we don't have a heartbeat at 09:46:56,838. Can you tell me where in
>> > the code this functionality could be changed, so I may try a quick
>> > fix, to see what the broker would do then?
>> >
>> > 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Getting SSLContext instance using
>> > protocol: TLS
>> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Default protocols from the SSLEngine:
>> > [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
>> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
>> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
>> > TLSv1.2]
>> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Default cipher suites from the
>> SSLEngine:
>> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
>> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
>> > TransportSupport               - Enabled cipher suites:
>> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
>> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
>> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
>> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
>> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
>> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
>> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
>> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
>> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
>> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
>> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
>> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
>> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
>> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
>> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
>> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
>> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
>> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
>> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
>> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
>> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
>> > 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - Channel has become active! Channel is
>> [id:
>> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>> > 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
>> > NettySslTransport              - SSL Handshake has completed: [id:
>> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
>> > 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
>> > 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: CONNECTION_INIT
>> > 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 39 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
>> > 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
>> > 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
>> > SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
>> > the available credentials are not sufficient
>> > 2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
>> > SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
>> > 2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 50 bytes
>> > 2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 26 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
>> > 2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Open{
>> > containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
>> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> > idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
>> > offeredCapabilities=null,
>> > desiredCapabilities=[sole-connection-for-container],
>> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
>> > 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> > 2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 256 bytes
>> > 2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> > 2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 72 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
>> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV: Open{
>> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>> > desiredCapabilities=null, properties=null}
>> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
>> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_INIT
>> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Begin{remoteChannel=null,
>> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> > properties=null}
>> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 32 bytes
>> > 2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 34 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV: Begin{remoteChannel=0,
>> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>> > 2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
>> > AmqpConnectionBuilder          - AmqpConnection {
>> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
>> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck being initiated,
>> initial
>> > delay: 120000
>> > 2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
>> > JmsConnection                  - Connection
>> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote Broker:
>> > amqps://10.10.0.62:5671/?transport.verifyHost=false
>> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Begin{remoteChannel=null,
>> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> > properties=null}
>> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 32 bytes
>> > 2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 34 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
>> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV: Begin{remoteChannel=1,
>> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_INIT
>> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
>> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
>> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT:
>> >
>> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
>> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> amqp:rejected:list,
>> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
>> > maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
>> > properties=null}
>> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 285 bytes
>> > 2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 413 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
>> > 2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV:
>> >
>> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
>> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
>> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> amqp:rejected:list,
>> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>> > maxMessageSize=262144, offeredCapabilities=null,
>> desiredCapabilities=null,
>> >
>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
>> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: LINK_INIT
>> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
>> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
>> > 2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
>> > AmqpConsumer                   - Consumer
>> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional
>> credit:
>> > 10
>> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Flow{nextIncomingId=1,
>> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>> handle=0,
>> > deliveryCount=0, linkCredit=10, available=null, drain=false, echo=false,
>> > properties=null}
>> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 32 bytes
>> > 2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> > 2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV: Empty Frame
>> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
>> > NettyTcpTransport              - New data read: 8 bytes incoming:
>> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
>> > 2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
>> > FRAMES                         - RECV: Empty Frame
>> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>> > FRAMES                         - SENT: Empty Frame
>> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
>> > NettyTcpTransport              - Attempted write of: 8 bytes
>> > 2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
>> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
>> delay:
>> > 120000
>> >
>> >
>> >
>> >
>> > On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <robbie.gemmell@gmail.com
>> >
>> > wrote:
>> >
>> >> Hi Dale,
>> >>
>> >> It looks like you might have snipped some of the logs. If you removed
>> >> any frame logging we will need to see that too (well, at least what
>> >> frames and when) to better reason about what the client is (or is not)
>> >> doing, as heartbeat frames are only sent in the absence of other
>> >> frames. It would also be good to enable trace on logger
>> >> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
>> >> from the idle timeout processing task.
>> >>
>> >> The decision of whether to send a heartbeat frame (or close the
>> >> connection due to not receiving any frames) is taken by the Qpid
>> >> Proton engine the client uses, with the client scheduling the
>> >> processing task needed for it to do that. There isn't currently a way
>> >> to make it send heartbeats more often than Proton decides it needs to
>> >> based on what the server advertises and what other activity is
>> >> occurring.
>> >>
>> >> The AMQP spec indicates
>> >> (
>> >>
>> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out
>> >> )
>> >> that peers SHOULD advertise half their actual timeout to avoid
>> >> spurious timeouts. It would seem like perhaps Service Bus isn't doing
>> >> that, since it advertised 4 minutes and the logs indicate the
>> >> connection was closed in less than 5 minutes, as opposed to at least
>> >> 8.
>> >>
>> >> That said, Proton actually pessimistically assumes that the peer hasn't
>> >> advertised half its timeout and so uses half the peers advertised
>> >> value, meaning it should send heartbeats at either twice or four times
>> >> the frequency of the peers actual timeout if no other frames are being
>> >> sent. I see you have a 121sec timeout specified client side (given the
>> >> client advertise idle-timeout of 60.5s) too, and that should mean the
>> >> client should be doing the idle-timeout processing at least that
>> >> often, with the absence of any heartbeat frames being sent suggesting
>> >> Proton thinks it doesn't need to.
>> >>
>> >> I'll take a look at the heartbeat generation stuff in Proton, but if
>> >> you could send the updated logging that would be good.
>> >>
>> >> Robbie
>> >>
>> >> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com> wrote:
>> >> > Hi people,
>> >> >
>> >> > I have another problem with Service Bus and Qpid JMS 0.9.0.
>> >> >
>> >> > The broker uses 4 minutes idle timeout. With the sample code in the
>> >> > examples, I verified that the client sends the first empty frame
>> exactly
>> >> 4
>> >> > minutes after the consumer has connected. After that, Qpid sends an
>> empty
>> >> > frame every 2 minutes.
>> >> >
>> >> > However, in the application I am working on (existing code), Service
>> Bus
>> >> > drops the connections too often (when there are no messages sent)
>> because
>> >> > of inactivity. I think the problem is that Qpid waits for the whole 4
>> >> > minutes before sending the first empty frame. Moreover, whenever the
>> >> > problem occurs I see a TCP retransmission for exactly this frame, which
>> >> > may delay it by 100-200ms. Therefore, dropping the connection is perhaps
>> >> > correct.
>> >> >
>> >> > I am appending some Qpid traces down here. My questions are:
>> >> > - Is the broker's behavior correct?
>> >> > - How can I tell Qpid to send the heartbeats more often, which might
>> >> > solve this problem?
>> >> >
>> >> > Thanks!
>> >> >
>> >> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
>> >> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
>> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> >> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
>> >> > offeredCapabilities=null,
>> >> > desiredCapabilities=[sole-connection-for-container],
>> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
>> >> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> >> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
>> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> >> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>> >> > desiredCapabilities=null, properties=null}
>> >> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
>> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> >> > properties=null}
>> >> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
>> >> nextOutgoingId=1,
>> >> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> >> > 11:48:38,144 TRACE [FRAMES] SENT:
>> >> >
>> >>
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> >> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> amqp:rejected:list,
>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> capabilities=null},
>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
>> >> > maxMessageSize=null, offeredCapabilities=null,
>> desiredCapabilities=null,
>> >> > properties=null}
>> >> > 11:48:38,179 TRACE [FRAMES] RECV:
>> >> >
>> >>
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> >> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
>> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> >> amqp:rejected:list,
>> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
>> capabilities=null},
>> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>> >> > maxMessageSize=262144, offeredCapabilities=null,
>> >> desiredCapabilities=null,
>> >> >
>> >>
>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
>> >> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
>> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>> >> handle=0,
>> >> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
>> >> echo=false,
>> >> > properties=null}
>> >> > ...
>> >> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
>> >> > ...
>> >> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
>> >> > ...
>> >> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
>> >> > ...
>> >> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
>> >> > ...
>> >> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
>> >> > (TCP Retransmission occurs within 100-200ms)
>> >> > ...
>> >> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
>> >> > ...
>> >> > 11:53:38,009 TRACE [FRAMES] RECV:
>> >> > Close{error=Error{condition=amqp:connection:forced, description='Die
>> >> > Verbindung war länger als zulässig
>> >> >
>> >>
>> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
>> >> > 11:53:27', info=null}}
>> >>
>> >> ---------------------------------------------------------------------
>> >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>> >> For additional commands, e-mail: users-help@qpid.apache.org
>> >>
>> >>
>>
>>
>>



Re: Qpid JMS: client inactivity problem

Posted by Dale Green <gr...@gmail.com>.
Hi Robbie,

Thank you for your efforts!

I hardcoded this scheduling to be 30 sec. Then the behavior is as follows:
- the first empty frame is sent after 2.5 min;
- after that, the heartbeats are sent every 2 min.
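For what it's worth, both timings match what you'd expect from Proton pessimistically halving the broker's advertised 240 s timeout, plus up to one check period of scheduling latency. A rough sketch of the arithmetic (the helper below is hypothetical, not the actual Qpid JMS / Proton API):

```java
// Illustrative worst-case time until the FIRST heartbeat leaves the client,
// given a periodic IdleTimeoutCheck driving Proton's idle-timeout processing.
// This is a sketch of the timing math only, not real Qpid JMS or Proton code.
public class HeartbeatTiming {

    static long worstCaseFirstHeartbeatMs(long remoteIdleTimeoutMs, long checkDelayMs) {
        // Proton pessimistically assumes the peer advertised its FULL timeout,
        // so it aims to have a heartbeat on the wire every remote/2 ms...
        long heartbeatDeadline = remoteIdleTimeoutMs / 2;
        // ...but the frame only goes out when the next scheduled check runs
        // after that deadline has passed, i.e. up to one check period later.
        return heartbeatDeadline + checkDelayMs;
    }

    public static void main(String[] args) {
        // Service Bus advertises idleTimeOut=240000 in its Open frame.
        System.out.println(worstCaseFirstHeartbeatMs(240_000, 120_000)); // 240000 -> 4 min, as in the logs
        System.out.println(worstCaseFirstHeartbeatMs(240_000, 30_000));  // 150000 -> 2.5 min, as observed
    }
}
```

With the default 120 s check that worst case equals the broker's full 240 s timeout, which is exactly the race you'd expect; with a 30 s check it drops to 2.5 min, matching the first bullet above.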

However, this doesn't help, as the broker closes the connections (all but
the first one, when there are multiple connections!) 5 min after "SENT
flow". Exactly like before.

So, I am still trying to find out what makes Service Bus unhappy.



On Mon, Jul 4, 2016 at 1:45 PM, Robbie Gemmell <ro...@gmail.com>
wrote:

> Hi Dale,
>
> I tried to reproduce this on Friday using the existing idle-timeout
> tests for the client and wasn't able to: the first heartbeat was sent
> after 2 minutes. On a further look today I see why; after tweaking the
> test it now goes out after 4 minutes, and I can see what is going on.
>
> The way the client drives the idleTimeout processing in Proton results
> in a corner case that makes it possible for it to take up to the full
> timeout period advertised by the server before the first heartbeat is
> sent after prior non-idle activity, rather than pessimistically
> sending at a faster rate as it otherwise does. As Service Bus seems
> (your latest mail makes this less clear) not to have followed the path
> of advertising half its real timeout as recommended (but annoyingly
> not absolutely required) by the AMQP spec specifically to avoid this
> scenario, this creates a situation where there is a race between that
> heartbeat being sent and whether it arrives in time before ServiceBus
> decides to kill the connection. The retransmission you mentioned
> seeing could be enough of a delay to ensure that it didn't.
>
> I need to have a closer look at Proton to see if we can update it to
> better accommodate how the client drives the timeout processing as well
> as the other usage pattern, without introducing undesirable behaviour
> for either or changing the API.
>
> I do however think I see a way to alter the client to avoid the issue
> in the meantime. Try the change at https://paste.apache.org/1oCe
> (which applies to master or 0.10.0 of
> https://github.com/apache/qpid-jms, and shows the code areas to look
> at as you were asking), it makes the client 'tick' Proton to process
> the timeout handling more often than it normally would / should be
> necessary, which in this case ensures the heartbeat is actually sent
> before the peers full advertised timeout is reached. In your case the
> patch should make the outlier get sent after at most 3 minutes.
>
> We have previously discussed stopping Proton from sending heartbeats
> much faster than necessary as it currently does, because it results in
> a significant amount of unnecessary traffic/processing on servers with
> lots of idle clients and/or advertising smaller timeouts. If we do
> that, I think we would still try to send them a little faster than
> necessary to avoid spurious timeouts against servers that don't
> advertise half their actual timeout, but I think it would still make
> sense for us to add an override to allow toggling it to send more
> often than it otherwise would, since it would help in situations such
> as this plus another that came up previously.
>
> Robbie
>
> On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
> > Hi Robbie,
> >
> > I was debugging a bit and saw that the broker behaves differently if
> there
> > is a single connection or multiple connections. That is, when I have only
> > one connection, it is not closed even after 5 minutes without heartbeats.
> > So, I'll try to observe the behavior with 2 or more connections now.
> >
> > However, I am now appending the Qpid logs of a simple consumer, so you
> can
> > see the client behavior. For me, it is still not clear why we don't have
> a
> > heartbeat at 09:46:56,838. Can you tell me where in the code this
> > functionality could be changed, so I may try a quick fix, to see what the
> > broker would do then?
> >
> > 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Getting SSLContext instance using
> > protocol: TLS
> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Default protocols from the SSLEngine:
> > [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
> > TLSv1.2]
> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Default cipher suites from the
> SSLEngine:
> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> > 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> > TransportSupport               - Enabled cipher suites:
> > [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> > TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> > TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,
> TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> > TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> > TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> > TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> > TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> > TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> > TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> > TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> > TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> > TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> > TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> > SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> > TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> > SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> > 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - Channel has become active! Channel is
> [id:
> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> > 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
> > NettySslTransport              - SSL Handshake has completed: [id:
> > 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> > 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 8 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
> > 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: CONNECTION_INIT
> > 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 39 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
> > 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
> > 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
> > SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
> > the available credentials are not sufficient
> > 2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
> > SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
> > 2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 50 bytes
> > 2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 26 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
> > 2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Open{
> > containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> > idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
> > offeredCapabilities=null,
> > desiredCapabilities=[sole-connection-for-container],
> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
> > 24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> > 2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 256 bytes
> > 2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 8 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> > 2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 72 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV: Open{
> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
> > desiredCapabilities=null, properties=null}
> > 2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_INIT
> > 2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Begin{remoteChannel=null,
> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> > properties=null}
> > 2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 32 bytes
> > 2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 34 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV: Begin{remoteChannel=0,
> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> > 2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
> > AmqpConnectionBuilder          - AmqpConnection {
> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
> > 2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck being initiated,
> initial
> > delay: 120000
> > 2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
> > JmsConnection                  - Connection
> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote Broker:
> > amqps://10.10.0.62:5671/?transport.verifyHost=false
> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Begin{remoteChannel=null,
> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> > properties=null}
> > 2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 32 bytes
> > 2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 34 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV: Begin{remoteChannel=1,
> > nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_INIT
> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
> > 2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT:
> >
> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> amqp:rejected:list,
> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
> > maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
> > properties=null}
> > 2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 285 bytes
> > 2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 413 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
> > 2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV:
> >
> Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> > source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> amqp:rejected:list,
> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
> > maxMessageSize=262144, offeredCapabilities=null,
> desiredCapabilities=null,
> >
> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: LINK_INIT
> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
> > 2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
> > AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
> > 2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
> > AmqpConsumer                   - Consumer
> > ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional
> credit:
> > 10
> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Flow{nextIncomingId=1,
> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
> handle=0,
> > deliveryCount=0, linkCredit=10, available=null, drain=false, echo=false,
> > properties=null}
> > 2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 32 bytes
> > 2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 8 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> > 2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV: Empty Frame
> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
> > NettyTcpTransport              - New data read: 8 bytes incoming:
> > UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
> > 2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
> > FRAMES                         - RECV: Empty Frame
> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
> > FRAMES                         - SENT: Empty Frame
> > 2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
> > NettyTcpTransport              - Attempted write of: 8 bytes
> > 2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
> > AmqpProvider                   - IdleTimeoutCheck rescheduling with
> delay:
> > 120000
> >
> >
> >
> >
> > On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <robbie.gemmell@gmail.com
> >
> > wrote:
> >
> >> Hi Dale,
> >>
> >> It looks like you might have snipped some of the logs. If you removed
> >> any frame logging we will need to see that too (well, at least what
> >> frames and when) to better reason about what the client is (or is not)
> >> doing, as heartbeat frames are only sent in the absence of other
> >> frames. It would also be good to enable trace on logger
> >> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
> >> from the idle timeout processing task.
> >>
> >> The decision of whether to send a heartbeat frame (or close the
> >> connection due to not receiving any frames) is taken by the Qpid
> >> Proton engine the client uses, with the client scheduling the
> >> processing task needed for it to do that. There isn't currently a way
> >> to make it send heartbeats more often than Proton decides it needs to
> >> based on what the server advertises and what other activity is
> >> occurring.
> >>
> >> The AMQP spec indicates
> >> (
> >>
> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out
> >> )
> >> that peers SHOULD advertise half their actual timeout to avoid
> >> spurious timeouts. It would seem like perhaps Service Bus isn't doing
> >> that, since it advertised 4 minutes and the logs indicate the
> >> connection was closed in less than 5 minutes, as opposed to at least
> >> 8.
> >>
> >> That said, Proton actually pessimistically assumes that the peer hasn't
> >> advertised half its timeout and so uses half the peers advertised
> >> value, meaning it should send heartbeats at either twice or four times
> >> the frequency of the peers actual timeout if no other frames are being
> >> sent. I see you have a 121sec timeout specified client side (given the
> >> client-advertised idle-timeout of 60.5s) too, and that should mean the
> >> client should be doing the idle-timeout processing at least that
> >> often, with the absence of any heartbeat frames being sent suggesting
> >> Proton thinks it doesn't need to.
> >>
> >> I'll take a look at the heartbeat generation stuff in Proton, but if
> >> you could send the updated logging that would be good.
> >>
> >> Robbie
> >>
> >> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com> wrote:
> >> > Hi people,
> >> >
> >> > I have another problem with Service Bus and Qpid JMS 0.9.0.
> >> >
> >> > The broker uses 4 minutes idle timeout. With the sample code in the
> >> > examples, I verified that the client sends the first empty frame
> exactly
> >> 4
> >> > minutes after the consumer has connected. After that, Qpid sends an
> empty
> >> > frame every 2 minutes.
> >> >
> >> > However in the application, I am working on (existing code), Service
> Bus
> >> > drops the connections too often (when there are no messages sent)
> because
> >> > of inactivity. I think the problem is that Qpid waits for the whole 4
> >> > minutes before sending the first empty frame. Moreover, I see always
> >> (when
> >> > the problem occurs) TCP retransmission for exactly this frame which
> maybe
> >> > delays it for 100-200ms. Therefore, dropping the connection is perhaps
> >> > correct.
> >> >
> >> > I am appending some Qpid traces down here. My questions are:
> >> > - Is the broker's behavior correct?
> >> > - How can I tell Qpid to send the heartbeats more often to probably
> solve
> >> > this problem?
> >> >
> >> > Thanks!
> >> >
> >> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
> >> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
> >> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> >> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
> >> > offeredCapabilities=null,
> >> > desiredCapabilities=[sole-connection-for-container],
> >> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
> >> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> >> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
> >> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
> >> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> >> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
> >> > desiredCapabilities=null, properties=null}
> >> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
> >> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> >> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> >> > properties=null}
> >> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
> >> nextOutgoingId=1,
> >> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> >> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
> >> > 11:48:38,144 TRACE [FRAMES] SENT:
> >> >
> >>
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> >> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> >> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> amqp:rejected:list,
> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> capabilities=null},
> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
> >> > maxMessageSize=null, offeredCapabilities=null,
> desiredCapabilities=null,
> >> > properties=null}
> >> > 11:48:38,179 TRACE [FRAMES] RECV:
> >> >
> >>
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> >> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> >> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
> >> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> >> > dynamicNodeProperties=null, distributionMode=null, filter=null,
> >> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> >> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
> >> amqp:rejected:list,
> >> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
> >> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> >> > timeout=0, dynamic=false, dynamicNodeProperties=null,
> capabilities=null},
> >> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
> >> > maxMessageSize=262144, offeredCapabilities=null,
> >> desiredCapabilities=null,
> >> >
> >>
> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
> >> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
> >> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
> >> handle=0,
> >> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
> >> echo=false,
> >> > properties=null}
> >> > ...
> >> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
> >> > ...
> >> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
> >> > ...
> >> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
> >> > ...
> >> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
> >> > ...
> >> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
> >> > (TCP Retransmission occurs within 100-200ms)
> >> > ...
> >> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
> >> > ...
> >> > 11:53:38,009 TRACE [FRAMES] RECV:
> >> > Close{error=Error{condition=amqp:connection:forced, description='Die
> >> > Verbindung war länger als zulässig
> >> >
> >>
> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
> >> > 11:53:27', info=null}}
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
> >> For additional commands, e-mail: users-help@qpid.apache.org
> >>
> >>
>
>
>

Re: Qpid JMS: client inactivity problem

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Dale,

I tried to reproduce this on Friday using the existing idle-timeout
tests for the client and wasn't able to: the first heartbeat was sent
after 2 minutes. On a further look today I see why, and after tweaking
the test the heartbeat now goes out after 4 minutes, so I can see what
is going on.

The way the client drives the idleTimeout processing in Proton results
in a corner case where it can take up to the full timeout period
advertised by the server before the first heartbeat is sent after
prior non-idle activity, rather than pessimistically sending at the
faster rate it otherwise uses. Service Bus also seems (your latest
mail makes this less clear) not to advertise half its real timeout, as
recommended (but annoyingly not absolutely required) by the AMQP spec
specifically to avoid this scenario. Together these create a race
between that first heartbeat being sent and Service Bus deciding to
kill the connection. The TCP retransmission you mentioned seeing could
be enough of a delay to ensure the heartbeat didn't arrive in time.
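To make the arithmetic concrete, here is a small illustrative sketch
(not the actual Proton code) of the deadlines involved, using the
values from your logs, where Service Bus advertises idleTimeOut=240000
in its Open frame:

```java
// Illustrative only: rough deadline arithmetic for AMQP idle-timeout
// handling, using the values seen in the logs. Not the real Proton code.
public class IdleTimeoutMath {

    // Proton pessimistically assumes the peer advertised its full real
    // timeout (not half, as the spec recommends), so it aims to send an
    // empty frame within half the advertised value of the last activity.
    public static long heartbeatDeadline(long remoteAdvertisedMillis) {
        return remoteAdvertisedMillis / 2;
    }

    public static void main(String[] args) {
        long serviceBusAdvertised = 240_000; // idleTimeOut from the Open frame
        // Steady state: a heartbeat every 2 minutes, as the logs show.
        System.out.println(heartbeatDeadline(serviceBusAdvertised)); // 120000
        // The corner case means the *first* heartbeat can slip out to the
        // full 240s, racing the broker's own inactivity check.
    }
}
```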

I need to have a closer look at Proton to see if we can update it to
better accommodate how the client drives the timeout processing as well
as the other usage pattern, without introducing undesirable behaviour
for either or changing the API.

In the meantime, however, I think I see a way to alter the client to
avoid the issue. Try the change at https://paste.apache.org/1oCe
(which applies to master or 0.10.0 of
https://github.com/apache/qpid-jms, and shows the code areas to look
at, as you were asking). It makes the client 'tick' Proton to process
the timeout handling more often than would normally be necessary,
which in this case ensures the heartbeat is actually sent before the
peer's full advertised timeout is reached. In your case the patch
should make the outlier heartbeat go out after at most 3 minutes.
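As a rough way to see where the 3-minute bound comes from (my reading
of the numbers, not a description of the exact patch logic): if the
client ticks Proton at half of Proton's 120s send deadline, the first
heartbeat goes out at worst one tick interval after the deadline:

```java
// Back-of-the-envelope sketch of why ticking more often bounds the
// worst case; my reading of the numbers, not the actual patch logic.
public class EarlyTick {

    // Proton's pessimistic send deadline: half the peer's advertised timeout.
    public static long sendDeadline(long remoteAdvertisedMillis) {
        return remoteAdvertisedMillis / 2;
    }

    // If the client ticks Proton every deadline/2 millis, the first
    // heartbeat after activity is late by at most one tick interval.
    public static long worstCaseFirstHeartbeat(long remoteAdvertisedMillis) {
        long deadline = sendDeadline(remoteAdvertisedMillis);
        long tickInterval = deadline / 2;
        return deadline + tickInterval;
    }

    public static void main(String[] args) {
        // Service Bus advertises 240s, so: 120s + 60s = 180s = 3 minutes.
        System.out.println(worstCaseFirstHeartbeat(240_000)); // 180000
    }
}
```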

We have previously discussed stopping Proton from sending heartbeats
much faster than necessary, as it currently does, because that results
in a significant amount of unnecessary traffic/processing on servers
with lots of idle clients and/or smaller advertised timeouts. If we do
that, I think we would still try to send them a little faster than
strictly necessary, to avoid spurious timeouts against servers that
don't advertise half their actual timeout. It would also make sense to
add an override allowing the client to send more often than it
otherwise would, since that would help in situations such as this one,
plus another that came up previously.

Robbie

On 4 July 2016 at 09:09, Dale Green <gr...@gmail.com> wrote:
> Hi Robbie,
>
> I was debugging a bit and saw that the broker behaves differently if there
> is a single connection or multiple connections. That is, when I have only
> one connection, it is not closed even after 5 minutes without heartbeats.
> So, I'll try to observe the behavior with 2 or more connections now.
>
> However, I am appending now the Qpid logs of a simple consumer, so you can
> see the client behavior. For me, it is still not clear why we don't have a
> heartbeat at 09:46:56,838. Can you tell me where in the code this
> functionality could be changed, so I may try a quick fix, to see what the
> broker would do then?
>
> 2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Getting SSLContext instance using
> protocol: TLS
> 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Default protocols from the SSLEngine:
> [SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
> 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
> 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
> TLSv1.2]
> 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Default cipher suites from the SSLEngine:
> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> 2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
> TransportSupport               - Enabled cipher suites:
> [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
> TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
> TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
> TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
> TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
> TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
> TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
> TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
> SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
> 2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - Channel has become active! Channel is [id:
> 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> 2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
> NettySslTransport              - SSL Handshake has completed: [id:
> 0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
> 2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
> NettyTcpTransport              - Attempted write of: 8 bytes
> 2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 8 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
> 2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_INIT
> 2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
> NettyTcpTransport              - New data read: 39 bytes incoming:
> UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
> 2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
> AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
> 2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
> On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <ro...@gmail.com>
> wrote:
>
>> Hi Dale,
>>
>> It looks like you might have snipped some of the logs. If you removed
>> any frame logging we will need to see that too (well, at least what
>> frames and when) to better reason about what the client is (or is not)
>> doing, as heartbeat frames are only sent in the absence of other
>> frames. It would also be good to enable trace on logger
>> org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
>> from the idle timeout processing task.
>>
>> The decision of whether to send a heartbeat frame (or close the
>> connection due to not receiving any frames) is taken by the Qpid
>> Proton engine the client uses, with the client scheduling the
>> processing task needed for it to do that. There isn't currently a way
>> to make it send heartbeats more often than Proton decides it needs to
>> based on what the server advertises and what other activity is
>> occurring.
>>
>> The AMQP spec indicates
>> (
>> http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out
>> )
>> that peers SHOULD advertise half their actual timeout to avoid
>> spurious timeouts. It would seem like perhaps Service Bus isn't doing
>> that, since it advertised 4 minutes and the logs indicate the
>> connection was closed in less than 5 minutes, as opposed to at least
>> 8.
>>
>> That said, Proton actually pessimistically assumes that the peer hasn't
>> advertised half its timeout and so uses half the peer's advertised
>> value, meaning it should send heartbeats at either twice or four times
>> the frequency of the peer's actual timeout if no other frames are being
>> sent. I see you have a 121sec timeout specified client side (given the
>> client's advertised idle-timeout of 60.5s) too, and that should mean the
>> client should be doing the idle-timeout processing at least that
>> often, with the absence of any heartbeat frames suggesting
>> Proton thinks it doesn't need to send them.
>>
>> I'll take a look at the heartbeat generation stuff in Proton, but if
>> you could send the updated logging that would be good.
>>
>> Robbie
>>
>> On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com> wrote:
>> > Hi people,
>> >
>> > I have another problem with Service Bus and Qpid JMS 0.9.0.
>> >
>> > The broker uses a 4-minute idle timeout. With the sample code in the
>> > examples, I verified that the client sends the first empty frame exactly
>> > 4 minutes after the consumer has connected. After that, Qpid sends an
>> > empty frame every 2 minutes.
>> >
>> > However, in the application I am working on (existing code), Service Bus
>> > drops the connections too often (when no messages are sent) because
>> > of inactivity. I think the problem is that Qpid waits for the whole 4
>> > minutes before sending the first empty frame. Moreover, whenever the
>> > problem occurs I see a TCP retransmission of exactly this frame, which
>> > may delay it by 100-200ms. Therefore, dropping the connection is perhaps
>> > correct.
>> >
>> > I am appending some Qpid traces down here. My questions are:
>> > - Is the broker's behavior correct?
>> > - How can I tell Qpid to send heartbeats more often, which would
>> > probably solve this problem?
>> >
>> > Thanks!
>> >
>> > 11:48:38,001 TRACE [FRAMES] SENT: Open{
>> > containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
>> > hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
>> > idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
>> > offeredCapabilities=null,
>> > desiredCapabilities=[sole-connection-for-container],
>> > properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
>> > 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
>> > 11:48:38,006 TRACE [FRAMES] RECV: Open{
>> > containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
>> > maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
>> > outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
>> > desiredCapabilities=null, properties=null}
>> > 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
>> > nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
>> > handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
>> > properties=null}
>> > 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0,
>> nextOutgoingId=1,
>> > incomingWindow=5000, outgoingWindow=2047, handleMax=7,
>> > offeredCapabilities=null, desiredCapabilities=null, properties=null}
>> > 11:48:38,144 TRACE [FRAMES] SENT:
>> >
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> > handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
>> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> amqp:rejected:list,
>> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
>> > maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
>> > properties=null}
>> > 11:48:38,179 TRACE [FRAMES] RECV:
>> >
>> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
>> > handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
>> > source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
>> > expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
>> > dynamicNodeProperties=null, distributionMode=null, filter=null,
>> > defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
>> > messageAnnotations=null}, outcomes=[amqp:accepted:list,
>> amqp:rejected:list,
>> > amqp:released:list, amqp:modified:list], capabilities=[queue]},
>> > target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
>> > timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
>> > unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
>> > maxMessageSize=262144, offeredCapabilities=null,
>> desiredCapabilities=null,
>> >
>> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
>> > 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
>> > incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647,
>> handle=0,
>> > deliveryCount=0, linkCredit=1000, available=null, drain=false,
>> echo=false,
>> > properties=null}
>> > ...
>> > 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
>> > ...
>> > 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
>> > ...
>> > 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
>> > ...
>> > 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
>> > ...
>> > 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
>> > (TCP Retransmission occurs within 100-200ms)
>> > ...
>> > 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
>> > ...
>> > 11:53:38,009 TRACE [FRAMES] RECV:
>> > Close{error=Error{condition=amqp:connection:forced, description='Die
>> > Verbindung war länger als zulässig
>> >
>> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
>> > 11:53:27', info=null}}
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
>> For additional commands, e-mail: users-help@qpid.apache.org
>>
>>



Re: Qpid JMS: client inactivity problem

Posted by Dale Green <gr...@gmail.com>.
Hi Robbie,

While debugging, I noticed that the broker behaves differently depending on
whether there is a single connection or multiple connections: with only one
connection, it is not closed even after 5 minutes without heartbeats. So,
I'll now try to observe the behavior with 2 or more connections.

In the meantime, I am appending the Qpid logs of a simple consumer, so you
can see the client's behavior. It is still not clear to me why no heartbeat
is sent at 09:46:56,838, when the IdleTimeoutCheck runs but only reschedules
itself. Can you tell me where in the code this functionality could be
changed, so I can try a quick fix and see what the broker does then?

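As a side note for the archive: the 120000 ms IdleTimeoutCheck delay in the
log below follows directly from the Open frames, since Proton schedules
heartbeats at half of the peer's advertised idle-timeout. A minimal sketch of
that arithmetic (an illustration of the behavior described in this thread,
not the actual Proton-J code):

```java
// Sketch of the idle-timeout arithmetic discussed in this thread.
// This illustrates the behavior; it is not the actual Proton-J implementation.
public class IdleTimeoutMath {

    // Proton pessimistically assumes the peer did NOT advertise half of its
    // real timeout, so it emits an empty (heartbeat) frame at half of the
    // peer's advertised idle-timeout when no other frames are being sent.
    static long heartbeatIntervalMs(long remoteAdvertisedIdleTimeoutMs) {
        return remoteAdvertisedIdleTimeoutMs / 2;
    }

    public static void main(String[] args) {
        // Service Bus advertised idleTimeOut=240000 in its Open frame, so the
        // client schedules empty frames every 120000 ms (2 minutes), matching
        // the IdleTimeoutCheck log lines below.
        System.out.println(heartbeatIntervalMs(240_000)); // prints 120000
    }
}
```

The value the client advertises in its own Open frame comes from the
`amqp.idleTimeout` connection URI option in Qpid JMS; as Robbie notes above,
there is currently no option to make the client send heartbeats more often
than the broker's advertised value implies.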
2016-07-04 09:44:56,407 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Getting SSLContext instance using
protocol: TLS
2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Default protocols from the SSLEngine:
[SSLv2Hello, TLSv1, TLSv1.1, TLSv1.2]
2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Disabled protocols: [SSLv2Hello, SSLv3]
2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Enabled protocols: [TLSv1, TLSv1.1,
TLSv1.2]
2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Default cipher suites from the SSLEngine:
[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2016-07-04 09:44:56,617 [ntLoopGroup-2-1] - TRACE
TransportSupport               - Enabled cipher suites:
[TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_RSA_WITH_AES_256_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384,
TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,
TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,
TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,
TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA,
TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,
TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,
SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
2016-07-04 09:44:56,650 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - Channel has become active! Channel is [id:
0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
2016-07-04 09:44:56,773 [ntLoopGroup-2-1] - TRACE
NettySslTransport              - SSL Handshake has completed: [id:
0xec4459d2, /10.10.0.180:51561 => /10.10.0.62:5671]
2016-07-04 09:44:56,811 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:44:56,814 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 1024)
2016-07-04 09:44:56,814 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_INIT
2016-07-04 09:44:56,815 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 39 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 39, cap: 69)
2016-07-04 09:44:56,815 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_LOCAL_OPEN
2016-07-04 09:44:56,819 [rifyHost=false]] - DEBUG
SaslMechanismFinder            - Skipping SASL-EXTERNAL mechanism because
the available credentials are not sufficient
2016-07-04 09:44:56,819 [rifyHost=false]] - INFO
SaslMechanismFinder            - Best match for SASL auth was: SASL-PLAIN
2016-07-04 09:44:56,820 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 50 bytes
2016-07-04 09:44:56,823 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 26 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 26, cap: 53)
2016-07-04 09:44:56,826 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Open{
containerId='ID::21730abd-45cf-46c9-b312-8c1052076805:1',
hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
idleTimeOut=500000, outgoingLocales=null, incomingLocales=null,
offeredCapabilities=null,
desiredCapabilities=[sole-connection-for-container],
properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_101,
24.95-b01, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
2016-07-04 09:44:56,827 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 256 bytes
2016-07-04 09:44:56,828 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
2016-07-04 09:44:56,830 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 72 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 72, cap: 101)
2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Open{
containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
desiredCapabilities=null, properties=null}
2016-07-04 09:44:56,830 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: CONNECTION_REMOTE_OPEN
2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_INIT
2016-07-04 09:44:56,833 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Begin{remoteChannel=null,
nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
2016-07-04 09:44:56,835 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 32 bytes
2016-07-04 09:44:56,836 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 34 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Begin{remoteChannel=0,
nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
offeredCapabilities=null, desiredCapabilities=null, properties=null}
2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
2016-07-04 09:44:56,837 [rifyHost=false]] - DEBUG
AmqpConnectionBuilder          - AmqpConnection {
ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 } is now open:
2016-07-04 09:44:56,837 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck being initiated, initial
delay: 120000
2016-07-04 09:44:56,837 [rifyHost=false]] - INFO
JmsConnection                  - Connection
ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1 connected to remote Broker:
amqps://10.10.0.62:5671/?transport.verifyHost=false
2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Begin{remoteChannel=null,
nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
2016-07-04 09:44:56,846 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 32 bytes
2016-07-04 09:44:56,848 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 34 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 34, cap: 69)
2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Begin{remoteChannel=1,
nextOutgoingId=1, incomingWindow=5000, outgoingWindow=2047, handleMax=7,
offeredCapabilities=null, desiredCapabilities=null, properties=null}
2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_INIT
2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_LOCAL_OPEN
2016-07-04 09:44:56,848 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: SESSION_REMOTE_OPEN
2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
FRAMES                         - SENT:
Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
dynamicNodeProperties=null, distributionMode=null, filter=null,
defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
amqp:released:list, amqp:modified:list], capabilities=[queue]},
target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
properties=null}
2016-07-04 09:44:56,858 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 285 bytes
2016-07-04 09:44:57,174 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 413 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 413, cap: 453)
2016-07-04 09:44:57,175 [rifyHost=false]] - TRACE
FRAMES                         - RECV:
Attach{name='qpid-jms:receiver:ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1:ServiceBusDefaultNamespace/test2',
handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
source=Source{address='ServiceBusDefaultNamespace/test2', durable=NONE,
expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
dynamicNodeProperties=null, distributionMode=null, filter=null,
defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
amqp:released:list, amqp:modified:list], capabilities=[queue]},
target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
maxMessageSize=262144, offeredCapabilities=null, desiredCapabilities=null,
properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51461:100924:100925}}
2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: LINK_INIT
2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: LINK_LOCAL_OPEN
2016-07-04 09:44:57,176 [rifyHost=false]] - TRACE
AmqpProvider                   - New Proton Event: LINK_REMOTE_OPEN
2016-07-04 09:44:57,177 [rifyHost=false]] - TRACE
AmqpConsumer                   - Consumer
ID::5fd31250-0099-4949-8160-1b6ba4c2cf72:1:1:1 granting additional credit:
10
2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Flow{nextIncomingId=1,
incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647, handle=0,
deliveryCount=0, linkCredit=10, available=null, drain=false, echo=false,
properties=null}
2016-07-04 09:44:57,178 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 32 bytes
2016-07-04 09:46:56,838 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:48:56,839 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:50:56,840 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:52:14,666 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
2016-07-04 09:52:14,667 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Empty Frame
2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:52:56,841 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:54:56,841 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:54:56,842 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:56:56,842 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 09:58:56,843 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 09:59:32,164 [ntLoopGroup-2-1] - TRACE
NettyTcpTransport              - New data read: 8 bytes incoming:
UnpooledHeapByteBuf(ridx: 0, widx: 8, cap: 37)
2016-07-04 09:59:32,164 [rifyHost=false]] - TRACE
FRAMES                         - RECV: Empty Frame
2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 10:00:56,843 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000
2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
FRAMES                         - SENT: Empty Frame
2016-07-04 10:02:56,844 [rifyHost=false]] - TRACE
NettyTcpTransport              - Attempted write of: 8 bytes
2016-07-04 10:02:56,845 [rifyHost=false]] - TRACE
AmqpProvider                   - IdleTimeoutCheck rescheduling with delay:
120000




On Fri, Jul 1, 2016 at 3:52 PM, Robbie Gemmell <ro...@gmail.com>
wrote:

> [...]

Re: Qpid JMS: client inactivity problem

Posted by Robbie Gemmell <ro...@gmail.com>.
Hi Dale,

It looks like you might have snipped some of the logs. If you removed
any frame logging we will need to see that too (well, at least what
frames and when) to better reason about what the client is (or is not)
doing, as heartbeat frames are only sent in the absence of other
frames. It would also be good to enable trace on logger
org.apache.qpid.jms.provider.amqp.AmqpProvider to get some logging
from the idle timeout processing task.
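
For anyone following along, a minimal logging fragment that enables both
loggers mentioned above (this assumes an slf4j binding to log4j 1.2; adjust
the syntax for logback or whichever backend you use):

```properties
# Frame-level tracing (the SENT/RECV lines in this thread) and the
# idle-timeout task logging from the AmqpProvider.
log4j.logger.org.apache.qpid.jms.provider.amqp.FRAMES=TRACE
log4j.logger.org.apache.qpid.jms.provider.amqp.AmqpProvider=TRACE
```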

The decision of whether to send a heartbeat frame (or close the
connection due to not receiving any frames) is taken by the Qpid
Proton engine the client uses, with the client scheduling the
processing task needed for it to do that. There isn't currently a way
to make it send heartbeats more often than Proton decides it needs to
based on what the server advertises and what other activity is
occurring.

The AMQP spec indicates
(http://docs.oasis-open.org/amqp/core/v1.0/os/amqp-core-transport-v1.0-os.html#doc-doc-idle-time-out)
that peers SHOULD advertise half their actual timeout to avoid
spurious timeouts. It would seem that perhaps Service Bus isn't doing
that, since it advertised 4 minutes and the logs indicate the
connection was closed in less than 5 minutes, as opposed to at least
8.

That said, Proton actually pessimistically assumes that the peer hasn't
advertised half its timeout, and so uses half the peer's advertised
value, meaning it should send heartbeats at either twice or four times
the frequency of the peer's actual timeout if no other frames are being
sent. I see you have a 121-second timeout specified client side (given the
client's advertised idle-timeout of 60.5s) too, and that should mean the
client is doing the idle-timeout processing at least that
often, with the absence of any heartbeat frames being sent suggesting
Proton thinks it doesn't need to.
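
To make the arithmetic concrete, here is a sketch of the behaviour described
above. This is not Proton's actual code; the class and method names are
invented for illustration:

```java
// Sketch of the idle-timeout arithmetic described above. NOT Proton's
// actual implementation; class and method names are invented.
public class IdleTimeoutMath {

    // The spec recommends a peer advertise half its real timeout, which
    // is what the client does with its locally configured value.
    static long advertisedTimeout(long localTimeoutMillis) {
        return localTimeoutMillis / 2;
    }

    // Proton pessimistically assumes the remote peer did NOT halve its
    // value, so it halves the remote's advertised timeout to get the
    // interval at which it must emit heartbeat frames when idle.
    static long heartbeatInterval(long remoteAdvertisedMillis) {
        return remoteAdvertisedMillis / 2;
    }

    public static void main(String[] args) {
        // 121000 ms configured client side -> idleTimeOut=60500 in the Open frame.
        System.out.println(advertisedTimeout(121000));
        // Service Bus advertises 240000 -> the 120000 ms IdleTimeoutCheck delay.
        System.out.println(heartbeatInterval(240000));
    }
}
```

The client-side value here would come from the connection URI's
amqp.idleTimeout option (121000 ms in this case), which the client then
advertises halved, per the spec recommendation.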

I'll take a look at the heartbeat generation stuff in Proton, but if
you could send the updated logging that would be good.

Robbie

On 1 July 2016 at 13:35, Dale Green <gr...@gmail.com> wrote:
> Hi people,
>
> I have another problem with Service Bus and Qpid JMS 0.9.0.
>
> The broker uses 4 minutes idle timeout. With the sample code in the
> examples, I verified that the client sends the first empty frame exactly 4
> minutes after the consumer has connected. After that, Qpid sends an empty
> frame every 2 minutes.
>
> However in the application, I am working on (existing code), Service Bus
> drops the connections too often (when there are no messages sent) because
> of inactivity. I think the problem is that Qpid waits for the whole 4
> minutes before sending the first empty frame. Moreover, I see always (when
> the problem occurs) TCP retransmission for exactly this frame which maybe
> delays it for 100-200ms. Therefore, dropping the connection is perhaps
> correct.
>
> I am appending some Qpid traces down here. My questions are:
> - Is the broker's behavior correct?
> - How can I tell Qpid to send the heartbeats more often to probably solve
> this problem?
>
> Thanks!
>
> 11:48:38,001 TRACE [FRAMES] SENT: Open{
> containerId='ID::68508b21-bbc7-4318-9371-5debbad2937d:1',
> hostname='10.10.0.62', maxFrameSize=1048576, channelMax=32767,
> idleTimeOut=60500, outgoingLocales=null, incomingLocales=null,
> offeredCapabilities=null,
> desiredCapabilities=[sole-connection-for-container],
> properties={product=QpidJMS, version=0.9.0, platform=JVM: 1.7.0_80,
> 24.80-b11, Oracle Corporation, OS: Linux, 3.16.0-4-amd64, amd64}}
> 11:48:38,006 TRACE [FRAMES] RECV: Open{
> containerId='d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7', hostname='null',
> maxFrameSize=65536, channelMax=4999, idleTimeOut=240000,
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null,
> desiredCapabilities=null, properties=null}
> 11:48:38,015 TRACE [FRAMES] SENT: Begin{remoteChannel=null,
> nextOutgoingId=1, incomingWindow=2047, outgoingWindow=2147483647,
> handleMax=65535, offeredCapabilities=null, desiredCapabilities=null,
> properties=null}
> 11:48:38,017 TRACE [FRAMES] RECV: Begin{remoteChannel=0, nextOutgoingId=1,
> incomingWindow=5000, outgoingWindow=2047, handleMax=7,
> offeredCapabilities=null, desiredCapabilities=null, properties=null}
> 11:48:38,144 TRACE [FRAMES] SENT:
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> handle=0, role=RECEIVER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
> expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> dynamicNodeProperties=null, distributionMode=null, filter=null,
> defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
> amqp:released:list, amqp:modified:list], capabilities=[queue]},
> target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
> unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null,
> maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null,
> properties=null}
> 11:48:38,179 TRACE [FRAMES] RECV:
> Attach{name='qpid-jms:receiver:ID::24295795-625b-46a2-9898-637cf9944fbd:1:1:1:ServiceBusDefaultNamespace/XXX',
> handle=0, role=SENDER, sndSettleMode=UNSETTLED, rcvSettleMode=FIRST,
> source=Source{address='ServiceBusDefaultNamespace/XXX', durable=NONE,
> expiryPolicy=LINK_DETACH, timeout=0, dynamic=false,
> dynamicNodeProperties=null, distributionMode=null, filter=null,
> defaultOutcome=Modified{deliveryFailed=true, undeliverableHere=null,
> messageAnnotations=null}, outcomes=[amqp:accepted:list, amqp:rejected:list,
> amqp:released:list, amqp:modified:list], capabilities=[queue]},
> target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END,
> timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null},
> unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0,
> maxMessageSize=262144, offeredCapabilities=null, desiredCapabilities=null,
> properties={com.microsoft:tracking-id=d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7_Bcw7;51163:100752:100753}}
> 11:48:38,183 TRACE [FRAMES] SENT: Flow{nextIncomingId=1,
> incomingWindow=2047, nextOutgoingId=1, outgoingWindow=2147483647, handle=0,
> deliveryCount=0, linkCredit=1000, available=null, drain=false, echo=false,
> properties=null}
> ...
> 11:49:30,966 TRACE [FRAMES] RECV: Empty Frame
> ...
> 11:50:23,911 TRACE [FRAMES] RECV: Empty Frame
> ...
> 11:51:16,857 TRACE [FRAMES] RECV: Empty Frame
> ...
> 11:52:09,804 TRACE [FRAMES] RECV: Empty Frame
> ...
> 11:52:38,019 TRACE [FRAMES] SENT: Empty Frame
> (TCP Retransmission occurs within 100-200ms)
> ...
> 11:53:02,749 TRACE [FRAMES] RECV: Empty Frame
> ...
> 11:53:38,009 TRACE [FRAMES] RECV:
> Close{error=Error{condition=amqp:connection:forced, description='Die
> Verbindung war länger als zulässig
> inaktiv..TrackingId:d7c7f05c1eff4183ac3dc9f2885e3f93_Gcw7,TimeStamp:01.07.2016
> 11:53:27', info=null}}
> (The German description translates as: "The connection was inactive for
> longer than permitted.")

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@qpid.apache.org
For additional commands, e-mail: users-help@qpid.apache.org