Posted to users@activemq.apache.org by Helge Waastad <he...@waastad.org> on 2017/06/02 08:44:13 UTC

artemis 2.2.0 logging disaster

Hi,
I'm running artemis 2.2.0 as a docker container.

I'm collecting MQTT messages, and these are consumed by a JMS consumer
(artemis-jms-client).

It runs fine for a while, but then this suddenly appears in the docker
*-json.log:

{"log":"19:16:12,338 WARN [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 712 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@6f035b0a[local=/10.42.154.105:61616, remote=/10.42.21.198:40844], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.342853929Z"}
{"log":"19:16:12,342 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[715739]:NON-RELIABLE:CoreMessage[messageID=715739,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1623021181: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.347107296Z"}
{"log":"19:31:54,236 WARN [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 548 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@7b18e1a6[local=/10.42.154.105:61616, remote=/10.42.162.183:48376], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.238904544Z"}
{"log":"19:31:54,238 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[722892]:NON-RELIABLE:CoreMessage[messageID=722892,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1252621657: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.239955162Z"}



Then after a couple of hours:

{"log":"23:22:24,013 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015087347Z"}
{"log":"23:22:24,014 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015759902Z"}
{"log":"23:22:24,015 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.016623101Z"}


And this message keeps looping; within 5 minutes it had filled my 12 GB drive.

Any clues what to do? I'll do some more debugging.
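
A side note on the disk-filling half of the problem (this does not fix the leak, only contains the damage): the *-json.log files come from Docker's json-file logging driver, which supports rotation. A sketch of /etc/docker/daemon.json with rotation enabled; max-size and max-file are the driver's documented options, and the sizes chosen here are arbitrary:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
```

The same options can be set per container with `--log-opt max-size=50m --log-opt max-file=5`. Separately, the `max: 1073741824` in the error is a 1 GB direct-memory cap; it can be raised with `-XX:MaxDirectMemorySize` (or Netty's `-Dio.netty.maxDirectMemory`), though with a genuine buffer leak that only postpones the OutOfDirectMemoryError.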

/hw

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
and yes, I saw your comment. What I tried to say in my last post is that I
got 2.2.0-SNAPSHOT running, that I quite quickly got a ResourceLeak log
entry on 2.1.0, and that I didn't see any on the snapshot.

However, after running 2.2.0-SNAPSHOT for a few hours, I also got the
exception. I haven't gotten logging working in my testbed, so the only
output I have is pasted below.
I will continue to help track this one down.


	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:268)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:194)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:84)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1017)
	io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
	io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	java.lang.Thread.run(Thread.java:748)
#1:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:402)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:454)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:449)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:434)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:419)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:268)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:194)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:84)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1017)
	io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
	io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	java.lang.Thread.run(Thread.java:748)
Created at:
	io.netty.util.ResourceLeakDetector.track(ResourceLeakDetector.java:237)
	io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:327)
	io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:181)
	io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:172)
	io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133)
	io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80)
	io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71)
	io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1006)
	io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
	io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	java.lang.Thread.run(Thread.java:748)
12:16:20,064 ERROR [io.netty.util.ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 1
#1:
	Hint: 'MQTTProtocolHandler#0' will handle the message from this point.
	io.netty.handler.codec.mqtt.MqttPublishMessage.touch(MqttPublishMessage.java:99)
	io.netty.handler.codec.mqtt.MqttPublishMessage.touch(MqttPublishMessage.java:26)
	io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1017)
	io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
	io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	java.lang.Thread.run(Thread.java:748)
Created at:
	io.netty.util.ResourceLeakDetector.track(ResourceLeakDetector.java:237)
	io.netty.buffer.SimpleLeakAwareByteBuf.unwrappedDerived(SimpleLeakAwareByteBuf.java:143)
	io.netty.buffer.SimpleLeakAwareByteBuf.readRetainedSlice(SimpleLeakAwareByteBuf.java:67)
	io.netty.buffer.AdvancedLeakAwareByteBuf.readRetainedSlice(AdvancedLeakAwareByteBuf.java:103)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readRetainedSlice(ReplayingDecoderByteBuf.java:589)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishPayload(MqttDecoder.java:414)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePayload(MqttDecoder.java:321)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:96)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:1017)
	io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
	io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:299)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	java.lang.Thread.run(Thread.java:748)


clebertsuconic wrote
> Did u see my answer about a bug fixed recently ?
> 
> Can u try a snapshot. Just to validate it
> 
> 
> 
> On Sat, Jun 3, 2017 at 4:25 AM hwaastad <helge@...> wrote:
> 
>> Ok,
>> so in 2.1.0 I quickly get a:
>> 08:17:03,277 ERROR [io.netty.util.ResourceLeakDetector] LEAK:
>> ByteBuf.release() was not called before it's garbage-collected. See
>> http://netty.io/wiki/reference-counted-objects.html for more information.
>>
>> In 2.2.0-sn I still haven't gotten any.
>>
>> I'll keep investigating and update thread.
>>
>> /hw
>>
>>
>>
>> --
>> View this message in context:
>> http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726989.html
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>>
> -- 
> Clebert Suconic





--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726991.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Sorry,
a lot of fuzz now, but can someone please evaluate the following:

Can we add a finally block in channelRead in MQTTProtocolHandler.java to do:

finally {
    if (msg instanceof ByteBufHolder) {
        ((ByteBufHolder) msg).release();
    }
}

This solves my problem.
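
For clarity, here is a self-contained sketch of why that finally block stops the leak. Netty's ByteBufHolder messages are reference-counted, and the MQTT decoder hands channelRead a retained payload slice; if the handler returns (or throws) without calling release(), the count never reaches zero and the direct buffer leaks. The snippet uses a hypothetical MockByteBufHolder stand-in rather than the real io.netty.buffer.ByteBufHolder, so it illustrates the pattern only, not the actual MQTTProtocolHandler code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReleaseInFinally {
    /** Minimal stand-in for a reference-counted message payload. */
    static class MockByteBufHolder {
        final AtomicInteger refCnt = new AtomicInteger(1);
        void release() { refCnt.decrementAndGet(); }
    }

    /** Mirrors the shape of the proposed channelRead change. */
    static void channelRead(Object msg) {
        try {
            // ... dispatch the MQTT message to the broker; may throw ...
        } finally {
            // Runs on every exit path, so the buffer's count always drops.
            if (msg instanceof MockByteBufHolder) {
                ((MockByteBufHolder) msg).release();
            }
        }
    }

    public static void main(String[] args) {
        MockByteBufHolder msg = new MockByteBufHolder();
        channelRead(msg);
        System.out.println("refCnt after channelRead = " + msg.refCnt.get());
    }
}
```

One caveat worth evaluating along with the patch: if any code path already releases the message, an unconditional release in finally would double-release it; Netty's io.netty.util.ReferenceCountUtil.release(msg) is the usual defensive idiom for this.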

/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727547.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
returning to this issue.

If I use qos=1/2, Artemis hangs after the same number of messages, but
does not log anything.
I cannot see this being anything other than a netty issue.

Complete memory leak output:

12:18:06,640 ERROR [io.netty.util.ResourceLeakDetector] LEAK:
ByteBuf.release() was not called before it's garbage-collected. See
http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 18
#18:
	Hint: 'MQTTProtocolHandler#0' will handle the message from this point.

io.netty.handler.codec.mqtt.MqttPublishMessage.touch(MqttPublishMessage.java:99)

io.netty.handler.codec.mqtt.MqttPublishMessage.touch(MqttPublishMessage.java:26)

io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#17:

io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:955)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:256)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#16:

io.netty.buffer.AdvancedLeakAwareByteBuf.readRetainedSlice(AdvancedLeakAwareByteBuf.java:117)

io.netty.handler.codec.ReplayingDecoderByteBuf.readRetainedSlice(ReplayingDecoderByteBuf.java:589)

io.netty.handler.codec.mqtt.MqttDecoder.decodePublishPayload(MqttDecoder.java:416)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePayload(MqttDecoder.java:323)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:96)

io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#15:

io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:591)

io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:92)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:246)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#14:

io.netty.buffer.AdvancedLeakAwareByteBuf.discardSomeReadBytes(AdvancedLeakAwareByteBuf.java:129)

io.netty.handler.codec.ByteToMessageDecoder.discardSomeReadBytes(ByteToMessageDecoder.java:319)

io.netty.handler.codec.ByteToMessageDecoder.channelReadComplete(ByteToMessageDecoder.java:300)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelReadComplete(AbstractChannelHandlerContext.java:409)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelReadComplete(AbstractChannelHandlerContext.java:391)

io.netty.channel.AbstractChannelHandlerContext.fireChannelReadComplete(AbstractChannelHandlerContext.java:384)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelReadComplete(DefaultChannelPipeline.java:1339)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelReadComplete(AbstractChannelHandlerContext.java:409)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelReadComplete(AbstractChannelHandlerContext.java:391)

io.netty.channel.DefaultChannelPipeline.fireChannelReadComplete(DefaultChannelPipeline.java:932)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#13:

io.netty.buffer.AdvancedLeakAwareByteBuf.discardSomeReadBytes(AdvancedLeakAwareByteBuf.java:129)

io.netty.handler.codec.ByteToMessageDecoder.discardSomeReadBytes(ByteToMessageDecoder.java:319)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:262)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#12:

io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:591)

io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:92)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:246)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#11:

io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)

io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:456)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:451)

io.netty.handler.codec.mqtt.MqttDecoder.decodeMessageId(MqttDecoder.java:288)

io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:278)

io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)

io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)

io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)

io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)

io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)

io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)

io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)

io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)

io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)

io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#10:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:455)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:451)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMessageId(MqttDecoder.java:288)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:278)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#9:
	io.netty.buffer.AdvancedLeakAwareByteBuf.skipBytes(AdvancedLeakAwareByteBuf.java:531)
	io.netty.handler.codec.ReplayingDecoderByteBuf.skipBytes(ReplayingDecoderByteBuf.java:850)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:445)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:421)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:270)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#8:
	io.netty.buffer.AdvancedLeakAwareByteBuf.toString(AdvancedLeakAwareByteBuf.java:747)
	io.netty.handler.codec.ReplayingDecoderByteBuf.toString(ReplayingDecoderByteBuf.java:912)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:444)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:421)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:270)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#7:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:456)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:451)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:436)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:421)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:270)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#6:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:455)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeMsbLsb(MqttDecoder.java:451)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:436)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeString(MqttDecoder.java:421)
	io.netty.handler.codec.mqtt.MqttDecoder.decodePublishVariableHeader(MqttDecoder.java:270)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeVariableHeader(MqttDecoder.java:196)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:85)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#5:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeFixedHeader(MqttDecoder.java:156)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:76)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#4:
	io.netty.buffer.AdvancedLeakAwareByteBuf.readUnsignedByte(AdvancedLeakAwareByteBuf.java:405)
	io.netty.handler.codec.ReplayingDecoderByteBuf.readUnsignedByte(ReplayingDecoderByteBuf.java:523)
	io.netty.handler.codec.mqtt.MqttDecoder.decodeFixedHeader(MqttDecoder.java:144)
	io.netty.handler.codec.mqtt.MqttDecoder.decode(MqttDecoder.java:76)
	io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:367)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#3:
	Hint: 'MqttDecoder#0' will handle the message from this point.
	io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:351)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:373)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#2:
	Hint: 'DefaultChannelPipeline$HeadContext#0' will handle the message from this point.
	io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:107)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
#1:
	io.netty.buffer.AdvancedLeakAwareByteBuf.writeBytes(AdvancedLeakAwareByteBuf.java:633)
	io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:366)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:118)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)
Created at:
	io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:271)
	io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:179)
	io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:170)
	io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:131)
	io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:73)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:117)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:651)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:574)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:488)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:450)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
	java.lang.Thread.run(Thread.java:748)

/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727533.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
it didn't change anything. Heap usage is still OK, but the OutOfDirectMemoryError persists.

/hw





--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727069.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Sorry, forgot the stack:

11:26:05,418 WARN  [io.netty.channel.DefaultChannelPipeline] An
exceptionCaught() event was fired, and it reached at the tail of the
pipeline. It usually means the last handler in the pipeline did not handle
the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to
allocate 16777216 byte(s) of direct memory (used: 503818240, max: 514850816)
        at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:585) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:539) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:760) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:736) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:214) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:320) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:181) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:172) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:122) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:624) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:559) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:476) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:438) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_131]
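
For anyone hitting the same cap: the "max: 514850816" figure above is Netty's direct-memory limit, which by default follows the JVM's maximum direct memory (itself derived from -Xmx). A sketch of flags that could be added to JAVA_ARGS in etc/artemis.profile to make the cap explicit; the 1 GB value is illustrative, and raising the cap only buys headroom, it does not fix a leak:

```shell
# Illustrative only: pin the direct-memory cap explicitly.
# -XX:MaxDirectMemorySize caps JDK direct buffers; -Dio.netty.maxDirectMemory
# (value in bytes) caps the counter Netty reports in the warning above.
DIRECT_MEM_ARGS="-XX:MaxDirectMemorySize=1G -Dio.netty.maxDirectMemory=1073741824"
echo "$DIRECT_MEM_ARGS"
```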

/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727072.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Michael André Pearce <mi...@me.com>.
I note you're on a Linux platform and epoll is activated, so the buffers are direct (off-heap). On the acceptors, can you set useEpoll to false (you set this as a URL parameter) and see what this does for you?

This will change the heap memory usage, as the buffers will be on-heap, but it should help in finding the leak.
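
For reference, a sketch of what an acceptor in broker.xml could look like with that parameter; the acceptor name, port, and other parameters here are illustrative, not taken from the original config:

```xml
<!-- Illustrative MQTT acceptor; useEpoll=false falls back to the NIO transport -->
<acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT;useEpoll=false</acceptor>
```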

Sent from my iPhone

> On 6 Jun 2017, at 09:33, hwaastad <he...@waastad.org> wrote:
> 
> Hi,
> did'nt help my setup.
> 
> I've done similar testing with activemq 5.14.2 and I have no issues.
> 
> Testing last 2.2.0-SNAPSHOT, with:
> JAVA_ARGS=" -XX:+PrintClassHistogram -XX:+AggressiveOpts
> -XX:+UseFastAccessorMethods -Xms512M -Xmx512M"
> 
> Used Heap is kept low (max ~147M)
> 
> But still:
> 10:44:56,616 WARN  [io.netty.channel.DefaultChannelPipeline] An
> exceptionCaught() event was fired, and it reached at the tail of the
> pipeline. It usually means the last handler in the pipeline did not handle
> the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to
> allocate 16777216 byte(s) of direct memory (used: 503818240, max: 514850816)
>        at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:585) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:539) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:760) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:736) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocate(PoolArena.java:214) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:320) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:181) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:172) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80) [netty-all-4.1.9.Final.jar:4.1.9.Final]
>        at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71) [netty-all-4.1.9.Final.jar:4.1.9.Final]
> 
> 
> Right now I'll have to rollback to activemq-5.14.2 until this is resolved.
> I'll still try debug in staging.
> 
> /hw
> 
> 
> 
> 
> --
> View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727065.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
didn't help my setup.

I've done similar testing with activemq 5.14.2 and I have no issues.

Testing the latest 2.2.0-SNAPSHOT, with:
JAVA_ARGS=" -XX:+PrintClassHistogram -XX:+AggressiveOpts
-XX:+UseFastAccessorMethods -Xms512M -Xmx512M"

Used Heap is kept low (max ~147M)

But still:
10:44:56,616 WARN  [io.netty.channel.DefaultChannelPipeline] An
exceptionCaught() event was fired, and it reached at the tail of the
pipeline. It usually means the last handler in the pipeline did not handle
the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to
allocate 16777216 byte(s) of direct memory (used: 503818240, max: 514850816)
        at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:585) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:539) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:760) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:736) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:214) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:320) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:181) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:172) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:133) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:80) [netty-all-4.1.9.Final.jar:4.1.9.Final]
        at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:71) [netty-all-4.1.9.Final.jar:4.1.9.Final]


Right now I'll have to roll back to activemq-5.14.2 until this is resolved.
I'll still try to debug in staging.

/hw




--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727065.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by IlarioM <im...@gmail.com>.
Hi,
I've found this bug about Netty UnpooledByteBuffer:

https://groups.google.com/forum/#!topic/netty/Ve4lnRvFXjM

If I've understood it correctly, disabling the Netty recyclers should solve the problem:

-Dio.netty.recycler.maxCapacity=0 -Dio.netty.recycler.maxCapacity.default=0
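
If anyone wants to try this, a sketch of where the flags could go in the instance's etc/artemis.profile; the heap sizes here are just the ones used elsewhere in this thread, not a recommendation:

```shell
# Illustrative: recycler flags appended to the stock JAVA_ARGS in etc/artemis.profile
JAVA_ARGS="-Xms512M -Xmx512M -Dio.netty.recycler.maxCapacity=0 -Dio.netty.recycler.maxCapacity.default=0"
echo "$JAVA_ARGS"
```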



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727030.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
yes, I'm working my way through the Artemis integration tests to model my own
for testing.

What I do know now is that many of the MQTT messages are ~22 KB in size.

So what I did to reproduce:
1. artemis create <instance>
2. set any spec in artemis.profile (e.g. Xmx/Xms=1024M)
3. make a 22 KB test file
4. run a JMS consumer on <MyQueue>
5. for (( c=1;c<100000;c++ )) do echo "Running $c....."; mosquitto_pub -t MyQueue -f ./data.txt; done
6. observe with VisualVM.
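
The payload for step 3 can be generated with something like this (a sketch; the 22528-byte size is just "22k" made exact, and mosquitto_pub must be installed separately):

```shell
# Generate a ~22 KB test payload for mosquitto_pub
head -c 22528 /dev/urandom | base64 | head -c 22528 > data.txt
wc -c < data.txt

# Publish loop from step 5 (commented out; requires a running broker):
# for (( c=1; c<100000; c++ )); do mosquitto_pub -t MyQueue -f ./data.txt; done
```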

After a while (and at exactly the same count) Artemis stops accepting
packets, no more messages are sent, and log statements like:
io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was
fired, and it reached at the tail of the pipeline. It usually means the last
handler in the pipeline did not handle the exception.:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216
byte(s) of direct memory (used: 520595456, max: 536870912)

loop at incredible speed.

MessagesAcknowledged and MessagesAdded are the same for the testqueue.

Not sure if it matters, but I can run, say, 50% of the critical count,
trigger a GC on Artemis (via VisualVM) and start another test. I would expect
it to survive ~100% again, but it breaks down at ~50%. To me this signals a memory leak.

/hw





--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727028.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Martyn Taylor <mt...@redhat.com>.
A reproducer is the best way for us to help diagnose/fix the issue.
Failing that, are you able to profile the broker to see which objects are
getting leaked?

There are a couple of MQTT-specific things to check, such as retained
messages and session state. The life cycle of these is controlled by the
client, and they will essentially hang around forever unless the client
clears them. There are a couple of other gotchas that might cause the
broker to start running out of memory (provided you haven't configured a
PAGE/BLOCK policy), e.g. a slow consumer on an address with lots of
traffic such as "/".
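
For anyone checking that last point, a sketch of what a PAGE policy in broker.xml looks like; the match and sizes below are illustrative, not a recommendation:

```xml
<address-settings>
   <!-- Illustrative: page messages to disk once an address holds more than 10 MB -->
   <address-setting match="#">
      <max-size-bytes>10485760</max-size-bytes>
      <page-size-bytes>1048576</page-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>
```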

If you can't reproduce it in a test, could you provide more info on the use
case and include your broker config?

Thanks

On Mon, Jun 5, 2017 at 2:18 AM, Clebert Suconic <cl...@gmail.com>
wrote:

> Look especially for how you ack your messages, how you subscribe..  if
> you're not leaking consumers... etc...
>
> On Sun, Jun 4, 2017 at 4:15 PM, hwaastad <he...@waastad.org> wrote:
> > Hi,
> >  i'll see if i can produce some kind of test of the scenario i'm using.
> >
> > /hw
> >
> > Sendt fra min iPhone
> >
> >> On 4 Jun 2017, at 15:37, clebertsuconic [via ActiveMQ] <
ml+s2283324n4726996h67@n4.nabble.com> wrote:
> >>
> >> Any way you could replicate it on an example?
> >>
> >> Or test?
> >>
> >>
> >>
> >> On Sun, Jun 4, 2017 at 9:38 AM hwaastad <[hidden email]> wrote:
> >>
> >> > Hi,
> >> > ran 2.2.0-snapshot from yesterday until today.
> >> >
> >> > To see if I could get the issue earlier I added: -Xms1024M -Xmx1024M.
> >> >
> >> > So after ~24hours, the memory leak happens again. and disk is full.
> >> >
> >> > /hw
> >> >
> >> >
> >> >
> >> > --
> >> > View this message in context:
> >> > http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-
> tp4726922p4726995.html
> >> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
> >> >
> >> --
> >> Clebert Suconic
> >>
> >>
> >> If you reply to this email, your message will be added to the
> discussion below:
> >> http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-
> tp4726922p4726996.html
> >> To unsubscribe from artemis 2.1.0 logging disaster, click here.
> >> NAML
> >
> >
> >
> >
> > --
> > View this message in context: http://activemq.2283324.n4.
> nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727000.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
>
>
> --
> Clebert Suconic
>

Re: artemis 2.2.0 logging disaster

Posted by Clebert Suconic <cl...@gmail.com>.
Look especially at how you ack your messages and how you subscribe...
check that you're not leaking consumers, etc.

On Sun, Jun 4, 2017 at 4:15 PM, hwaastad <he...@waastad.org> wrote:
> Hi,
>  i'll see if i can produce some kind of test of the scenario i'm using.
>
> /hw
>
> Sent from my iPhone
>
>> On 4 Jun 2017, at 15:37, clebertsuconic [via ActiveMQ] <ml...@n4.nabble.com> wrote:
>>
>> Any way you could replicate it on an example?
>>
>> Or test?
>>
>>
>>
>> On Sun, Jun 4, 2017 at 9:38 AM hwaastad <[hidden email]> wrote:
>>
>> > Hi,
>> > ran 2.2.0-snapshot from yesterday until today.
>> >
>> > To see if I could get the issue earlier I added: -Xms1024M -Xmx1024M.
>> >
>> > So after ~24hours, the memory leak happens again. and disk is full.
>> >
>> > /hw
>> >
>> >
>> >
>> > --
>> > View this message in context:
>> > http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726995.html
>> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>> >
>> --
>> Clebert Suconic
>>
>>
>
>
>
>
> --
> View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727000.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.



-- 
Clebert Suconic

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
I'll see if I can produce some kind of test of the scenario I'm using.

/hw

Sent from my iPhone

> On 4 Jun 2017, at 15:37, clebertsuconic [via ActiveMQ] <ml...@n4.nabble.com> wrote:
> 
> Any way you could replicate it on an example? 
> 
> Or test? 
> 
> 
> 
> On Sun, Jun 4, 2017 at 9:38 AM hwaastad <[hidden email]> wrote: 
> 
> > Hi, 
> > ran 2.2.0-snapshot from yesterday until today. 
> > 
> > To see if I could get the issue earlier I added: -Xms1024M -Xmx1024M. 
> > 
> > So after ~24hours, the memory leak happens again. and disk is full. 
> > 
> > /hw 
> > 
> > 
> > 
> > -- 
> > View this message in context: 
> > http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726995.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com. 
> >
> -- 
> Clebert Suconic 
> 
> 




--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4727000.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Clebert Suconic <cl...@gmail.com>.
Any way you could replicate it on an example?

Or test?



On Sun, Jun 4, 2017 at 9:38 AM hwaastad <he...@waastad.org> wrote:

> Hi,
> ran 2.2.0-snapshot from yesterday until today.
>
> To see if I could get the issue earlier I added: -Xms1024M -Xmx1024M.
>
> So after ~24hours, the memory leak happens again. and disk is full.
>
> /hw
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726995.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
-- 
Clebert Suconic

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi,
I ran the 2.2.0 snapshot from yesterday until today.

To see if I could trigger the issue earlier I added: -Xms1024M -Xmx1024M.

So after ~24 hours, the memory leak happens again and the disk is full.

/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726995.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Clebert Suconic <cl...@gmail.com>.
Did you see my answer about a bug that was fixed recently?

Can you try a snapshot, just to validate it?



On Sat, Jun 3, 2017 at 4:25 AM hwaastad <he...@waastad.org> wrote:

> Ok,
> so in 2.1.0 I quickly get a:
> 08:17:03,277 ERROR [io.netty.util.ResourceLeakDetector] LEAK:
> ByteBuf.release() was not called before it's garbage-collected. See
> http://netty.io/wiki/reference-counted-objects.html for more information.
>
> In 2.2.0-sn I still have'nt gotten any.
>
> I'll keep investigating and update thread.
>
> /hw
>
>
>
> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726989.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
>
-- 
Clebert Suconic

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Ok,
so in 2.1.0 I quickly get a:
08:17:03,277 ERROR [io.netty.util.ResourceLeakDetector] LEAK:
ByteBuf.release() was not called before it's garbage-collected. See
http://netty.io/wiki/reference-counted-objects.html for more information.

In the 2.2.0-SNAPSHOT I still haven't gotten any.

I'll keep investigating and update the thread.

/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726989.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Helge Waastad <he...@waastad.org>.
Hi,
We do dev in prod 😉

Since posting I've actually built the 2.2.0-SNAPSHOT from yesterday, and I'll see if it makes any difference.

I'll add the leak-detection option if needed.

Thanks.

/hw

Sent from my iPhone

> On 3 Jun 2017, at 09:19, Michael André Pearce <mi...@me.com> wrote:
> 
> If you have the ability (not sure if this is a dev or a prod env) as clebert mentions at there are some changes in master for MQTT protocol since 2.1.0 release, if you have any chance to build and deploy master?
> 
> On the netty front there's a system property 
> -Dio.netty.leakDetection.level=advanced
> If you add this and rerun, this will output if there's any leak. To eliminate or prove that one.
> 
> Cheers
> Mike
> 
> 
> Sent from my iPhone
> 
>> On 3 Jun 2017, at 07:18, hwaastad <he...@waastad.org> wrote:
>> 
>> Hi guys,
>> and thx for answering.
>> 
>> a quick summary of my setup:
>> 1. Its 2.1.0 so I'm not running any snapshot
>> 2. The load is pretty constant.(performance checks from nagios), approx
>> 1.1GB data per month, 2.5 mill entries over MQTT.
>> 3. We've been running activemq with virtualdestinations (mqtt->jms) earlier
>> and never had any issues.
>> 4. Memory usage is increasing. I'll see if I can get more stats.
>> 5. I did change my setup yesterday, skipping clustering and running a
>> standalone instance. just to see if clustering was the issue.
>> 6. If there are mem leaks in netty, is there any workaround?
>> 
>> 
>> /hw
>> 
>> 
>> 
>> --
>> View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726986.html
>> Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: artemis 2.2.0 logging disaster

Posted by Michael André Pearce <mi...@me.com>.
If you have the ability (not sure if this is a dev or a prod env): as clebert mentions, there are some changes in master for the MQTT protocol since the 2.1.0 release, so is there any chance you could build and deploy master?

On the netty front there's a system property:
-Dio.netty.leakDetection.level=advanced
If you add this and rerun, it will report any leak, to eliminate or prove that possibility.
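If changing the container's JVM arguments is awkward, the same switch can be set programmatically. A sketch, assuming it runs before any netty class is loaded (netty reads the property once, in a static initializer):

```java
public class EnableLeakDetection {
    public static void main(String[] args) {
        // Must run before the first netty class initializes, e.g. first thing in main().
        System.setProperty("io.netty.leakDetection.level", "advanced");

        // ... start the broker/client here; leaked ByteBufs will then be reported
        // by io.netty.util.ResourceLeakDetector with access records.
        System.out.println(System.getProperty("io.netty.leakDetection.level"));
    }
}
```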

Cheers
Mike


Sent from my iPhone

> On 3 Jun 2017, at 07:18, hwaastad <he...@waastad.org> wrote:
> 
> Hi guys,
> and thx for answering.
> 
> a quick summary of my setup:
> 1. Its 2.1.0 so I'm not running any snapshot
> 2. The load is pretty constant.(performance checks from nagios), approx
> 1.1GB data per month, 2.5 mill entries over MQTT.
> 3. We've been running activemq with virtualdestinations (mqtt->jms) earlier
> and never had any issues.
> 4. Memory usage is increasing. I'll see if I can get more stats.
> 5. I did change my setup yesterday, skipping clustering and running a
> standalone instance. just to see if clustering was the issue.
> 6. If there are mem leaks in netty, is there any workaround?
> 
> 
> /hw
> 
> 
> 
> --
> View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726986.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by hwaastad <he...@waastad.org>.
Hi guys,
and thanks for answering.

A quick summary of my setup:
1. It's 2.1.0, so I'm not running any snapshot.
2. The load is pretty constant (performance checks from Nagios): approx
1.1 GB of data per month, 2.5 million entries over MQTT.
3. We've been running ActiveMQ with virtual destinations (MQTT -> JMS) earlier
and never had any issues.
4. Memory usage is increasing. I'll see if I can get more stats.
5. I changed my setup yesterday, skipping clustering and running a
standalone instance, just to see if clustering was the issue.
6. If there are memory leaks in netty, is there any workaround?
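On question 6: there is no real workaround for a genuine leak, but netty's allocator can be tuned so direct memory is not held in large pooled chunks. A sketch under the assumption that these netty 4.x system properties are set before any netty class loads; they trade some throughput for stability and only delay, not fix, an actual leak:

```java
public class NettyAllocatorTuning {
    public static void main(String[] args) {
        // Use unpooled buffers instead of the pooled arena allocator,
        // so memory is released per-buffer rather than held in 16 MiB chunks.
        System.setProperty("io.netty.allocator.type", "unpooled");

        // Prefer heap buffers over direct memory where netty allows a choice.
        System.setProperty("io.netty.noPreferDirect", "true");

        System.out.println(System.getProperty("io.netty.allocator.type"));
    }
}
```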


/hw



--
View this message in context: http://activemq.2283324.n4.nabble.com/artemis-2-1-0-logging-disaster-tp4726922p4726986.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

Re: artemis 2.2.0 logging disaster

Posted by Tim Bain <tb...@alumni.duke.edu>.
A quick Google search this morning led me to believe that this could be a
memory leak in the Netty code. There are several recent bugs in the Netty
JIRA that might be relevant.

Tim

On Jun 2, 2017 1:27 PM, "Clebert Suconic" <cl...@gmail.com> wrote:

> There was a recent bug fix on MQTT. are you using a snapshot after the fix?
>
> On Fri, Jun 2, 2017 at 12:59 PM, Michael André Pearce
> <mi...@me.com> wrote:
> > Also sorry just one other question.
> >
> > Does this occur with 2.1.0?
> >
> > Sent from my iPhone
> >
> >> On 2 Jun 2017, at 17:57, Michael André Pearce <
> michael.andre.pearce@me.com> wrote:
> >>
> >> Essentially just from this log output I assume the server your broker
> is running out of ram to use.
> >> This can be either
> >> A) genuine memory leak in artemis
> >> B) you simply don't have enough ram for the load/throughout.
> >>
> >> Some questions:
> >>
> >> Is the load constant?
> >> Do you have server ram usage metrics available?
> >>
> >> You should ensure there is more ram available to the broker instance
> than just heap allocation, for network buffers etc.
> >>
> >> Cheers
> >> Mike
> >>
> >>
> >>
> >> Sent from my iPhone
> >>
> >>> On 2 Jun 2017, at 09:44, Helge Waastad <he...@waastad.org> wrote:
> >>>
> >>> Hi,
> >>> I'm running artemis 2.2.0 as a docker container.
> >>>
> >>> I'm collecting MQTT messages an these are consumed by a JMS consumer
> >>> (artemis-jms-client)
> >>>
> >>> It's running fine for a while, but suddenly this appear (docker *-
> >>> json.log):
> >>>
> >>> {"log":"19:16:12,338 WARN
> >>> [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection]
> >>> Trying to allocate 712 bytes, System is throwing OutOfMemoryError on
> >>> NettyConnection org.apache.activemq.art
> >>> emis.core.remoting.impl.netty.NettyServerConnection@6f035b0a[local=
> >>> /10.42.154.105:61616, remote=/10.42.21.198:40844], there are currently
> >>> pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: fail
> >>> ed to allocate 16777216 byte(s) of direct memory (used: 1057466368,
> >>> max:
> >>> 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to
> >>> allocate 16777216 byte(s) of direct memory (used: 1057466368, m
> >>> ax:
> >>> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> >>> 01T19:16:12.342853929Z"}
> >>> {"log":"19:16:12,342 WARN  [org.apache.activemq.artemis.core.server]
> >>> AMQ222151: removing consumer which did not handle a message,
> >>> consumer=ServerConsumerImpl [id=0, filter=null,
> >>> binding=LocalQueueBinding [a
> >>> ddress=CentreonTopic, queue=QueueImpl[name=CentreonTopic,
> >>> postOffice=PostOfficeImpl
> >>> [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-
> >>> 02a837635b7b],
> >>> temp=false]@3e389a2d, filter=null, name=Cent
> >>> reonTopic,
> >>> clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]],
> >>> message=Reference[715739]:NON-
> >>> RELIABLE:CoreMessage[messageID=715739,durable=
> false,userID=null,priorit
> >>> y=0,
> >>> timestamp=0,expiration=0
> >>> , durable=false,
> >>> address=CentreonTopic,properties=TypedProperties[
> mqtt.message.retain=fa
> >>> lse,mqtt.qos.level=0]]@1623021181:
> >>> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate
> >>> 16777216 byte(s)
> >>> of direct memory (used: 1057466368, max:
> >>> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> >>> 01T19:16:12.347107296Z"}
> >>> {"log":"19:31:54,236 WARN
> >>> [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection]
> >>> Trying to allocate 548 bytes, System is throwing OutOfMemoryError on
> >>> NettyConnection org.apache.activemq.art
> >>> emis.core.remoting.impl.netty.NettyServerConnection@7b18e1a6[local=
> >>> /10.42.154.105:61616, remote=/10.42.162.183:48376], there are
> >>> currently
> >>> pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: fai
> >>> led to allocate 16777216 byte(s) of direct memory (used: 1057466368,
> >>> max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError:
> >>> failed
> >>> to allocate 16777216 byte(s) of direct memory (used: 1057466368,
> >>> max:
> >>> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> >>> 01T19:31:54.238904544Z"}
> >>> {"log":"19:31:54,238 WARN  [org.apache.activemq.artemis.core.server]
> >>> AMQ222151: removing consumer which did not handle a message,
> >>> consumer=ServerConsumerImpl [id=0, filter=null,
> >>> binding=LocalQueueBinding [a
> >>> ddress=CentreonTopic, queue=QueueImpl[name=CentreonTopic,
> >>> postOffice=PostOfficeImpl
> >>> [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-
> >>> 02a837635b7b],
> >>> temp=false]@3e389a2d, filter=null, name=Cent
> >>> reonTopic,
> >>> clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]],
> >>> message=Reference[722892]:NON-
> >>> RELIABLE:CoreMessage[messageID=722892,durable=
> false,userID=null,priorit
> >>> y=0,
> >>> timestamp=0,expiration=0
> >>> , durable=false,
> >>> address=CentreonTopic,properties=TypedProperties[
> mqtt.message.retain=fa
> >>> lse,mqtt.qos.level=0]]@1252621657:
> >>> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate
> >>> 16777216 byte(s)
> >>> of direct memory (used: 1057466368, max:
> >>> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> >>> 01T19:31:54.239955162Z"}
> >>>
> >>>
> >>>
> >>> Then after a couple of hours:
> >>>
> >>> {"log":"23:22:24,013 WARN  [io.netty.channel.DefaultChannelPipeline]
> >>> An
> >>> exceptionCaught() event was fired, and it reached at the tail of the
> >>> pipeline. It usually means the last handler in the pipeline did n
> >>> ot handle the exception.:
> >>> io.netty.util.internal.OutOfDirectMemoryError:
> >>> failed to allocate 16777216 byte(s) of direct memory (used:
> >>> 1057466368,
> >>> max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23
> >>> :22:24.015087347Z"}
> >>> {"log":"23:22:24,014 WARN  [io.netty.channel.DefaultChannelPipeline]
> >>> An
> >>> exceptionCaught() event was fired, and it reached at the tail of the
> >>> pipeline. It usually means the last handler in the pipeline did n
> >>> ot handle the exception.:
> >>> io.netty.util.internal.OutOfDirectMemoryError:
> >>> failed to allocate 16777216 byte(s) of direct memory (used:
> >>> 1057466368,
> >>> max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23
> >>> :22:24.015759902Z"}
> >>> {"log":"23:22:24,015 WARN  [io.netty.channel.DefaultChannelPipeline]
> >>> An
> >>> exceptionCaught() event was fired, and it reached at the tail of the
> >>> pipeline. It usually means the last handler in the pipeline did n
> >>> ot handle the exception.:
> >>> io.netty.util.internal.OutOfDirectMemoryError:
> >>> failed to allocate 16777216 byte(s) of direct memory (used:
> >>> 1057466368,
> >>> max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23
> >>> :22:24.016623101Z"}
> >>>
> >>>
> >>> And this message is looping and in 5 mins it's filled my 12GB drive.
> >>>
> >>> Any clues what to do? I'll do some more debugging.
> >>>
> >>> /hw
>
>
>
> --
> Clebert Suconic
>

Re: artemis 2.2.0 logging disaster

Posted by Clebert Suconic <cl...@gmail.com>.
There was a recent bug fix in MQTT. Are you using a snapshot that includes the fix?

On Fri, Jun 2, 2017 at 12:59 PM, Michael André Pearce
<mi...@me.com> wrote:
> Also sorry just one other question.
>
> Does this occur with 2.1.0?
>
> Sent from my iPhone
>
>> On 2 Jun 2017, at 17:57, Michael André Pearce <mi...@me.com> wrote:
>>
>> Essentially just from this log output I assume the server your broker is running out of ram to use.
>> This can be either
>> A) genuine memory leak in artemis
>> B) you simply don't have enough ram for the load/throughout.
>>
>> Some questions:
>>
>> Is the load constant?
>> Do you have server ram usage metrics available?
>>
>> You should ensure there is more ram available to the broker instance than just heap allocation, for network buffers etc.
>>
>> Cheers
>> Mike
>>
>>
>>
>> Sent from my iPhone
>>
>>> On 2 Jun 2017, at 09:44, Helge Waastad <he...@waastad.org> wrote:
>>>
>>> Hi,
>>> I'm running artemis 2.2.0 as a docker container.
>>>
>>> I'm collecting MQTT messages an these are consumed by a JMS consumer
>>> (artemis-jms-client)
>>>
>>> It's running fine for a while, but suddenly this appear (docker *-
>>> json.log):
>>>
>>>
>>>
>>> And this message is looping and in 5 mins it's filled my 12GB drive.
>>>
>>> Any clues what to do? I'll do some more debugging.
>>>
>>> /hw



-- 
Clebert Suconic

Re: artemis 2.2.0 logging disaster

Posted by Michael André Pearce <mi...@me.com>.
Also, sorry, just one other question.

Does this occur with 2.1.0? 

Sent from my iPhone

> On 2 Jun 2017, at 17:57, Michael André Pearce <mi...@me.com> wrote:
> 
> Essentially just from this log output I assume the server your broker is running out of ram to use. 
> This can be either 
> A) genuine memory leak in artemis
> B) you simply don't have enough ram for the load/throughout.
> 
> Some questions:
> 
> Is the load constant? 
> Do you have server ram usage metrics available?
> 
> You should ensure there is more ram available to the broker instance than just heap allocation, for network buffers etc.
> 
> Cheers
> Mike
> 
> 
> 
> Sent from my iPhone
> 
>> On 2 Jun 2017, at 09:44, Helge Waastad <he...@waastad.org> wrote:
>> 
>> Hi,
>> I'm running artemis 2.2.0 as a docker container.
>> 
>> I'm collecting MQTT messages an these are consumed by a JMS consumer
>> (artemis-jms-client)
>> 
>> It's running fine for a while, but suddenly this appear (docker *-
>> json.log):
>> 
>> 
>> 
>> And this message is looping and in 5 mins it's filled my 12GB drive.
>> 
>> Any clues what to do? I'll do some more debugging.
>> 
>> /hw

Re: artemis 2.2.0 logging disaster

Posted by Michael André Pearce <mi...@me.com>.
Essentially, just from this log output, I assume the server running your broker is running out of RAM.
This can be either:
A) a genuine memory leak in artemis, or
B) you simply don't have enough RAM for the load/throughput.

Some questions:

Is the load constant?
Do you have server RAM usage metrics available?

You should ensure there is more RAM available to the broker instance than just the heap allocation, for network buffers etc.
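The numbers in the log are consistent with that reading. A quick check, assuming the JVM's default direct-memory cap (roughly equal to -Xmx, here 1 GiB) and netty's pooled allocator requesting 16 MiB chunks:

```java
public class DirectMemoryMath {
    public static void main(String[] args) {
        long max   = 1_073_741_824L; // "max: 1073741824" from the log
        long used  = 1_057_466_368L; // "used: 1057466368"
        long chunk = 16_777_216L;    // "failed to allocate 16777216 byte(s)"

        System.out.println(max == 1L << 30);    // the cap is exactly 1 GiB
        System.out.println(chunk == 16L << 20); // each request is one 16 MiB pool chunk
        // Even a tiny write (712 bytes in the log) fails, because the pooled
        // allocator needs a whole new chunk and that chunk no longer fits:
        System.out.println(used + chunk > max);
    }
}
```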

Cheers
Mike



Sent from my iPhone

> On 2 Jun 2017, at 09:44, Helge Waastad <he...@waastad.org> wrote:
> 
> Hi,
> I'm running artemis 2.2.0 as a docker container.
> 
> I'm collecting MQTT messages an these are consumed by a JMS consumer
> (artemis-jms-client)
> 
> It's running fine for a while, but suddenly this appear (docker *-
> json.log):
> 
> {"log":"19:16:12,338 WARN 
> [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] 
> Trying to allocate 712 bytes, System is throwing OutOfMemoryError on 
> NettyConnection org.apache.activemq.art
> emis.core.remoting.impl.netty.NettyServerConnection@6f035b0a[local= 
> /10.42.154.105:61616, remote=/10.42.21.198:40844], there are currently 
> pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: fail
> ed to allocate 16777216 byte(s) of direct memory (used: 1057466368,
> max: 
> 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to 
> allocate 16777216 byte(s) of direct memory (used: 1057466368, m
> ax: 
> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> 01T19:16:12.342853929Z"}
> {"log":"19:16:12,342 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222151: removing consumer which did not handle a message, 
> consumer=ServerConsumerImpl [id=0, filter=null,
> binding=LocalQueueBinding [a
> ddress=CentreonTopic, queue=QueueImpl[name=CentreonTopic, 
> postOffice=PostOfficeImpl 
> [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-
> 02a837635b7b], 
> temp=false]@3e389a2d, filter=null, name=Cent
> reonTopic, 
> clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], 
> message=Reference[715739]:NON-
> RELIABLE:CoreMessage[messageID=715739,durable=false,userID=null,priorit
> y=0, 
> timestamp=0,expiration=0
> , durable=false, 
> address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=fa
> lse,mqtt.qos.level=0]]@1623021181: 
> io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 
> 16777216 byte(s)
> of direct memory (used: 1057466368, max: 
> 1073741824)\r\n","stream":"stdout","time":"2017-06-
> 01T19:16:12.347107296Z"}
> {"log":"19:31:54,236 WARN [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 548 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@7b18e1a6[local=/10.42.154.105:61616, remote=/10.42.162.183:48376], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.238904544Z"}
> {"log":"19:31:54,238 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[722892]:NON-RELIABLE:CoreMessage[messageID=722892,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1252621657: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.239955162Z"}
> 
> 
> 
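> FWIW the numbers in those WARN lines add up: the direct-memory pool is at 1057466368 of a 1073741824 byte (1 GiB) cap, and Netty is asking for a 16777216 byte (16 MiB) chunk, which no longer fits:

```shell
# Quick sanity check on the figures in the WARN above: how much direct
# memory is left under the 1 GiB cap, vs the 16 MiB chunk being requested.
echo $((1073741824 - 1057466368))              # headroom left, in bytes
echo $((1073741824 - 1057466368 < 16777216))   # 1 means the 16 MiB chunk cannot fit
```

> So once usage creeps within 16 MiB of the cap, every further pooled allocation fails, which would match the repeating WARNs.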
> Then after a couple of hours:
> 
> {"log":"23:22:24,013 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015087347Z"}
> {"log":"23:22:24,014 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015759902Z"}
> {"log":"23:22:24,015 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.016623101Z"}
> 
> 
> And this message keeps looping; within 5 minutes it had filled my 12 GB drive.
> 
> Any clues what to do? I'll do some more debugging.
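> In the meantime I'll probably cap the container's log size so a looping WARN can't fill the disk again. An untested sketch (the image name below is just a placeholder; the --log-opt flags are standard options of Docker's json-file logging driver):

```shell
# Untested stopgap: rotate the docker json-file logs so a repeating WARN
# cannot fill the disk. "my-artemis-image" is a placeholder image name.
docker run -d \
  --log-driver json-file \
  --log-opt max-size=50m \
  --log-opt max-file=3 \
  my-artemis-image
```

> The same limits can go under "log-opts" in /etc/docker/daemon.json to apply daemon-wide.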
> 
> /hw