Posted to issues@activemq.apache.org by "clebert suconic (JIRA)" <ji...@apache.org> on 2017/10/18 12:14:00 UTC

[jira] [Reopened] (ARTEMIS-450) Deadlocked broker

     [ https://issues.apache.org/jira/browse/ARTEMIS-450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

clebert suconic reopened ARTEMIS-450:
-------------------------------------
      Assignee: clebert suconic

> Deadlocked broker
> -----------------
>
>                 Key: ARTEMIS-450
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-450
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP, Broker
>    Affects Versions: 1.2.0
>            Reporter: Gordon Sim
>            Assignee: clebert suconic
>             Fix For: 2.4.0
>
>         Attachments: stack-dump.txt, thread-dump-1.3.txt
>
>
> Not sure exactly how it came about; I noticed it when trying to shut down the broker. The log has:
> {noformat}
> 21:43:17,985 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:18,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:19,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:20,986 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:28,928 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:43:45,937 WARN  [org.apache.activemq.artemis.core.server] AMQ222174: Queue examples, on address=myqueue, is taking too long to flush deliveries. Watch out for frozen clients.
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119014: Did not receive data from /127.0.0.1:51232. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]
> 21:44:18,698 WARN  [org.apache.activemq.artemis.core.server] AMQ222061: Client connection failed, clearing up resources for session ebd714e5-efad-11e5-83fc-fe540024bf8d
> Exception in thread "Thread-0 (ActiveMQ-AIO-poller-pool2081191879-2061347276)" java.lang.Error: java.io.IOException: Error while submitting IO: Interrupted system call
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1148)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Error while submitting IO: Interrupted system call
> 	at org.apache.activemq.artemis.jlibaio.LibaioContext.blockedPoll(Native Method)
> 	at org.apache.activemq.artemis.jlibaio.LibaioContext.poll(LibaioContext.java:360)
> 	at org.apache.activemq.artemis.core.io.aio.AIOSequentialFileFactory$PollerRunnable.run(AIOSequentialFileFactory.java:355)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	... 2 more
> {noformat}
> I'll attach a thread dump in which you can see that Thread-10 has locked the handler lock in AbstractConnectionContext
> (part of the 'proton plug') and is itself blocked on the lock in
> ServerConsumerImpl, which is held by Thread-21. Thread-21 is in turn waiting
> for the write lock on the deliveryLock in ServerConsumerImpl. However,
> Thread-20 already holds a read lock on that deliveryLock and is blocked (while
> holding the read lock) on the same handler lock within the proton plug
> (object 0x00000000f3d2bd90) that Thread-10 has locked, completing the cycle.
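> The cycle can be reproduced in isolation with three threads and three locks. Below is a minimal, self-contained sketch; the lock and thread names are illustrative stand-ins for the AbstractConnectionContext handler lock, the ServerConsumerImpl monitor and the ServerConsumerImpl deliveryLock, not the actual Artemis fields or code paths:
> {noformat}
> import java.util.concurrent.CountDownLatch;
> import java.util.concurrent.locks.ReentrantReadWriteLock;
> 
> public class DeadlockSketch {
> 
>     static final Object handlerLock = new Object();           // stand-in for the proton-plug handler lock
>     static final Object consumerLock = new Object();          // stand-in for the ServerConsumerImpl monitor
>     static final ReentrantReadWriteLock deliveryLock = new ReentrantReadWriteLock();
> 
>     public static void main(String[] args) {
>         CountDownLatch allLocksHeld = new CountDownLatch(3);
> 
>         // "Thread-10": holds the handler lock, then needs the consumer lock.
>         Thread t10 = new Thread(() -> {
>             synchronized (handlerLock) {
>                 await(allLocksHeld);
>                 synchronized (consumerLock) { /* never reached */ }
>             }
>         }, "Thread-10");
> 
>         // "Thread-21": holds the consumer lock, then needs the delivery write lock.
>         Thread t21 = new Thread(() -> {
>             synchronized (consumerLock) {
>                 await(allLocksHeld);
>                 deliveryLock.writeLock().lock();               // blocked by Thread-20's read lock
>             }
>         }, "Thread-21");
> 
>         // "Thread-20": holds a delivery read lock, then needs the handler lock.
>         Thread t20 = new Thread(() -> {
>             deliveryLock.readLock().lock();
>             await(allLocksHeld);
>             synchronized (handlerLock) { /* never reached */ }
>         }, "Thread-20");
> 
>         t10.start(); t21.start(); t20.start();
>     }
> 
>     // Each thread counts down once its first lock is held, then waits for the
>     // other two, so the cycle is guaranteed to form before anyone proceeds.
>     private static void await(CountDownLatch latch) {
>         latch.countDown();
>         try { latch.await(); } catch (InterruptedException ignored) { }
>     }
> }
> {noformat}
> Note that because the read side of a ReentrantReadWriteLock records no owner, jstack's automatic deadlock detection generally cannot report a cycle that passes through a held read lock; it has to be pieced together from the individual stack traces, as above.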



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)