Posted to issues@activemq.apache.org by "Timothy Bish (JIRA)" <ji...@apache.org> on 2015/07/15 19:20:04 UTC

[jira] [Commented] (AMQCPP-569) thread creation leak with failover transport after disconnect

    [ https://issues.apache.org/jira/browse/AMQCPP-569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14628381#comment-14628381 ] 

Timothy Bish commented on AMQCPP-569:
-------------------------------------

The resources will eventually get cleaned up; once a connection is made there is time for the close task to catch up.  The closer attempts to be lazy when closing out failed Transport instances, but it is apparently a bit too lazy and gets behind because the reconnect task hogs the background resources.
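
To illustrate the effect, here is a toy model (plain C++11 threads only, not the actual FailoverTransport or task-runner code): a single background worker is shared by a reconnect task and the lazy close task, so while the broker is down the reconnect attempts monopolize the worker and the queued closes, one per dead transport, only drain once a connect finally succeeds.

{noformat}
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>

// One background thread draining a FIFO of tasks (a stand-in for the shared
// task runner that services both reconnect and close work).
class SerialWorker {
public:
    SerialWorker() : done(false), worker(&SerialWorker::run, this) {}

    ~SerialWorker() {
        {
            std::lock_guard<std::mutex> lock(mutex);
            done = true;
        }
        condition.notify_all();
        worker.join();
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex);
            tasks.push_back(task);
        }
        condition.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex);
                condition.wait(lock, [this] { return done || !tasks.empty(); });
                if (tasks.empty()) {
                    return;  // shut down and nothing left to run
                }
                task = tasks.front();
                tasks.pop_front();
            }
            task();
        }
    }

    std::mutex mutex;
    std::condition_variable condition;
    std::deque<std::function<void()> > tasks;
    bool done;
    std::thread worker;
};

int main() {
    std::atomic<int> pendingCloses(0);
    SerialWorker worker;

    // While the broker is down, the reconnect task occupies the worker for the
    // full duration of every failing connect attempt and leaves one dead
    // transport behind each time; the close tasks it queues can only run once
    // a connect succeeds and the reconnect task stops resubmitting work.
    worker.submit([&] {
        for (int attempt = 0; attempt < 5; ++attempt) {
            std::this_thread::sleep_for(std::chrono::milliseconds(100));  // failing connect
            ++pendingCloses;                                              // another dead transport
            worker.submit([&] { --pendingCloses; });                      // lazy close, queued behind us
            std::cout << "attempt " << attempt
                      << " failed, transports awaiting close: " << pendingCloses << std::endl;
        }
        // the next attempt "succeeds": the reconnect task returns and the closer catches up
    });

    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "after reconnect settles, transports awaiting close: " << pendingCloses << std::endl;
    return 0;
}
{noformat}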

> thread creation leak with failover transport after disconnect
> -------------------------------------------------------------
>
>                 Key: AMQCPP-569
>                 URL: https://issues.apache.org/jira/browse/AMQCPP-569
>             Project: ActiveMQ C++ Client
>          Issue Type: Bug
>    Affects Versions: 3.8.4
>         Environment: CentOS Linux 7, example activemq-cpp client from the website
>            Reporter: Arthur Naseef
>            Assignee: Timothy Bish
>
> As reported here: http://activemq.2283324.n4.nabble.com/ActiveMQ-High-number-of-threads-td4693185.html#a4693477
> The following steps lead to a thread leak:
> * Start client with failover transport with at least 2 brokers in the URL (see the URI form just below the steps)
> * Wait for the client to successfully connect to the broker
> * Shutdown the broker
> * Watch the number of threads in the process
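> For reference, a failover URI with two brokers looks like the following (the host names and ports here are placeholders):
> {noformat}
> failover:(tcp://broker-a:61616,tcp://broker-b:61616)
> {noformat}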
> Using GDB, I found the following stack trace in a good number of the most recent threads:
> {noformat}
> #0  0x00007ffff68a2705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
> #1  0x00007ffff774087b in decaf::internal::util::concurrent::PlatformThread::interruptibleWaitOnCondition (
>     condition=0x7fffc8002790, mutex=0x7fffc8000a20, complete=...)
>     at decaf/internal/util/concurrent/unix/PlatformThread.cpp:210
> #2  0x00007ffff773f5c5 in doWaitOnMonitor (interruptible=true, nanos=<optimized out>, mills=0, thread=0x7fffc8003a40, 
>     monitor=0x618310) at decaf/internal/util/concurrent/Threading.cpp:754
> #3  decaf::internal::util::concurrent::Threading::waitOnMonitor (monitor=0x618310, mills=0, nanos=<optimized out>)
>     at decaf/internal/util/concurrent/Threading.cpp:1558
> #4  0x00007ffff77a7e5c in decaf::util::TimerImpl::run (this=0x7fffc8000d20) at decaf/util/Timer.cpp:79
> #5  0x00007ffff773cb72 in (anonymous namespace)::runCallback (arg=0x7fffc8003a40)
>     at decaf/internal/util/concurrent/Threading.cpp:266
> #6  0x00007ffff773d47c in (anonymous namespace)::threadEntryMethod (arg=0x7fffc8003a40)
>     at decaf/internal/util/concurrent/Threading.cpp:254
> #7  0x00007ffff689edf3 in start_thread () from /lib64/libpthread.so.0
> #8  0x00007ffff5ba91ad in clone () from /lib64/libc.so.6
> {noformat}
> I tested with a slightly modified version of the example C++ program from the ActiveMQ wiki (http://activemq.apache.org/cms/example.html).  The modifications consist of adding a delay after producing each message (to slow the program down and make testing easier) and accepting the broker URL from the command line, roughly as sketched below.
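> The essential shape of the modified producer loop is roughly the following (not the exact test code; the destination name and message count are placeholders):
> {noformat}
> // Rough sketch only: broker URI taken from argv[1], one-second pause after each send.
> #include <activemq/library/ActiveMQCPP.h>
> #include <activemq/core/ActiveMQConnectionFactory.h>
> #include <cms/Connection.h>
> #include <cms/Session.h>
> #include <cms/Destination.h>
> #include <cms/MessageProducer.h>
> #include <cms/TextMessage.h>
> #include <cms/CMSException.h>
> #include <decaf/lang/Thread.h>
> #include <iostream>
> #include <memory>
>
> int main(int argc, char* argv[]) {
>     if (argc < 2) {
>         std::cerr << "usage: " << argv[0]
>                   << " 'failover:(tcp://broker-a:61616,tcp://broker-b:61616)'" << std::endl;
>         return 1;
>     }
>
>     activemq::library::ActiveMQCPP::initializeLibrary();
>     try {
>         activemq::core::ActiveMQConnectionFactory factory(argv[1]);
>         std::auto_ptr<cms::Connection> connection(factory.createConnection());
>         connection->start();
>
>         std::auto_ptr<cms::Session> session(
>             connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
>         std::auto_ptr<cms::Destination> destination(session->createQueue("TEST.FOO"));
>         std::auto_ptr<cms::MessageProducer> producer(session->createProducer(destination.get()));
>
>         for (int i = 0; i < 1000; ++i) {
>             std::auto_ptr<cms::TextMessage> message(session->createTextMessage("test message"));
>             producer->send(message.get());
>             decaf::lang::Thread::sleep(1000);   // added delay to slow the producer down
>         }
>
>         connection->close();
>     } catch (cms::CMSException& ex) {
>         ex.printStackTrace();
>     }
>     activemq::library::ActiveMQCPP::shutdownLibrary();
>     return 0;
> }
> {noformat}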
> Note that, watching the threads with "ps" over a period of time, the leak does not appear to occur with 100% consistency; at least a couple of times the number of threads dropped back down and then increased again, within the same run in which the thread count grew more than once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)