Posted to user@hbase.apache.org by Serega Sheypak <se...@gmail.com> on 2015/01/05 11:39:45 UTC

Threads leaking from Apache tomcat application

Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem is that the number of user threads constantly grows. I get
thousands of live threads on the Tomcat instance. Then it dies, of course.

Please see the VisualVM thread count dynamics:
[image: Inline image 1]

Please see the selected thread. It should be related to ZooKeeper (because
of the thread-name suffix SendThread):

[image: Inline image 2]

The thread dump for this thread is:

"visit-thread-27799752116280271-EventThread" - Thread t@75
   java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <34671cea> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

   Locked ownable synchronizers:
- None

Why does it live "forever"? In the next 24 hours I would get ~1200 live threads.

"visit thread" does a simple put/get by key; New Relic says it takes 30-40 ms
to respond.
I just set a name for the thread inside the servlet method.
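
A minimal sketch of that renaming, with hypothetical class and name values
(as far as I know, ZooKeeper's client threads take the name of the thread
that created the ZooKeeper client as their prefix, which is why the
EventThread above carries the per-request name):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class VisitServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Rename the pooled Tomcat worker thread for this request. Any
        // ZooKeeper client created while this name is set inherits it as a
        // thread-name prefix, e.g. "visit-thread-...-EventThread".
        Thread.currentThread().setName("visit-thread-" + System.nanoTime());
        // ... the simple put/get by key goes here ...
    }
}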

Here is the CPU profiling result:
[image: Inline image 3]

Here are some ZooKeeper metrics:

[image: Inline image 4]

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Hi, as I mentioned before, devops installed the wrong Java (OpenJDK 7) for
Tomcat; HBase runs on Oracle JDK 7.
I've asked them to set Oracle Java for Tomcat. The problem is gone....

2015-01-07 10:48 GMT+03:00 Serega Sheypak <se...@gmail.com>:

> Hm, thanks, I'll check..
>
> 2015-01-06 23:31 GMT+03:00 Stack <st...@duboce.net>:
>
>> The threads that are sticking around are tomcat threads out of a tomcat
>> executor pool. IIRC, your server has high traffic.  The pool is running up
>> to 800 connections on occasion and taking a while to die back down?
>> Googling, seems like this issue comes up frequently enough. Try it
>> yourself.  If you can't figure something like putting a bound on the
>> executor, come back here and we'll try and help you out.
>>
>> St.Ack
>>
>> On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak <serega.sheypak@gmail.com
>> >
>> wrote:
>>
>> > Hi, yes, it was me.
>> > I've followed advices, ZK connections on server side are stable.
>> > Here is current state of Tomcat:
>> >
>> http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
>> > There are more than 800 threads and daemon threads.
>> >
>> > and the state of three ZK servers:
>> >
>> http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png
>> >
>> > here is pastebin:
>> > http://pastebin.com/Cq8ppg08
>> >
>> > P.S.
>> > Looks like tomcat is running on OpenJDK 64-Bit Server VM.
>> > I'll ask to fix it, it should be Oracle 7 JDK
>> >
>> > 2015-01-06 20:43 GMT+03:00 Stack <st...@duboce.net>:
>> >
>> > > On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <
>> serega.sheypak@gmail.com
>> > >
>> > > wrote:
>> > >
>> > > > yes, one of them (random) gets more connections than others.
>> > > >
>> > > > 9.3.1.1 Is OK.
>> > > > I have 1 HConnection for logical module per application and each
>> > > > ServletRequest gets it's own HTable. HTable closed each tme after
>> > > > ServletRequest is done. HConnection is never closed.
>> > > >
>> > > >
>> > > This is you, right: http://search-hadoop.com/m/DHED4lJSA32
>> > >
>> > > Then, we were leaking zk connections.  Is that fixed?
>> > >
>> > > Can you reproduce in the small?  By setting up your webapp deploy in
>> test
>> > > bed and watching it for leaking?
>> > >
>> > > For this issue, can you post a thread dump in postbin or gist so can
>> see?
>> > >
>> > > Can you post code too?
>> > >
>> > > St.Ack
>> > >
>> > >
>> > >
>> > > > 2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:
>> > > >
>> > > > > In 022_zookeeper_metrics.png, server names are anonymized. Looks
>> like
>> > > > only
>> > > > > one server got high number of connections.
>> > > > >
>> > > > > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client
>> ?
>> > > > >
>> > > > > Cheers
>> > > > >
>> > > > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <
>> > > serega.sheypak@gmail.com
>> > > > >
>> > > > > wrote:
>> > > > >
>> > > > > > Hi, here is repost with images link
>> > > > > >
>> > > > > > Hi, I'm still trying to deal with apache tomcat web-app and
>> hbase
>> > > HBase
>> > > > > > 0.98.6
>> > > > > > The root problem is that user threads constantly grows. I do get
>> > > > > thousands
>> > > > > > of live threads on tomcat instance. Then it dies of course.
>> > > > > >
>> > > > > > please see visualVM threads count dynamics
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
>> > > > > >
>> > > > > >
>> > > > > > Please see selected thread. It should be related to zookeeper
>> > > (because
>> > > > of
>> > > > > > thread-name suffix SendThread)
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
>> > > > > >
>> > > > > > The threaddump for this thread is:
>> > > > > >
>> > > > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
>> > > > > >    java.lang.Thread.State: WAITING
>> > > > > > at sun.misc.Unsafe.park(Native Method)
>> > > > > > - parking to wait for <34671cea> (a
>> > > > > >
>> > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>> > > > > > at
>> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>> > > > > > at
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>> > > > > > at
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>> > > > > > at
>> > > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
>> > > > > >
>> > > > > >    Locked ownable synchronizers:
>> > > > > > - None
>> > > > > >
>> > > > > > Why does it live "forever"? I next 24 hours I would get ~1200
>> live
>> > > > > theads.
>> > > > > >
>> > > > > > "visit thread" does simple put/get by key, newrelic says it
>> takes
>> > > 30-40
>> > > > > ms
>> > > > > > to respond.
>> > > > > > I just set a name for the thread inside servlet method.
>> > > > > >
>> > > > > > Here is CPU profiling result:
>> > > > > >
>> > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
>> > > > > >
>> > > > > > Here is zookeeper status:
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
>> > > > > >
>> > > > > > How can I debug and get root cause for long living threads?
>> Looks
>> > > like
>> > > > I
>> > > > > > got threads leaking, but have no Idea why...
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > >
>> > > > > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
>> > > > > >
>> > > > > > > I used gmail.
>> > > > > > >
>> > > > > > > Please consider using third party site where you can upload
>> > images.
>> > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>
>

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Hm, thanks, I'll check.

2015-01-06 23:31 GMT+03:00 Stack <st...@duboce.net>:

> The threads that are sticking around are tomcat threads out of a tomcat
> executor pool. IIRC, your server has high traffic.  The pool is running up
> to 800 connections on occasion and taking a while to die back down?
> Googling, seems like this issue comes up frequently enough. Try it
> yourself.  If you can't figure something like putting a bound on the
> executor, come back here and we'll try and help you out.
>
> St.Ack
>
> On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak <se...@gmail.com>
> wrote:
>
> > Hi, yes, it was me.
> > I've followed advices, ZK connections on server side are stable.
> > Here is current state of Tomcat:
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
> > There are more than 800 threads and daemon threads.
> >
> > and the state of three ZK servers:
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png
> >
> > here is pastebin:
> > http://pastebin.com/Cq8ppg08
> >
> > P.S.
> > Looks like tomcat is running on OpenJDK 64-Bit Server VM.
> > I'll ask to fix it, it should be Oracle 7 JDK
> >
> > 2015-01-06 20:43 GMT+03:00 Stack <st...@duboce.net>:
> >
> > > On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <
> serega.sheypak@gmail.com
> > >
> > > wrote:
> > >
> > > > yes, one of them (random) gets more connections than others.
> > > >
> > > > 9.3.1.1 Is OK.
> > > > I have 1 HConnection for logical module per application and each
> > > > ServletRequest gets it's own HTable. HTable closed each tme after
> > > > ServletRequest is done. HConnection is never closed.
> > > >
> > > >
> > > This is you, right: http://search-hadoop.com/m/DHED4lJSA32
> > >
> > > Then, we were leaking zk connections.  Is that fixed?
> > >
> > > Can you reproduce in the small?  By setting up your webapp deploy in
> test
> > > bed and watching it for leaking?
> > >
> > > For this issue, can you post a thread dump in postbin or gist so can
> see?
> > >
> > > Can you post code too?
> > >
> > > St.Ack
> > >
> > >
> > >
> > > > 2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > > >
> > > > > In 022_zookeeper_metrics.png, server names are anonymized. Looks
> like
> > > > only
> > > > > one server got high number of connections.
> > > > >
> > > > > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client
> ?
> > > > >
> > > > > Cheers
> > > > >
> > > > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <
> > > serega.sheypak@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi, here is repost with images link
> > > > > >
> > > > > > Hi, I'm still trying to deal with apache tomcat web-app and hbase
> > > HBase
> > > > > > 0.98.6
> > > > > > The root problem is that user threads constantly grows. I do get
> > > > > thousands
> > > > > > of live threads on tomcat instance. Then it dies of course.
> > > > > >
> > > > > > please see visualVM threads count dynamics
> > > > > >
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > > > > >
> > > > > >
> > > > > > Please see selected thread. It should be related to zookeeper
> > > (because
> > > > of
> > > > > > thread-name suffix SendThread)
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > > > > >
> > > > > > The threaddump for this thread is:
> > > > > >
> > > > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > > > > >    java.lang.Thread.State: WAITING
> > > > > > at sun.misc.Unsafe.park(Native Method)
> > > > > > - parking to wait for <34671cea> (a
> > > > > >
> > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > > > > at
> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > > > > at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > > > > at
> > > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > > > > >
> > > > > >    Locked ownable synchronizers:
> > > > > > - None
> > > > > >
> > > > > > Why does it live "forever"? I next 24 hours I would get ~1200
> live
> > > > > theads.
> > > > > >
> > > > > > "visit thread" does simple put/get by key, newrelic says it takes
> > > 30-40
> > > > > ms
> > > > > > to respond.
> > > > > > I just set a name for the thread inside servlet method.
> > > > > >
> > > > > > Here is CPU profiling result:
> > > > > >
> > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > > > > >
> > > > > > Here is zookeeper status:
> > > > > >
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > > > > >
> > > > > > How can I debug and get root cause for long living threads? Looks
> > > like
> > > > I
> > > > > > got threads leaking, but have no Idea why...
> > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > > > > >
> > > > > > > I used gmail.
> > > > > > >
> > > > > > > Please consider using third party site where you can upload
> > images.
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Stack <st...@duboce.net>.
The threads that are sticking around are Tomcat threads out of a Tomcat
executor pool. IIRC, your server has high traffic. The pool is running up
to 800 connections on occasion and taking a while to die back down?
Googling, it seems this issue comes up frequently enough. Try it
yourself. If you can't figure out something like putting a bound on the
executor, come back here and we'll try to help you out.

St.Ack

On Tue, Jan 6, 2015 at 12:10 PM, Serega Sheypak <se...@gmail.com>
wrote:

> Hi, yes, it was me.
> I've followed advices, ZK connections on server side are stable.
> Here is current state of Tomcat:
> http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
> There are more than 800 threads and daemon threads.
>
> and the state of three ZK servers:
> http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png
>
> here is pastebin:
> http://pastebin.com/Cq8ppg08
>
> P.S.
> Looks like tomcat is running on OpenJDK 64-Bit Server VM.
> I'll ask to fix it, it should be Oracle 7 JDK
>
> 2015-01-06 20:43 GMT+03:00 Stack <st...@duboce.net>:
>
> > On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <serega.sheypak@gmail.com
> >
> > wrote:
> >
> > > yes, one of them (random) gets more connections than others.
> > >
> > > 9.3.1.1 Is OK.
> > > I have 1 HConnection for logical module per application and each
> > > ServletRequest gets it's own HTable. HTable closed each tme after
> > > ServletRequest is done. HConnection is never closed.
> > >
> > >
> > This is you, right: http://search-hadoop.com/m/DHED4lJSA32
> >
> > Then, we were leaking zk connections.  Is that fixed?
> >
> > Can you reproduce in the small?  By setting up your webapp deploy in test
> > bed and watching it for leaking?
> >
> > For this issue, can you post a thread dump in postbin or gist so can see?
> >
> > Can you post code too?
> >
> > St.Ack
> >
> >
> >
> > > 2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > >
> > > > In 022_zookeeper_metrics.png, server names are anonymized. Looks like
> > > only
> > > > one server got high number of connections.
> > > >
> > > > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> > > >
> > > > Cheers
> > > >
> > > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <
> > serega.sheypak@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi, here is repost with images link
> > > > >
> > > > > Hi, I'm still trying to deal with apache tomcat web-app and hbase
> > HBase
> > > > > 0.98.6
> > > > > The root problem is that user threads constantly grows. I do get
> > > > thousands
> > > > > of live threads on tomcat instance. Then it dies of course.
> > > > >
> > > > > please see visualVM threads count dynamics
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > > > >
> > > > >
> > > > > Please see selected thread. It should be related to zookeeper
> > (because
> > > of
> > > > > thread-name suffix SendThread)
> > > > >
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > > > >
> > > > > The threaddump for this thread is:
> > > > >
> > > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > > > >    java.lang.Thread.State: WAITING
> > > > > at sun.misc.Unsafe.park(Native Method)
> > > > > - parking to wait for <34671cea> (a
> > > > >
> > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > > > at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > > > at
> > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > > > at
> > > > >
> > > >
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > > > at
> > org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > > > >
> > > > >    Locked ownable synchronizers:
> > > > > - None
> > > > >
> > > > > Why does it live "forever"? I next 24 hours I would get ~1200 live
> > > > theads.
> > > > >
> > > > > "visit thread" does simple put/get by key, newrelic says it takes
> > 30-40
> > > > ms
> > > > > to respond.
> > > > > I just set a name for the thread inside servlet method.
> > > > >
> > > > > Here is CPU profiling result:
> > > > >
> > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > > > >
> > > > > Here is zookeeper status:
> > > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > > > >
> > > > > How can I debug and get root cause for long living threads? Looks
> > like
> > > I
> > > > > got threads leaking, but have no Idea why...
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > > > >
> > > > > > I used gmail.
> > > > > >
> > > > > > Please consider using third party site where you can upload
> images.
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Hi, yes, it was me.
I've followed the advice; ZK connections on the server side are stable.
Here is the current state of Tomcat:
http://bigdatapath.com/wp-content/uploads/2015/01/002_jvisualvm_summary.png
There are more than 800 threads and daemon threads.

And the state of the three ZK servers:
http://bigdatapath.com/wp-content/uploads/2015/01/001_zk_server_state.png

Here is the pastebin:
http://pastebin.com/Cq8ppg08

P.S.
Looks like Tomcat is running on the OpenJDK 64-Bit Server VM.
I'll ask to fix it; it should be Oracle JDK 7.

2015-01-06 20:43 GMT+03:00 Stack <st...@duboce.net>:

> On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <se...@gmail.com>
> wrote:
>
> > yes, one of them (random) gets more connections than others.
> >
> > 9.3.1.1 Is OK.
> > I have 1 HConnection for logical module per application and each
> > ServletRequest gets it's own HTable. HTable closed each tme after
> > ServletRequest is done. HConnection is never closed.
> >
> >
> This is you, right: http://search-hadoop.com/m/DHED4lJSA32
>
> Then, we were leaking zk connections.  Is that fixed?
>
> Can you reproduce in the small?  By setting up your webapp deploy in test
> bed and watching it for leaking?
>
> For this issue, can you post a thread dump in postbin or gist so can see?
>
> Can you post code too?
>
> St.Ack
>
>
>
> > 2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:
> >
> > > In 022_zookeeper_metrics.png, server names are anonymized. Looks like
> > only
> > > one server got high number of connections.
> > >
> > > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> > >
> > > Cheers
> > >
> > > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <
> serega.sheypak@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi, here is repost with images link
> > > >
> > > > Hi, I'm still trying to deal with apache tomcat web-app and hbase
> HBase
> > > > 0.98.6
> > > > The root problem is that user threads constantly grows. I do get
> > > thousands
> > > > of live threads on tomcat instance. Then it dies of course.
> > > >
> > > > please see visualVM threads count dynamics
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > > >
> > > >
> > > > Please see selected thread. It should be related to zookeeper
> (because
> > of
> > > > thread-name suffix SendThread)
> > > >
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > > >
> > > > The threaddump for this thread is:
> > > >
> > > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > > >    java.lang.Thread.State: WAITING
> > > > at sun.misc.Unsafe.park(Native Method)
> > > > - parking to wait for <34671cea> (a
> > > >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > > at
> > > >
> > > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > > at
> > > >
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > > at
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > > >
> > > >    Locked ownable synchronizers:
> > > > - None
> > > >
> > > > Why does it live "forever"? I next 24 hours I would get ~1200 live
> > > theads.
> > > >
> > > > "visit thread" does simple put/get by key, newrelic says it takes
> 30-40
> > > ms
> > > > to respond.
> > > > I just set a name for the thread inside servlet method.
> > > >
> > > > Here is CPU profiling result:
> > > >
> http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > > >
> > > > Here is zookeeper status:
> > > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > > >
> > > > How can I debug and get root cause for long living threads? Looks
> like
> > I
> > > > got threads leaking, but have no Idea why...
> > > >
> > > >
> > > >
> > > >
> > > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > > >
> > > > > I used gmail.
> > > > >
> > > > > Please consider using third party site where you can upload images.
> > > > >
> > > > >
> > > >
> > >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Stack <st...@duboce.net>.
On Tue, Jan 6, 2015 at 4:52 AM, Serega Sheypak <se...@gmail.com>
wrote:

> yes, one of them (random) gets more connections than others.
>
> 9.3.1.1 Is OK.
> I have 1 HConnection for logical module per application and each
> ServletRequest gets it's own HTable. HTable closed each tme after
> ServletRequest is done. HConnection is never closed.
>
>
This is you, right: http://search-hadoop.com/m/DHED4lJSA32

Back then, we were leaking ZK connections. Is that fixed?

Can you reproduce it in the small, by setting up your webapp deployment in a
test bed and watching it for leaks?

For this issue, can you post a thread dump in a pastebin or gist so we can see?

Can you post code too?

St.Ack



> 2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:
>
> > In 022_zookeeper_metrics.png, server names are anonymized. Looks like
> only
> > one server got high number of connections.
> >
> > Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
> >
> > Cheers
> >
> > On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <serega.sheypak@gmail.com
> >
> > wrote:
> >
> > > Hi, here is repost with images link
> > >
> > > Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase
> > > 0.98.6
> > > The root problem is that user threads constantly grows. I do get
> > thousands
> > > of live threads on tomcat instance. Then it dies of course.
> > >
> > > please see visualVM threads count dynamics
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> > >
> > >
> > > Please see selected thread. It should be related to zookeeper (because
> of
> > > thread-name suffix SendThread)
> > >
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> > >
> > > The threaddump for this thread is:
> > >
> > > "visit-thread-27799752116280271-EventThread" - Thread t@75
> > >    java.lang.Thread.State: WAITING
> > > at sun.misc.Unsafe.park(Native Method)
> > > - parking to wait for <34671cea> (a
> > > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > > at
> > >
> > >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > > at
> > >
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > > at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> > >
> > >    Locked ownable synchronizers:
> > > - None
> > >
> > > Why does it live "forever"? I next 24 hours I would get ~1200 live
> > theads.
> > >
> > > "visit thread" does simple put/get by key, newrelic says it takes 30-40
> > ms
> > > to respond.
> > > I just set a name for the thread inside servlet method.
> > >
> > > Here is CPU profiling result:
> > > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> > >
> > > Here is zookeeper status:
> > >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> > >
> > > How can I debug and get root cause for long living threads? Looks like
> I
> > > got threads leaking, but have no Idea why...
> > >
> > >
> > >
> > >
> > > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
> > >
> > > > I used gmail.
> > > >
> > > > Please consider using third party site where you can upload images.
> > > >
> > > >
> > >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Yes, one of them (random) gets more connections than the others.

9.3.1.1 is OK.
I have one HConnection per logical module per application, and each
ServletRequest gets its own HTable. The HTable is closed each time after the
ServletRequest is done. The HConnection is never closed.
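
A minimal sketch of the pattern described above, against the HBase 0.98
client API (the class, table and row-key names here are illustrative, not
from the actual application):

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;

public class VisitDao {
    // One heavyweight HConnection per application/module, never closed here.
    private final HConnection connection;

    public VisitDao() throws IOException {
        this.connection =
            HConnectionManager.createConnection(HBaseConfiguration.create());
    }

    // One lightweight table handle per ServletRequest, always closed.
    public Result get(String tableName, byte[] rowKey) throws IOException {
        HTableInterface table = connection.getTable(tableName);
        try {
            return table.get(new Get(rowKey));
        } finally {
            table.close(); // closes the HTable, keeps the shared HConnection
        }
    }
}

A single application-scoped instance of something like this, with its
methods called per request, matches the pattern described above.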

2015-01-05 21:22 GMT+03:00 Ted Yu <yu...@gmail.com>:

> In 022_zookeeper_metrics.png, server names are anonymized. Looks like only
> one server got high number of connections.
>
> Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?
>
> Cheers
>
> On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <se...@gmail.com>
> wrote:
>
> > Hi, here is repost with images link
> >
> > Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase
> > 0.98.6
> > The root problem is that user threads constantly grows. I do get
> thousands
> > of live threads on tomcat instance. Then it dies of course.
> >
> > please see visualVM threads count dynamics
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
> >
> >
> > Please see selected thread. It should be related to zookeeper (because of
> > thread-name suffix SendThread)
> >
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
> >
> > The threaddump for this thread is:
> >
> > "visit-thread-27799752116280271-EventThread" - Thread t@75
> >    java.lang.Thread.State: WAITING
> > at sun.misc.Unsafe.park(Native Method)
> > - parking to wait for <34671cea> (a
> > java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> > at
> >
> >
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> > at
> >
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> >
> >    Locked ownable synchronizers:
> > - None
> >
> > Why does it live "forever"? I next 24 hours I would get ~1200 live
> theads.
> >
> > "visit thread" does simple put/get by key, newrelic says it takes 30-40
> ms
> > to respond.
> > I just set a name for the thread inside servlet method.
> >
> > Here is CPU profiling result:
> > http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
> >
> > Here is zookeeper status:
> >
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
> >
> > How can I debug and get root cause for long living threads? Looks like I
> > got threads leaking, but have no Idea why...
> >
> >
> >
> >
> > 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
> >
> > > I used gmail.
> > >
> > > Please consider using third party site where you can upload images.
> > >
> > >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Ted Yu <yu...@gmail.com>.
In 022_zookeeper_metrics.png, the server names are anonymized. It looks like
only one server got a high number of connections.

Have you seen 9.3.1.1 of http://hbase.apache.org/book.html#client ?

Cheers

On Mon, Jan 5, 2015 at 8:57 AM, Serega Sheypak <se...@gmail.com>
wrote:

> Hi, here is repost with images link
>
> Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase
> 0.98.6
> The root problem is that user threads constantly grows. I do get thousands
> of live threads on tomcat instance. Then it dies of course.
>
> please see visualVM threads count dynamics
> http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png
>
>
> Please see selected thread. It should be related to zookeeper (because of
> thread-name suffix SendThread)
>
> http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png
>
> The threaddump for this thread is:
>
> "visit-thread-27799752116280271-EventThread" - Thread t@75
>    java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <34671cea> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> at
>
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> at
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
>
>    Locked ownable synchronizers:
> - None
>
> Why does it live "forever"? I next 24 hours I would get ~1200 live theads.
>
> "visit thread" does simple put/get by key, newrelic says it takes 30-40 ms
> to respond.
> I just set a name for the thread inside servlet method.
>
> Here is CPU profiling result:
> http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png
>
> Here is zookeeper status:
> http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png
>
> How can I debug and get root cause for long living threads? Looks like I
> got threads leaking, but have no Idea why...
>
>
>
>
> 2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:
>
> > I used gmail.
> >
> > Please consider using third party site where you can upload images.
> >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Hi, here is a repost with image links.

Hi, I'm still trying to deal with an Apache Tomcat web app and HBase
0.98.6.
The root problem is that the number of user threads constantly grows. I get
thousands of live threads on the Tomcat instance. Then it dies, of course.

Please see the VisualVM thread count dynamics:
http://bigdatapath.com/wp-content/uploads/2015/01/01_threads_count-grow.png


Please see the selected thread. It should be related to ZooKeeper (because of
the thread-name suffix SendThread):
http://bigdatapath.com/wp-content/uploads/2015/01/011_long_running_threads.png

The thread dump for this thread is:

"visit-thread-27799752116280271-EventThread" - Thread t@75
   java.lang.Thread.State: WAITING
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <34671cea> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)

   Locked ownable synchronizers:
- None

Why does it live "forever"? In the next 24 hours I would get ~1200 live threads.

"visit thread" does a simple put/get by key; New Relic says it takes 30-40 ms
to respond.
I just set a name for the thread inside the servlet method.

Here is the CPU profiling result:
http://bigdatapath.com/wp-content/uploads/2015/01/03_cpu_prifling.png

Here is the ZooKeeper status:
http://bigdatapath.com/wp-content/uploads/2015/01/022_zookeeper_metrics.png

How can I debug and find the root cause of the long-living threads? It looks
like I've got threads leaking, but I have no idea why...
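
One way to watch it (a sketch, not something from this thread; it assumes
you can run it inside the Tomcat JVM, e.g. from a diagnostic servlet) is to
group the live thread names by pattern and see which group keeps growing:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Map;
import java.util.TreeMap;

public class ThreadLeakProbe {
    // Counts live threads grouped by name, with numeric IDs collapsed so that
    // per-request names like "visit-thread-123-EventThread" share one bucket.
    public static Map<String, Integer> countByNamePattern() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        Map<String, Integer> counts = new TreeMap<String, Integer>();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            String pattern = info.getThreadName().replaceAll("\\d+", "#");
            Integer n = counts.get(pattern);
            counts.put(pattern, n == null ? 1 : n + 1);
        }
        return counts;
    }
}

Logging the result every few minutes should show whether the growth is in
Tomcat's own worker pool or in ZooKeeper SendThread/EventThread pairs.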




2015-01-05 17:57 GMT+03:00 Ted Yu <yu...@gmail.com>:

> I used gmail.
>
> Please consider using third party site where you can upload images.
>
>

Re: Threads leaking from Apache tomcat application

Posted by Ted Yu <yu...@gmail.com>.
I used Gmail.

Please consider using a third-party site where you can upload images.

Cheers

On Mon, Jan 5, 2015 at 6:13 AM, Serega Sheypak <se...@gmail.com>
wrote:

> Hi, which mail client you use? I'm using gmail from chrome and see my
> letter with four inlined images.
> There are no links, there are 3 images. I'll reattach them. Maybe the
> problem is in them
>
> 2015-01-05 16:20 GMT+03:00 Ted Yu <yu...@gmail.com>:
>
>> There're several non-English phrases which seem to be links.
>> But when I clicked on them, there was no response.
>>
>> Can you give the links in URL ?
>>
>> Cheers
>>
>>
>>
>> > On Jan 5, 2015, at 2:39 AM, Serega Sheypak <se...@gmail.com>
>> wrote:
>> >
>> > Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase
>> 0.98.6
>> > The root problem is that user threads constantly grows. I do get
>> thousands of live threads on tomcat instance. Then it dies of course.
>> >
>> > please see visualVM threads count dynamics
>> >
>> >
>> > Please see selected thread. It should be related to zookeeper (because
>> of thread-name suffix SendThread)
>> >
>> >
>> >
>> > The threaddump for this thread is:
>> >
>> > "visit-thread-27799752116280271-EventThread" - Thread t@75
>> >    java.lang.Thread.State: WAITING
>> >       at sun.misc.Unsafe.park(Native Method)
>> >       - parking to wait for <34671cea> (a
>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>> >       at
>> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>> >       at
>> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>> >       at
>> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>> >       at
>> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
>> >
>> >    Locked ownable synchronizers:
>> >       - None
>> >
>> > Why does it live "forever"? I next 24 hours I would get ~1200 live
>> theads.
>> >
>> > "visit thread" does simple put/get by key, newrelic says it takes 30-40
>> ms to respond.
>> > I just set a name for the thread inside servlet method.
>> >
>> > Here is CPU profiling result:
>> >
>> >
>> > Here are some Zookeeper metrics.
>> >
>> >
>>
>
>

Re: Threads leaking from Apache tomcat application

Posted by Serega Sheypak <se...@gmail.com>.
Hi, which mail client do you use? I'm using Gmail from Chrome and see my
letter with four inline images.
There are no links, there are 3 images. I'll reattach them. Maybe the
problem is in them.

2015-01-05 16:20 GMT+03:00 Ted Yu <yu...@gmail.com>:

> There're several non-English phrases which seem to be links.
> But when I clicked on them, there was no response.
>
> Can you give the links in URL ?
>
> Cheers
>
>
>
> > On Jan 5, 2015, at 2:39 AM, Serega Sheypak <se...@gmail.com>
> wrote:
> >
> > Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase
> 0.98.6
> > The root problem is that user threads constantly grows. I do get
> thousands of live threads on tomcat instance. Then it dies of course.
> >
> > please see visualVM threads count dynamics
> >
> >
> > Please see selected thread. It should be related to zookeeper (because
> of thread-name suffix SendThread)
> >
> >
> >
> > The threaddump for this thread is:
> >
> > "visit-thread-27799752116280271-EventThread" - Thread t@75
> >    java.lang.Thread.State: WAITING
> >       at sun.misc.Unsafe.park(Native Method)
> >       - parking to wait for <34671cea> (a
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> >       at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> >       at
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> >       at
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> >       at
> org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> >
> >    Locked ownable synchronizers:
> >       - None
> >
> > Why does it live "forever"? I next 24 hours I would get ~1200 live
> theads.
> >
> > "visit thread" does simple put/get by key, newrelic says it takes 30-40
> ms to respond.
> > I just set a name for the thread inside servlet method.
> >
> > Here is CPU profiling result:
> >
> >
> > Here are some Zookeeper metrics.
> >
> >
>

Re: Threads leaking from Apache tomcat application

Posted by Ted Yu <yu...@gmail.com>.
There are several non-English phrases which seem to be links, but when I
clicked on them, there was no response.

Can you give the links as URLs?

Cheers



> On Jan 5, 2015, at 2:39 AM, Serega Sheypak <se...@gmail.com> wrote:
> 
> Hi, I'm still trying to deal with apache tomcat web-app and hbase HBase 0.98.6 
> The root problem is that user threads constantly grows. I do get thousands of live threads on tomcat instance. Then it dies of course. 
> 
> please see visualVM threads count dynamics
> 
> 
> Please see selected thread. It should be related to zookeeper (because of thread-name suffix SendThread)
> 
> 
> 
> The threaddump for this thread is:
> 
> "visit-thread-27799752116280271-EventThread" - Thread t@75
>    java.lang.Thread.State: WAITING
> 	at sun.misc.Unsafe.park(Native Method)
> 	- parking to wait for <34671cea> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
> 	at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> 	at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> 
>    Locked ownable synchronizers:
> 	- None
> 
> Why does it live "forever"? I next 24 hours I would get ~1200 live theads.
> 
> "visit thread" does simple put/get by key, newrelic says it takes 30-40 ms to respond.
> I just set a name for the thread inside servlet method.
> 
> Here is CPU profiling result:
> 
> 
> Here are some Zookeeper metrics.
> 
>