Posted to users@tomcat.apache.org by gnath <ga...@yahoo.com> on 2012/01/22 09:01:28 UTC

Tomcat 6.0.35-SocketException: Too many open files issue with

Hello, 

We have been seeing "SocketException: Too many open files" in our production environment (Linux, running Tomcat 6.0.35 with Sun's JDK 1.6.30) every day, and it requires a restart of Tomcat. When this happened for the first time, we searched online and found people suggesting to increase the file descriptor limit, so we increased it to 4096. But the problem persists. We also have the Orion App Server running on the same machine, but during the day, when we check the open file descriptors with ls -l /proc/PID/fd, the count is always less than 1000 combined for both Orion and Tomcat.
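(Editor's note: the same descriptor count can be read from inside the JVM itself, which is handy when correlating with application logs. A minimal, Linux-only sketch; the class name is illustrative:)

    import java.io.File;

    public class FdCount {
        public static void main(String[] args) {
            // Linux-only: /proc/self/fd contains one entry per file
            // descriptor open in this process -- the same data that
            // ls -l /proc/PID/fd shows from the outside.
            String[] fds = new File("/proc/self/fd").list();
            System.out.println("Open file descriptors: "
                    + (fds == null ? "unknown" : fds.length));
        }
    }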


Here is the exception we see pouring into the logs once it starts; it requires us to kill the java process and restart Tomcat. Our Tomcat connector is configured with maxThreads=500 and minSpareThreads=50 in server.xml.


SEVERE: Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
        at java.net.ServerSocket.implAccept(ServerSocket.java:462)
        at java.net.ServerSocket.accept(ServerSocket.java:430)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
        at java.lang.Thread.run(Thread.java:662)

ulimit -a for the user running Tomcat gives:


open files                      (-n) 4096


Please let me know what could be causing this and how I can resolve the 'Too many open files' issue.

Thanks
-G

Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Pid <pi...@pidster.com>.
On 26/01/2012 04:53, gnath wrote:
> Hi Chris, 
> 
> Thanks a lot for looking into this and answering all my questions. Sorry I could not reply in time. As you suggested, I started collecting thread dumps when it happened again, and we saw what looked like DBCP connection pool issues leading to the 'Too many open files' error. So we decided to replace Commons DBCP with tomcat-jdbc.jar (with the same configuration properties). After this change things seemed fine for a few hours, but then the logs showed that the connection pool could not hand out any connections; all of them appeared to be busy. So we went ahead and added the configuration property 'removeAbandoned=true' to our DataSource configuration.
> 
> 
> We are still watching the performance and the server behavior after these changes.
> Will keep you posted on how things turn out or if I see any further issues.
> 
> 
> Thank you once again; I really appreciate your help.
> 
> Thanks

This sounds increasingly like your application isn't returning
connections to the pool properly.  Switching pool implementation won't
help if this is the case.

You should carefully examine the code where the database is used to
ensure that DB resources are returned to the pool in a finally block,
after use.
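(Editor's note: for readers landing on this thread later, here is a minimal sketch of the pattern Pid describes, assuming a javax.sql.DataSource injected by Spring; the class, table, and query are illustrative, not from the original app:)

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class UserDao {
        private final DataSource dataSource; // pooled DataSource (DBCP or tomcat-jdbc)

        public UserDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public int findUserCount() throws SQLException {
            Connection conn = null;
            PreparedStatement ps = null;
            ResultSet rs = null;
            try {
                conn = dataSource.getConnection();
                ps = conn.prepareStatement("SELECT COUNT(*) FROM users");
                rs = ps.executeQuery();
                return rs.next() ? rs.getInt(1) : 0;
            } finally {
                // Close in reverse order of acquisition; each close is
                // guarded so a failure in one cannot prevent the
                // Connection from being returned to the pool.
                if (rs != null) try { rs.close(); } catch (SQLException ignored) {}
                if (ps != null) try { ps.close(); } catch (SQLException ignored) {}
                if (conn != null) try { conn.close(); } catch (SQLException ignored) {}
            }
        }
    }

If the Connection is never closed, the pool considers it busy forever; enough leaked requests exhaust the pool and, eventually, the process's file descriptors.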

Chris's question regarding 'what has changed' is still relevant.


p



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Pid <pi...@pidster.com>.
This is top-posting, for anyone who's watching & doesn't know.


p

On 27/01/2012 14:26, gnath wrote:
> We defined our data sources in a Spring configuration file. We did not have any DB configuration defined on the server.


Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by gnath <ga...@yahoo.com>.
We defined our data sources in a Spring configuration file. We did not have any DB configuration defined on the server.

Thanks
-G




Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Pid * <pi...@pidster.com>.
On 27 Jan 2012, at 05:32, gnath <ga...@yahoo.com> wrote:

> Hello Chris,
>
>
> After seeing the initial connection pool issue, I started searching online for help and found this article:
> http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
> so I thought Tomcat's jar might bring some improvement. By the way, we had commons-dbcp-1.3.jar. Do you recommend upgrading to a newer Commons DBCP jar instead of using tomcat-jdbc.jar?

Tomcat ships with a DBCP implementation of its own.

How and where are you defining the database?


p





Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by gnath <ga...@yahoo.com>.
Hello Chris, 


After seeing the initial connection pool issue, I started searching online for help and found this article:
http://vigilbose.blogspot.com/2009/03/apache-commons-dbcp-and-tomcat-jdbc.html
so I thought Tomcat's jar might bring some improvement. By the way, we had commons-dbcp-1.3.jar. Do you recommend upgrading to a newer Commons DBCP jar instead of using tomcat-jdbc.jar?

Since we are running Tomcat 6.0.35, which did not ship with tomcat-jdbc.jar, we downloaded version 1.1.1 of the jar, dropped it into WEB-INF/lib, and started using it.


I agree with what you are saying about leaking connections and plan to set the logAbandoned flag as you suggested.


However, I was about to file a new issue but will describe it here as well. We have 2 servers running Tomcat (same code, same configuration). After we switched to tomcat-jdbc.jar and set the 'removeAbandoned' flag to true, one of the servers is doing great (of course, I accept that the pool is just cleaning up the mess), but we saw a new issue on the second server: it has not been releasing connections, and the count has been growing slowly but consistently. So I collected a thread dump and saw a deadlock:

Found one Java-level deadlock:
=============================
"catalina-exec-1":
  waiting to lock monitor 0x000000005d7944b8 (object 0x00000005bd522568, a com.mysql.jdbc.Connection),
  which is held by "[Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]"
"[Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]":
  waiting to lock monitor 0x000000005dcdea28 (object 0x00000005bd659ce8, a com.mysql.jdbc.ResultSet),
  which is held by "catalina-exec-1"

Java stack information for the threads listed above:
===================================================
"catalina-exec-1":
        at com.mysql.jdbc.Connection.getCharsetConverter(Connection.java:3177)
        - waiting to lock <0x00000005bd522568> (a com.mysql.jdbc.Connection)
        at com.mysql.jdbc.Field.getStringFromBytes(Field.java:583)
        at com.mysql.jdbc.Field.getName(Field.java:487)
        at com.mysql.jdbc.ResultSet.buildIndexMapping(ResultSet.java:593)
        at com.mysql.jdbc.ResultSet.findColumn(ResultSet.java:926)
        - locked <0x00000005bd659ce8> (a com.mysql.jdbc.ResultSet)
        at com.mysql.jdbc.ResultSet.getInt(ResultSet.java:2401)

"[Pool-Cleaner]:Tomcat Connection Pool[1-1015483951]":
        at com.mysql.jdbc.ResultSet.close(ResultSet.java:736)
        - waiting to lock <0x00000005bd659ce8> (a com.mysql.jdbc.ResultSet)
        at com.mysql.jdbc.Statement.realClose(Statement.java:1606)
        - locked <0x00000005bd522568> (a com.mysql.jdbc.Connection)
        - locked <0x00000005bd5e81c0> (a com.mysql.jdbc.ServerPreparedStatement)
        at com.mysql.jdbc.PreparedStatement.realClose(PreparedStatement.java:1703)
        at com.mysql.jdbc.ServerPreparedStatement.realClose(ServerPreparedStatement.java:901)
        - locked <0x00000005bd525ba0> (a java.lang.Object)
        - locked <0x00000005bd522568> (a com.mysql.jdbc.Connection)
        - locked <0x00000005bd5e81c0> (a com.mysql.jdbc.ServerPreparedStatement)
        at com.mysql.jdbc.Connection.closeAllOpenStatements(Connection.java:2126)
        at com.mysql.jdbc.Connection.realClose(Connection.java:4422)
        at com.mysql.jdbc.Connection.close(Connection.java:2098)
        - locked <0x00000005bd522568> (a com.mysql.jdbc.Connection)
        at org.apache.tomcat.jdbc.pool.PooledConnection.disconnect(PooledConnection.java:320)



Please help us with this. Could it be a problem with tomcat-jdbc.jar?

Thanks
-G
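(Editor's note: the trace above is a classic lock-ordering inversion: the worker thread locks the ResultSet monitor and then wants the Connection monitor, while the pool cleaner holds the Connection monitor and wants the ResultSet. A minimal, self-contained illustration of the same pattern, with illustrative names; running it blocks both threads forever:)

    public class LockOrderDeadlock {
        // Two monitors standing in for the Connection and ResultSet
        // monitors in the trace above.
        private static final Object CONNECTION = new Object();
        private static final Object RESULT_SET = new Object();

        public static void main(String[] args) {
            Thread worker = new Thread(new Runnable() {   // like catalina-exec-1
                public void run() {
                    synchronized (RESULT_SET) {           // locks the ResultSet first...
                        pause();
                        synchronized (CONNECTION) { }     // ...then wants the Connection
                    }
                }
            });
            Thread cleaner = new Thread(new Runnable() {  // like the pool cleaner
                public void run() {
                    synchronized (CONNECTION) {           // locks the Connection first...
                        pause();
                        synchronized (RESULT_SET) { }     // ...then wants the ResultSet
                    }
                }
            });
            worker.start();
            cleaner.start();
        }

        private static void pause() {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        }
    }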



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Christopher Schultz <ch...@christopherschultz.net>.

G,

On 1/25/12 11:53 PM, gnath wrote:
> As you have suggested, I started collecting the thread dumps

Thread dumps will set you free. Well, not really. Instead, they will
tell you where your webapp is breaking, which usually means more work
for you. But at least the hard part is done: finding out what's breaking.

> when it happened again, and we saw some kind of DBCP connection
> pool issues leading to the 'Too many open files' error.

That will definitely do it.

> So we decided to replace Commons DBCP with tomcat-jdbc.jar
> (with the same configuration properties).

Why?

> After this change, things seemed fine for a few hours, but then we
> started seeing in the logs that the connection pool could not hand
> out any connections; all of them appeared to be busy. So we went
> ahead and added the configuration property 'removeAbandoned=true'
> in our Datasource configuration.

I would go back to DBCP unless you think you need to switch for some
reason.

I suspect you are leaking database connections and don't have a
suitable timeout for removal of "lost" database connections (or maybe
didn't have that set up in the first place).

You really need to enable "logAbandoned" so you can find out where
your connection leaks are, and fix them. In development, set
maxActive="1" and leave it there, forever. Also, set
logAbandoned="true" and always run like that in development. Running
like that in production isn't a bad idea, either.

> We are still watching the performance and the server behavior
> after these changes. Will keep you posted on how things turn
> out or if I see any further issues.

I suspect you are still leaking connections, but your pool is now
silently cleaning-up after the mess your webapp is making. Instrument
your pool. Fix your leaks.
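(Editor's note: for readers who want to see these settings concretely, here is a minimal sketch using the programmatic API of the tomcat-jdbc pool the poster switched to; Commons DBCP's BasicDataSource accepts the same removeAbandoned/removeAbandonedTimeout/logAbandoned/maxActive property names. The URL, driver, and credentials are placeholders:)

    import org.apache.tomcat.jdbc.pool.DataSource;
    import org.apache.tomcat.jdbc.pool.PoolProperties;

    public class PoolConfig {
        public static DataSource createDataSource() {
            PoolProperties p = new PoolProperties();
            p.setUrl("jdbc:mysql://localhost:3306/appdb"); // placeholder URL
            p.setDriverClassName("com.mysql.jdbc.Driver");
            p.setUsername("app");                          // placeholder credentials
            p.setPassword("secret");
            p.setMaxActive(1);               // development only: surfaces leaks at once
            p.setRemoveAbandoned(true);      // reclaim connections never returned
            p.setRemoveAbandonedTimeout(60); // seconds held before reclaiming
            p.setLogAbandoned(true);         // log the stack trace of the leaking code
            DataSource ds = new DataSource();
            ds.setPoolProperties(p);
            return ds;
        }
    }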

-chris



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by gnath <ga...@yahoo.com>.
Hi Chris, 

Thanks a lot for looking into this and answering all my questions. Sorry I could not reply in time. As you suggested, I started collecting thread dumps when it happened again, and we saw what looked like DBCP connection pool issues leading to the 'Too many open files' error. So we decided to replace Commons DBCP with tomcat-jdbc.jar (with the same configuration properties). After this change things seemed fine for a few hours, but then the logs showed that the connection pool could not hand out any connections; all of them appeared to be busy. So we went ahead and added the configuration property 'removeAbandoned=true' to our DataSource configuration.


We are still watching the performance and the server behavior after these changes.
Will keep you posted on how things turn out or if I see any further issues.


Thank you once again; I really appreciate your help.

Thanks
-G




Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Christopher Schultz <ch...@christopherschultz.net>.

G,

On 1/22/12 6:18 PM, gnath wrote:
> We have 2 connectors (one for http and another for https) using
> the tomcatThreadPool. I have connectionTimeout="20000" on the
> http connector. However, I was told that our https connector
> might not be used by the app, as our load balancer handles all
> the https traffic and just sends it to the http connector.

You might want to disable that HTTPS connector, but it's probably not
hurting you at all in this case -- just a bit of wasted resources. If
you are sharing a thread pool then there is no negative impact on the
number of threads and/or open files that you have to deal with, here.

> The ulimit settings were increased from the default 1024 to 4096
> by our admin. I am not sure how he did that, but I see the count
> as 4096 when I do ulimit -a.

Well, if your admin says it's right, I suppose it's right.

> For ulimit -u I see 'unlimited'.

That's good.

> For cat /proc/PID/limits, I get the following response:
> 
> Limit                     Soft Limit           Hard Limit           Units
> Max cpu time              unlimited            unlimited            seconds
> Max file size             unlimited            unlimited            bytes
> Max data size             unlimited            unlimited            bytes
> Max stack size            10485760             unlimited            bytes
> Max core file size        0                    unlimited            bytes
> Max resident set          unlimited            unlimited            bytes
> Max processes             unlimited            unlimited            processes
> Max open files            4096                 4096                 files
> Max locked memory         32768                32768                bytes
> Max address space         unlimited            unlimited            bytes
> Max file locks            unlimited            unlimited            locks
> Max pending signals       202752               202752               signals
> Max msgqueue size         819200               819200               bytes
> Max nice priority         0                    0
> Max realtime priority     0                    0

Those all look good to me.

> This morning Tomcat hung again, but this time it didn't say 'too many
> open files' in the logs; I only see the following in catalina.out:
> 
> org.apache.tomcat.util.http.Parameters processParameters INFO:
> Invalid chunk starting at byte [0] and ending at byte [0] with a
> value of [null] ignored
> org.apache.tomcat.util.http.Parameters processParameters INFO:
> Invalid chunk starting at byte [0] and ending at byte [0] with a
> value of [null] ignored

Hmm...

> When it hung (the java process was still up), I ran a few
> commands like lsof by PID and a couple of others.

Next time, take a thread dump as well. The fact that Tomcat hung up
without an OS problem (like Too Many Open Files) is probably not good.
If this happens again with an apparent hang with no stack traces in
the logs, take a thread dump and post it back here under a different
subject.
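(Editor's note: a thread dump is usually taken from outside the JVM with jstack PID or kill -3 PID, which writes to catalina.out; the same information is also reachable in-process through the core Thread API. A minimal sketch, with an illustrative class name:)

    import java.util.Map;

    public class ThreadDumper {
        // Print a stack trace for every live thread, similar to what
        // jstack or kill -3 (SIGQUIT) produces.
        public static void dumpAllThreads() {
            Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
            System.err.println("Live threads: " + traces.size());
            for (Map.Entry<Thread, StackTraceElement[]> e : traces.entrySet()) {
                Thread t = e.getKey();
                System.err.println("\"" + t.getName() + "\" state=" + t.getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.err.println("        at " + frame);
                }
                System.err.println();
            }
        }
    }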

> here is what I got:
> 
> lsof -p PID | wc -l
> 1342
> 
> lsof | wc -l
> 4520
> 
> lsof -u USER | wc -l
> 1953

Hmm I wonder if you are hitting a *user* or even *system* limit of
some kind (though a *NIX system with a hard limit of ~4500 file
descriptors seems entirely unreasonable). I also wonder how many
/processes/ and/or /threads/ you have running at once.

> After I killed the java process, the lsof count for the PID
> obviously returned to zero.

Of course.

> Is there any chance that Tomcat is ignoring the ulimit?

Those limits are not self-imposed: the OS imposes those limits. Tomcat
doesn't even know its own ulimit (of any kind), so it will simply
consume whatever resources you have configured it to use, and if it
hits a limit, the JVM will experience some kind of OS-related error.

> Some people on the web were saying something about setting this
> in catalina.sh.

Setting what? ulimit? I'd do it in setenv.sh because that's a more
appropriate place for that kind of thing. I'm also interested in what
the Internet has to say about what setting(s) to use.

> Please help with my ongoing issue. It is getting very hard to
> monitor the logs every minute and restart whenever it hangs with
> these kinds of issues. I very much appreciate your help with this.

Did this just start happening recently? Perhaps with an upgrade of
some component?

If you think this might actually be related to the number of file
handles being used by your thread pool, you might want to reduce the
maximum number of threads for that thread pool: a slightly less
responsive site is better than one that goes down all the time because
of hard resource limits.

-chris



Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by gnath <ga...@yahoo.com>.
Thanks, Chris, for looking into this.

Here are the answers to the questions you asked.

We have 2 connectors (one for http and another for https) using the tomcatThreadPool. I have connectionTimeout="20000" on the http connector. However, I was told that our https connector might not be used by the app, as our load balancer handles all the https traffic and just sends it to the http connector.

The ulimit settings were increased from the default 1024 to 4096 by our admin. I am not sure how he did that, but I see the count as 4096 when I do ulimit -a.

For ulimit -u I see 'unlimited'.

For cat /proc/PID/limits, I get the following response:

Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            10485760             unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             unlimited            unlimited            processes 
Max open files            4096                 4096                 files     
Max locked memory         32768                32768                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       202752               202752               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0 



This morning Tomcat hung again, but this time it didn't say 'too many open files' in the logs; I only see the following in catalina.out:

org.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of [null] ignored
org.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of [null] ignored

When it hung (the java process was still up), I ran a few commands like lsof by PID and a couple of others. Here is what I got:

lsof -p PID| wc -l
1342

lsof | wc -l
4520

lsof -u USER| wc -l
1953

After I killed the java process, the lsof count for the PID obviously returned to zero.


Is there any chance that Tomcat is ignoring the ulimit? Some people on the web were saying something about setting this in catalina.sh.

Please help with my ongoing issue. It is getting very hard to monitor the logs every minute and restart whenever it hangs with these kinds of issues. I very much appreciate your help with this.

Thanks
-G




Re: Tomcat 6.0.35-SocketException: Too many open files issue with

Posted by Christopher Schultz <ch...@christopherschultz.net>.

G,

On 1/22/12 3:01 AM, gnath wrote:
> We have been seeing "SocketException: Too many open files" in our
> production environment (Linux, running Tomcat 6.0.35 with Sun's
> JDK 1.6.30) every day, and it requires a restart of Tomcat. When
> this happened for the first time, we searched online and found
> people suggesting to increase the file descriptor limit, so we
> increased it to 4096. But the problem persists. We also have the
> Orion App Server running on the same machine, but during the day,
> when we check the open file descriptors with ls -l /proc/PID/fd,
> the count is always less than 1000 combined for both Orion and
> Tomcat.
> 
> Here is the exception we see pouring into the logs once it starts;
> it requires us to kill the java process and restart Tomcat. Our
> Tomcat connector is configured with maxThreads=500 and
> minSpareThreads=50 in server.xml.

How many connectors do you have? If you have more than one connector
with 500 threads, then you can have more threads than you might be
expecting.

> SEVERE: Socket accept failed
> java.net.SocketException: Too many open files
>         at java.net.PlainSocketImpl.socketAccept(Native Method)
>         at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
>         at java.net.ServerSocket.implAccept(ServerSocket.java:462)
>         at java.net.ServerSocket.accept(ServerSocket.java:430)
>         at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
>         at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
>         at java.lang.Thread.run(Thread.java:662)
> 
> ulimit -a for the user running Tomcat gives:
> 
> open files                      (-n) 4096

How did you set the ulimit for this user? Did you do it in a login
script or something, or just at the command-line at some point?

How about (-u) max user processes or threads-per-process or anything
like that?

Sometimes the "Too many open files" message is not entirely accurate.

What does 'cat /proc/PID/limits' show you?

-chris

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org