Posted to users@tomcat.apache.org by Shawn Heisey <ap...@elyograg.org> on 2018/03/24 04:08:02 UTC

Trying to chase down "too many connection" problems with DB

This message is long.  Lots of details, a fair amount of history.

The primary Tomcat version we've got is 7.0.42. Specifically, it is the 
Tomcat that's included with Liferay 6.2. This is why we haven't 
attempted an upgrade even though the version we're running is five years 
old -- we don't want to rock the boat too hard and risk everything not 
working.  Liferay is battle-tested with that version of Tomcat.

Background:

Every now and then, we find that our MySQL master has reached the 
connection limit of 600 connections, and websites stop working, because 
they are trying to open new connections and failing.  When I look at the 
DB server, it shows a HUGE number of connections (300 to 500 out of the 
600 max) that have idle times between one hour and eight hours.  The 
MySQL server-side idle timeout is at the default of 28800 seconds -- 
eight hours.  The idle connections are coming from the webservers.

There are five main webservers, two of which run exclusively Liferay 
6.2, and three that run older apps on a Tomcat version that's probably 
ANCIENT, as well as Liferay 6.1 with Tomcat 7.0.23.  I can't
seem to figure out what version the really ancient one is.  There are no 
version numbers that I can see in the files in the program directory.  
We also have a staging server, a dev server, and a few ancillary 
systems.  The staging and dev servers mainly use a dev/staging database 
server, but do need to connect to the main DB server for some things.

When we run out of connections, each of these five webservers has nearly 
100 (and sometimes MORE than 100) open connections to the database 
server.  And the majority of them have been idle for a LONG time.

A number of JNDI database resources are configured in conf/context.xml.  
So we have connection pooling.  But even though there are TONS of 
connections already established from Tomcat, and completely idle from 
the DB server's perspective, the application is STILL trying to open a 
NEW connection when somebody visits a page.  This is the precise issue 
that connection pooling SHOULD be preventing.
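
For clarity, here's a minimal sketch (class name and query are hypothetical, 
not our actual code) of how I understand the webapps are *supposed* to use 
these JNDI pools -- borrow a connection, use it, and return it to the pool 
by closing it:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PooledLookup {
        public static String nameForId(int id) throws Exception {
            // Look up the pool defined as a JNDI resource in conf/context.xml
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/REDACTED");
            // try-with-resources returns the connection to the pool on close()
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                         "SELECT name FROM example_table WHERE id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }

If a code path borrows a connection and never calls close(), the pool counts 
it as active forever -- which would match the behavior we're seeing.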

I've discussed this with the commons-user mailing list, from a DBCP 
perspective.  I started there because I have some other code that I 
wrote, which doesn't run in Tomcat and uses DBCP2 natively.  I'm a lot 
more familiar with my own code than with the code running in Tomcat.

Based on my discussions with commons-user, I really think that the 
reason that the DB pools are trying to create more connections even 
though there's already a lot of them open is because the connections are 
actually abandoned.  I suspect they were never closed by the 
application, so the pool still has them as active, and thinks it can't 
use them.  Now that the discussion directly involves configuring pools 
using Tomcat's own DBCP implementation, they're not really able to help 
me any further.

If we are dealing with abandoned connections as I suspect, then I need 
to ask why abandoned connection removal isn't actually working, and how 
to configure it so that it DOES work.

We have figured out how to log the number of idle and active connections 
on the datasource, but it's going to take some time to get that logging 
into the production servers, so I don't have definitive proof that the 
connections are actually active.
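
For the curious, the approach is roughly the sketch below -- assuming the 
repackaged-DBCP factory from our current config; the actual production code 
may end up looking different:

    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import org.apache.tomcat.dbcp.dbcp.BasicDataSource;

    public class PoolStats {
        public static void logPoolStats() throws Exception {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/REDACTED");
            if (ds instanceof BasicDataSource) {
                BasicDataSource bds = (BasicDataSource) ds;
                // getNumActive() = borrowed and not yet returned to the pool;
                // getNumIdle() = sitting in the pool, ready for re-use
                System.out.println("jdbc/REDACTED active=" + bds.getNumActive()
                        + " idle=" + bds.getNumIdle());
            }
        }
    }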

Here's a redacted configuration that we have in place for one of the DB 
connection pools on the production servers:

         <Resource name="jdbc/REDACTED" auth="Container" 
factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory" 
driverClassName="com.mysql.jdbc.Driver" type="javax.sql.DataSource" 
maxActive="60" maxIdle="10" maxWait="30000" removeAbandoned="true" 
removeAbandonedTimeout="30" username="REDACTED" password="REDACTED" 
testOnBorrow="true" validationQuery="select 1" 
url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round" 
/>

This is the Tomcat documentation I'm using as a reference:

https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html

Looking at the Tomcat 7.0 jdbc documentation, the first thing that I notice 
is that the factory we have configured is not even mentioned in the 
documentation.

One person (who replied privately and not to commons-user) said that 
BOTH factory and type attribute values mentioned in the Tomcat 7.0 docs 
are completely wrong!  And another (who did reply via the list) said 
that the factory mentioned in the documentation is "the alternative 
pool" and that I should be using the one documented here instead.  But 
this page is REALLY lean on details of how to actually set up the resource:

https://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html#Database_Connection_Pool_(DBCP)_Configurations

=====================================

So now, with the sordid past covered, I have three main questions:

1) I think this is the really burning question in my mind: Why is the 
server opening NEW connections when there are dozens of them already 
open?  Does this mean that (as I suspect) all those connections are 
abandoned?  If so, why are they not getting cleaned up, when we have 
removeAbandoned set to true and removeAbandonedTimeout set to a 
RIDICULOUSLY low value of 30 seconds?  (We MUST increase this value.  
The plan is one hour.)

2, background) We tried to reconfigure all the pools on our staging 
server to a new config.  They currently have a config much like what I 
shared above, but with smaller pool size numbers. Here's an example of 
what we tried to put in place:

         <Resource name="jdbc/REDACTED"
                         auth="Container"
factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
                         type="javax.sql.DataSource"
                         maxActive="10"
                         maxIdle="2"
                         minIdle="1"
                         maxAge="7200000"
                         removeAbandoned="true"
                         removeAbandonedTimeout="3600"
                         logAbandoned="true"
                         username="REDACTED"
                         password="REDACTED"
                         testOnBorrow="true"
                         testOnConnect="true"
                         testOnReturn="true"
                         testWhileIdle="true"
                         validationQuery="SELECT 1"
                         validationQueryTimeout="5"
                         validationInterval="5000"
                         minEvictableIdleTimeMillis="900000"
                         allowMultiQueries="false"
url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
                         />

When we tried to start the service with a config like this for all of 
the pools (there are half a dozen of them), it failed to start 
correctly, with "too many connections" errors.

The size of the pools on this staging server is a LOT smaller than what 
we have configured in production.  The DB server it connects to is a lot 
less beefy, and has a limit of 400 connections.  I am told by the 
developer that did the restart that when this service started up, it had 
made 140 connections to the dev/staging DB server before failing.  That 
shouldn't be possible: with half a dozen pool configs that each have 
maxActive set to 10, the total should top out at about 60 connections.

2, question) Have I got something wrong in the new config? If I have, 
and there are recommendations for what I should change, please tell me 
WHY each change is recommended, and if possible, point me at relevant 
documentation.  If you can, tell me how I may have misread the Tomcat 
documentation, which I used extensively in coming up with the new config.

Side note/question:  One of the replies I got on the DBCP list referred 
me to this Javadoc:

https://commons.apache.org/proper/commons-dbcp/api-1.4/org/apache/commons/dbcp/BasicDataSource.html#getRemoveAbandoned()

It basically says that the 1.4 version of the standard DBCP library 
won't even try to eliminate abandoned connections on a pool of size 60 
until there are more than 57 connections open on the one pool.  This 
sounds like a REALLY bad idea.  The Tomcat jdbc documentation lists 
"abandonWhenPercentageFull" as an attribute, and the description there 
hints that Tomcat's jdbc implementation doesn't suffer from the same 
problem.  But with the unusual factory we have configured, I don't know 
if our configuration is even properly honored.  Does the tomcat pool 
have that same issue?

Because we have multiple pools defined, and some of those pools have a 
maxActive number that's pretty large, I'm betting that we run out of 
connections long before that one pool has opened 57 of them.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by "André Warnier (tomcat)" <aw...@ice-sa.com>.
On 26.03.2018 23:36, Shawn Heisey wrote:
> On 3/26/2018 2:39 PM, André Warnier (tomcat) wrote:
>> Just a question, more to satisfy my curiosity : when you have these
>> hundreds of "pending" connections, in what state are they, TCP/IP-wise ?
>
> Not sure where you got "pending".  I don't recall mentioning anything
> like that.
>
> The TCP state is ESTABLISHED.  Entries in the TCP table on port 3306
> with the ESTABLISHED state is how we have Xymon keeping track of the
> connection count.
>
> Slightly redacted screenshot:
>
> https://www.dropbox.com/s/4yj8y2hzfrnr2aj/xymon-port-monitoring-mysql.png?dl=0
>
> It's not a low-level networking issue.
>

Sorry for my use of the vague "pending". I meant "unused", "just sitting there".

I wasn't thinking of a low-level networking issue, but of a case like the 
following. I am not a java expert, so I'll describe it in text:

Some code creates a new object, which implicitly opens a connection to a back-end server.
Then this code returns, and the created object is now out of scope (including its 
underlying connection). The unreachable object at some point will be destroyed, giving the 
programmer a warm feeling that all is well.
But with java this does not necessarily happen right away : it will happen when there is a 
garbage collection. And only at that point will the underlying connection be closed and 
disappear.
So if you have a very comfortable Heap, it may take a very long time for these "abandoned" 
connections to be closed, as part of the destruction of the objects which contain them.
And in the meantime they accumulate (without being used, because they can only be used via 
the object, and the object is now unreachable).
The non-intuitive part is that, the bigger the available Heap, the more such connections 
accumulate, because the interval between GCs is longer.
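
In rough, simplified java (hypothetical names, just to illustrate the 
pattern described above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    class BackendClient {
        private final Connection conn; // opened implicitly by the constructor

        BackendClient(String url) throws SQLException {
            this.conn = DriverManager.getConnection(url);
        }
        // No close() method anywhere. Once a BackendClient becomes
        // unreachable, its connection is only closed if/when a garbage
        // collection runs and the driver's finalizer cleans it up.
    }

With a big heap, that GC may be hours away, and in the meantime the 
ESTABLISHED connections just pile up on the server.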

In my case, it was a bit more tricky still, in the sense that the connection was closed on 
one side and not on the other, so these connections appeared in the "CLOSE_WAIT" state. 
But under Linux, when reaching several hundreds such CLOSE_WAIT connections, the TCP/IP 
stack for some reason stops being responsive, leading to system paralysis.
(So we had the problem on the front-end client, not on the back-end server).

A forced java garbage collection cleared things up (which gave us a hint about what was 
happening), but it nevertheless took us a while to diagnose this properly (we also did not 
have the source code of the application doing this, so we had to guess).

The above is probably not your case exactly, but it may give you some hint as to how to 
find the underlying issue quicker.




Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/26/2018 2:39 PM, André Warnier (tomcat) wrote:
> Just a question, more to satisfy my curiosity : when you have these
> hundreds of "pending" connections, in what state are they, TCP/IP-wise ?

Not sure where you got "pending".  I don't recall mentioning anything
like that.

The TCP state is ESTABLISHED.  Entries in the TCP table on port 3306
with the ESTABLISHED state is how we have Xymon keeping track of the
connection count.

Slightly redacted screenshot:

https://www.dropbox.com/s/4yj8y2hzfrnr2aj/xymon-port-monitoring-mysql.png?dl=0

It's not a low-level networking issue.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by "André Warnier (tomcat)" <aw...@ice-sa.com>.
Hi.
A question at the end.

On 26.03.2018 22:01, Shawn Heisey wrote:
> On 3/26/2018 11:28 AM, Christopher Schultz wrote:
>> The pool doesn't kill abandoned connections. It simply removes them
>> from the pool. Otherwise, you're right: you'd have torches and
>> pitchforks everywhere.
>
> That is a key piece of information.  And it should have perhaps been
> obvious from the description, but it wasn't.  I have been thinking this
> whole time that when abandoned connection removal is triggered, the
> connection was closed as well as removed from the pool.
>
> If the removal is literally just "take the object out of the pool" then
> it sounds like that removal isn't going to affect the application USING
> the connection either -- so the customer probably doesn't have any
> reason to sharpen their pitchfork.  If I'm right, the connection may get
> removed from the pool, but as long as the query doesn't throw an
> exception, JDBC is still going to plug along and return results just fine.
>
> Is there any facility in the pool that can actually close an abandoned
> connection?  Yes, I am aware that ultimately the responsibility for that
> is ours, and represents bad programming.  Just hoping the pool can make
> the problem invisible to users.
>
> Just to ask, since we're clearing up misconceptions ... when the idle
> eviction thread evicts a connection, is that connection closed?  At the
> moment I don't think we have idle eviction configured, but it's
> something I want to add.
>
>> Also, set logAbandoned="true" and you'll get a helpful message every
>> time a connection is considered abandoned and you'll find out if you
>> have a connection leak (as opposed to simply a too-short "abandoned"
>> setting).
>
> I will see about getting this into the existing config.  And brace for
> the flood of log messages. ;)  I have added it already to the new config
> I'm working on.
>
>> It's probably happening, just not meeting your expectations. Those
>> abandoned connections will pretty much live forever, no longer being
>> managed by the pool and yet still counting as being used by the
>> server. Maybe lower your idle-timeout on the server to help with this.
>
> Yep, I've already got the gears turning to get that lowered.
>

Just a question, more to satisfy my curiosity : when you have these hundreds of "pending" 
connections, in what state are they, TCP/IP-wise ?
You can find that out with the netstat command :
- under Linux, something like
netstat -pan --tcp | grep "(destination ip and/or port)"
- under Windows, the netstat command also exists, with slightly different parameters.

(destination ip and/or port): substitute the IP and/or port of the back-end 
system(s) to which these connections are made; this is just to reduce the 
number of lines.  For your MySQL server, that would be something like: 
netstat -pan --tcp | grep ":3306"
The second-to-last column should list the connection status.
The last column should show the program and PID of the process "owning" these 
connections (in your case, that may just be "java"; but the PID may tell you more.)

The reason I'm asking, is because some years ago I was bitten by a bad programming issue 
also related to connections not being properly closed, and maybe this is similar.




Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/26/2018 11:28 AM, Christopher Schultz wrote:
> The pool doesn't kill abandoned connections. It simply removes them
> from the pool. Otherwise, you're right: you'd have torches and
> pitchforks everywhere.

That is a key piece of information.  And it should have perhaps been
obvious from the description, but it wasn't.  I have been thinking this
whole time that when abandoned connection removal is triggered, the
connection was closed as well as removed from the pool.

If the removal is literally just "take the object out of the pool" then
it sounds like that removal isn't going to affect the application USING
the connection either -- so the customer probably doesn't have any
reason to sharpen their pitchfork.  If I'm right, the connection may get
removed from the pool, but as long as the query doesn't throw an
exception, JDBC is still going to plug along and return results just fine.

Is there any facility in the pool that can actually close an abandoned
connection?  Yes, I am aware that ultimately the responsibility for that
is ours, and represents bad programming.  Just hoping the pool can make
the problem invisible to users.

Just to ask, since we're clearing up misconceptions ... when the idle
eviction thread evicts a connection, is that connection closed?  At the
moment I don't think we have idle eviction configured, but it's
something I want to add.
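
For reference, my understanding is that enabling the evictor on the existing 
DBCP-based Resource would mean adding attributes along these lines (values 
illustrative, not tested):

         timeBetweenEvictionRunsMillis="300000"
         minEvictableIdleTimeMillis="900000"
         numTestsPerEvictionRun="5"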

> Also, set logAbandoned="true" and you'll get a helpful message every
> time a connection is considered abandoned and you'll find out if you
> have a connection leak (as opposed to simply a too-short "abandoned"
> setting).

I will see about getting this into the existing config.  And brace for
the flood of log messages. ;)  I have added it already to the new config
I'm working on.

> It's probably happening, just not meeting your expectations. Those
> abandoned connections will pretty much live forever, no longer being
> managed by the pool and yet still counting as being used by the
> server. Maybe lower your idle-timeout on the server to help with this.

Yep, I've already got the gears turning to get that lowered.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Phil Steitz <ph...@gmail.com>.
On 3/27/18 5:41 PM, Shawn Heisey wrote:
> On 3/27/2018 11:03 AM, Phil Steitz wrote:
>> Not exactly, if what you are using is the DBCP pool.  To see the
> The factory in use right now is
> "org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory".  Information
> gathered previously in this thread told me that this is DBCP code,
> repackaged into the tomcat package space to avoid conflicting with the
> official commons-dbcp package.  If this is effectively unmodified DBCP,
> then what you wrote below likely applies.

As Mark said above, based on your version of tomcat, it is exactly
dbcp 1.4, pool 1.5.7.
>
>> details of what is going on, look at the removeAbandoned code in
>> DBCP's AbandonedObjectPool.  It calls
>> o.a.c.pool.GenericObjectPool#invalidateObject, which calls
>> o.a.c.dbco.PoolableConnectionFactory#destroyObject to close the
>> connection.  If an exception occurs, it is swallowed by
>> removeAbandoned, but it dumps a stack trace.
>>
>> So connections should in fact be closed if they are detected as
>> abandoned.  As I said on commons-user, in your setup, that won't
>> happen unless borrows are attempted when there are 57+ open
>> connections.  The removeAbandoned method is called *only* by
>> borrowObject in DBCP 1.x, with this test:
> It would not surprise me to learn that on the pool with maxActive=60,
> the pool is managing less than 57 connections.  Until I can get some
> logging in place on production to show me the active and idle connection
> counts, I do not know what's actually happening.
>
> If I can successfully get a config using
> "org.apache.tomcat.jdbc.pool.DataSourceFactory" operational (which seems
> to be failing right now with "too many connections" on startup), is it
> true that this pool does not require maxActive-3 connections before
> abandoned removal kicks in?  The description for
> abandonWhenPercentageFull in the tomcat documentation implies that this
> is the case.
>
> So now I have one person telling me that removeAbandoned DOES close
> connections, and another saying that it does NOT close connections.  Is
> there a conflict here, or are both of these statements correct for
> different pool implementations?  Keep in mind that the current config
> uses "org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"

Using that one, you get the dbcp / pool combo above and abandoned
connection removal does try to close connections when they are
marked abandoned.  As I said, if the close fails, the failure should
result in a stack trace written to stderr.  There is a unit test in
dbcp 1.4, TestAbandonedBasicDataSource#testAbandonedClose that
verifies this behavior.

Phil

>  and that I
> would like to try a new config with
> "org.apache.tomcat.jdbc.pool.DataSourceFactory" ... unless there is a
> good reason to NOT use that factory.  Which, by the way, is the factory
> that tomcat's documentation SAYS to use.
>
> The new config with the new factory fails to start on our staging
> server, but aside from seeing "too many connection" exceptions in the
> catalina log, I have no concrete information about what happened.  I
> didn't do the restart, and wasn't watching the system when it happened. 
> Switching back to the old config fixed the tomcat startup.  What the
> developer described to me shouldn't have been possible.  I need to do
> the restart myself and watch the system.
>
> I've come up with another question, related to this, but headed in a
> different direction.  I'll write a new email to the list for that.
>
> Thanks,
> Shawn
>




Re: Trying to chase down "too many connection" problems with DB

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Shawn,

On 3/27/18 8:41 PM, Shawn Heisey wrote:
> So now I have one person telling me that removeAbandoned DOES
> close connections, and another saying that it does NOT close
> connections.  Is there a conflict here, or are both of these
> statements correct for different pool implementations?

Phil knows what he's talking about; I apparently do not.

I guess I've just never pushed DBCP over the hump to start actually
closing connections on me. From my perspective, they stayed open and
idle until either a service restart or they were closed by the
long-running process in the small number of cases where this kind of
thing happened.

-chris



Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/27/2018 11:03 AM, Phil Steitz wrote:
> Not exactly, if what you are using is the DBCP pool.  To see the

The factory in use right now is
"org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory".  Information
gathered previously in this thread told me that this is DBCP code,
repackaged into the tomcat package space to avoid conflicting with the
official commons-dbcp package.  If this is effectively unmodified DBCP,
then what you wrote below likely applies.

> details of what is going on, look at the removeAbandoned code in
> DBCP's AbandonedObjectPool.  It calls
> o.a.c.pool.GenericObjectPool#invalidateObject, which calls
> o.a.c.dbco.PoolableConnectionFactory#destroyObject to close the
> connection.  If an exception occurs, it is swallowed by
> removeAbandoned, but it dumps a stack trace.
>
> So connections should in fact be closed if they are detected as
> abandoned.  As I said on commons-user, in your setup, that won't
> happen unless borrows are attempted when there are 57+ open
> connections.  The removeAbandoned method is called *only* by
> borrowObject in DBCP 1.x, with this test:

It would not surprise me to learn that on the pool with maxActive=60,
the pool is managing less than 57 connections.  Until I can get some
logging in place on production to show me the active and idle connection
counts, I do not know what's actually happening.

If I can successfully get a config using
"org.apache.tomcat.jdbc.pool.DataSourceFactory" operational (which seems
to be failing right now with "too many connections" on startup), is it
true that this pool does not require maxActive-3 connections before
abandoned removal kicks in?  The description for
abandonWhenPercentageFull in the tomcat documentation implies that this
is the case.

So now I have one person telling me that removeAbandoned DOES close
connections, and another saying that it does NOT close connections.  Is
there a conflict here, or are both of these statements correct for
different pool implementations?  Keep in mind that the current config
uses "org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory" and that I
would like to try a new config with
"org.apache.tomcat.jdbc.pool.DataSourceFactory" ... unless there is a
good reason to NOT use that factory.  Which, by the way, is the factory
that tomcat's documentation SAYS to use.

The new config with the new factory fails to start on our staging
server, but aside from seeing "too many connection" exceptions in the
catalina log, I have no concrete information about what happened.  I
didn't do the restart, and wasn't watching the system when it happened. 
Switching back to the old config fixed the tomcat startup.  What the
developer described to me shouldn't have been possible.  I need to do
the restart myself and watch the system.

I've come up with another question, related to this, but headed in a
different direction.  I'll write a new email to the list for that.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Phil Steitz <ph...@gmail.com>.
On 3/28/18 11:29 AM, Christopher Schultz wrote:
> Phil,
>
> On 3/27/18 1:03 PM, Phil Steitz wrote:
> > On 3/26/18 10:28 AM, Christopher Schultz wrote:
> >> Shawn,
> >>
> >> On 3/25/18 12:17 AM, Shawn Heisey wrote:
> >>> On 3/24/2018 5:04 PM, Mark Thomas wrote:
> >>>> Regarding your configuration: <Resource name="jdbc/REDACTED"
> >>>> auth="Container"
> >>>> factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
> >>>> driverClassName="com.mysql.jdbc.Driver"
> >>>> type="javax.sql.DataSource" maxActive="60" maxIdle="10"
> >>>> maxWait="30000" removeAbandoned="true"
> >>>> removeAbandonedTimeout="30" username="REDACTED"
> >>>> password="REDACTED" testOnBorrow="true"
> >>>> validationQuery="select 1"
> >>>> url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
> >>>> />
> >>>>
> >>>> Generally, that looks OK but I'd strongly recommend that you
> >>>> use "autoReconnect=false" in the URL. autoReconnect is known
> >>>> to be problematic with connection pools.
> >>>>
> >>>> The removeAbandonedTimeout looks low but if all the queries
> >>>> are expected to be well under 30s then that should be OK.
> >>
> >>> Somehow I did not see this part of your email at all when I
> >>> read it the first time.  I just noticed it.  My previous reply
> >>> probably has you scratching your head a little bit.  I'm sorry
> >>> about that!
> >>
> >>> The timeout of 30 seconds is EXTREMELY low.  And if it were
> >>> being honored, we would have customers lining up outside the
> >>> office with pitchforks.  The webapp has reporting ability for
> >>> users with elevated privileges, and a lot of those reports take
> >>> a minute or two to run, sometimes even longer.  So if the pool
> >>> were killing connections after 30 seconds, the reporting
> >>> mechanism would be failing a LOT.  If you check the *planned*
> >>> configuration, you'll see that I have increased this timeout to
> >>> 3600.  I wanted to make it 900, but some of the developers are
> >>> worried that 900 is too aggressive.
> >>
> >> The pool doesn't kill abandoned connections. It simply removes
> >> them from the pool. Otherwise, you're right: you'd have torches
> >> and pitchforks everywhere.
>
> > Not exactly, if what you are using is the DBCP pool.  To see the
> > details of what is going on, look at the removeAbandoned code in
> > DBCP's AbandonedObjectPool.  It calls
> > o.a.c.pool.GenericObjectPool#invalidateObject, which calls
> > o.a.c.dbco.PoolableConnectionFactory#destroyObject to close the
> > connection.  If an exception occurs, it is swallowed by
> > removeAbandoned, but it dumps a stack trace.
>
> Is this DBCP, or DBCP2, or both?
>
> OP is running Tomcat 7.0.x. which uses DBCP(1.x)

Above references are to the 1.x versions.  In 2.x, the
AbandonedObjectPool was moved into Commons pool, but the behavior is
basically the same - connections marked abandoned get invalidated by
the pool and destroyed by the factory.  Exceptions on close are
swallowed with stack traces dumped to stderr.

Phil
>
> -chris




Re: Trying to chase down "too many connection" problems with DB

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Phil,

On 3/27/18 1:03 PM, Phil Steitz wrote:
> On 3/26/18 10:28 AM, Christopher Schultz wrote:
>> Shawn,
>> 
>> On 3/25/18 12:17 AM, Shawn Heisey wrote:
>>> On 3/24/2018 5:04 PM, Mark Thomas wrote:
>>>> Regarding your configuration: <Resource name="jdbc/REDACTED" 
>>>> auth="Container" 
>>>> factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory" 
>>>> driverClassName="com.mysql.jdbc.Driver" 
>>>> type="javax.sql.DataSource" maxActive="60" maxIdle="10" 
>>>> maxWait="30000" removeAbandoned="true" 
>>>> removeAbandonedTimeout="30" username="REDACTED" 
>>>> password="REDACTED" testOnBorrow="true"
>>>> validationQuery="select 1"
>>>> url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
>>>> />
>>>> 
>>>> Generally, that looks OK but I'd strongly recommend that you
>>>> use "autoReconnect=false" in the URL. autoReconnect is known
>>>> to be problematic with connection pools.
>>>> 
>>>> The removeAbandonedTimeout looks low but if all the queries
>>>> are expected to be well under 30s then that should be OK.
>> 
>>> Somehow I did not see this part of your email at all when I
>>> read it the first time.  I just noticed it.  My previous reply
>>> probably has you scratching your head a little bit.  I'm sorry
>>> about that!
>> 
>>> The timeout of 30 seconds is EXTREMELY low.  And if it were
>>> being honored, we would have customers lining up outside the
>>> office with pitchforks.  The webapp has reporting ability for
>>> users with elevated privileges, and a lot of those reports take
>>> a minute or two to run, sometimes even longer.  So if the pool
>>> were killing connections after 30 seconds, the reporting
>>> mechanism would be failing a LOT.  If you check the *planned*
>>> configuration, you'll see that I have increased this timeout to
>>> 3600.  I wanted to make it 900, but some of the developers are
>>> worried that 900 is too aggressive.
>> 
>> The pool doesn't kill abandoned connections. It simply removes
>> them from the pool. Otherwise, you're right: you'd have torches
>> and pitchforks everywhere.
> 
> Not exactly, if what you are using is the DBCP pool.  To see the 
> details of what is going on, look at the removeAbandoned code in 
> DBCP's AbandonedObjectPool.  It calls 
> o.a.c.pool.GenericObjectPool#invalidateObject, which calls 
> o.a.c.dbco.PoolableConnectionFactory#destroyObject to close the 
> connection.  If an exception occurs, it is swallowed by 
> removeAbandoned, but it dumps a stack trace.

Is this DBCP, or DBCP2, or both?

OP is running Tomcat 7.0.x. which uses DBCP(1.x).

-chris



Re: Trying to chase down "too many connection" problems with DB

Posted by Phil Steitz <ph...@gmail.com>.
On 3/26/18 10:28 AM, Christopher Schultz wrote:
> Shawn,
>
> On 3/25/18 12:17 AM, Shawn Heisey wrote:
> > On 3/24/2018 5:04 PM, Mark Thomas wrote:
> >> Regarding your configuration: <Resource name="jdbc/REDACTED"
> >> auth="Container"
> >> factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
> >> driverClassName="com.mysql.jdbc.Driver"
> >> type="javax.sql.DataSource" maxActive="60" maxIdle="10"
> >> maxWait="30000" removeAbandoned="true"
> >> removeAbandonedTimeout="30" username="REDACTED"
> >> password="REDACTED" testOnBorrow="true" validationQuery="select
> >> 1"
> >> url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
> >> />
> >>
> >> Generally, that looks OK but I'd strongly recommend that you use
> >> "autoReconnect=false" in the URL. autoReconnect is known to be
> >> problematic with connection pools.
> >>
> >> The removeAbandonedTimeout looks low but if all the queries are
> >> expected to be well under 30s then that should be OK.
>
> > Somehow I did not see this part of your email at all when I read it
> > the first time.  I just noticed it.  My previous reply probably has
> > you scratching your head a little bit.  I'm sorry about that!
>
> > The timeout of 30 seconds is EXTREMELY low.  And if it were being
> > honored, we would have customers lining up outside the office with
> > pitchforks.  The webapp has reporting ability for users with
> > elevated privileges, and a lot of those reports take a minute or
> > two to run, sometimes even longer.  So if the pool were killing
> > connections after 30 seconds, the reporting mechanism would be
> > failing a LOT.  If you check the *planned* configuration, you'll
> > see that I have increased this timeout to 3600.  I wanted to make
> > it 900, but some of the developers are worried that 900 is too
> > aggressive.
>
> The pool doesn't kill abandoned connections. It simply removes them
> from the pool. Otherwise, you're right: you'd have torches and
> pitchforks everywhere.

Not exactly, if what you are using is the DBCP pool.  To see the
details of what is going on, look at the removeAbandoned code in
DBCP's AbandonedObjectPool.  It calls
o.a.c.pool.GenericObjectPool#invalidateObject, which calls
o.a.c.dbco.PoolableConnectionFactory#destroyObject to close the
connection.  If an exception occurs, it is swallowed by
removeAbandoned, but it dumps a stack trace.

So connections should in fact be closed if they are detected as
abandoned.  As I said on commons-user, in your setup, that won't
happen unless borrows are attempted when there are 57+ open
connections.  The removeAbandoned method is called *only* by
borrowObject in DBCP 1.x, with this test:

if (config != null
        && config.getRemoveAbandoned()
        && (getNumIdle() < 2)
        && (getNumActive() > getMaxActive() - 3)) {
    removeAbandoned();
}

Another thing you should do is to set logAbandoned to true so you
get a stack trace if / when abandoned connections are detected and
closed.

Phil


>
> Sounds like you need to raise that timeout by a LOT.
>
> Also, set logAbandoned="true" and you'll get a helpful message every
> time a connection is considered abandoned and you'll find out if you
> have a connection leak (as opposed to simply a too-short "abandoned"
> setting).
>
> > Because there are no pitchforks, I know that abandoned connection
> > removal is NOT happening, even though it IS configured.
>
> It's probably happening, just not meeting your expectations. Those
> abandoned connections will pretty much live forever, no longer being
> managed by the pool and yet still counting as being used by the
> server. Maybe lower your idle-timeout on the server to help with this.
>
> Hope that helps,
> -chris




Re: Trying to chase down "too many connection" problems with DB

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Shawn,

On 3/25/18 12:17 AM, Shawn Heisey wrote:
> On 3/24/2018 5:04 PM, Mark Thomas wrote:
>> Regarding your configuration: <Resource name="jdbc/REDACTED" 
>> auth="Container" 
>> factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory" 
>> driverClassName="com.mysql.jdbc.Driver" 
>> type="javax.sql.DataSource" maxActive="60" maxIdle="10" 
>> maxWait="30000" removeAbandoned="true" 
>> removeAbandonedTimeout="30" username="REDACTED" 
>> password="REDACTED" testOnBorrow="true" validationQuery="select
>> 1" 
>> url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=t
rue&amp;zeroDateTimeBehavior=round"
>>
>>
>> 
/>
>> 
>> Generally, that looks OK but I'd strongly recommend that you use 
>> "autoReconnect=false" in the URL. autoReconnect is known to be 
>> problematic with connection pools.
>> 
>> The removeAbandonedTimeout looks low but if all the queries are
>> expected to be well under 30s then that should be OK.
> 
> Somehow I did not see this part of your email at all when I read it
> the first time.  I just noticed it.  My previous reply probably has
> you scratching your head a little bit.  I'm sorry about that!
> 
> The timeout of 30 seconds is EXTREMELY low.  And if it were being 
> honored, we would have customers lining up outside the office with 
> pitchforks.  The webapp has reporting ability for users with
> elevated privileges, and a lot of those reports take a minute or
> two to run, sometimes even longer.  So if the pool were killing
> connections after 30 seconds, the reporting mechanism would be
> failing a LOT.  If you check the *planned* configuration, you'll
> see that I have increased this timeout to 3600.  I wanted to make
> it 900, but some of the developers are worried that 900 is too
> aggressive.

The pool doesn't kill abandoned connections. It simply removes them
from the pool. Otherwise, you're right: you'd have torches and
pitchforks everywhere.

Sounds like you need to raise that timeout by a LOT.

Also, set logAbandoned="true" and you'll get a helpful message every
time a connection is considered abandoned and you'll find out if you
have a connection leak (as opposed to simply a too-short "abandoned"
setting).

> Because there are no pitchforks, I know that abandoned connection 
> removal is NOT happening, even though it IS configured.

It's probably happening, just not meeting your expectations. Those
abandoned connections will pretty much live forever, no longer being
managed by the pool and yet still counting as being used by the
server. Maybe lower your idle-timeout on the server to help with this.

Hope that helps,
-chris



Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/24/2018 5:04 PM, Mark Thomas wrote:
> Regarding your configuration:
> <Resource
> name="jdbc/REDACTED"
> auth="Container"
> factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
> driverClassName="com.mysql.jdbc.Driver"
> type="javax.sql.DataSource"
> maxActive="60"
> maxIdle="10"
> maxWait="30000"
> removeAbandoned="true"
> removeAbandonedTimeout="30"
> username="REDACTED"
> password="REDACTED"
> testOnBorrow="true"
> validationQuery="select 1"
> url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
> />
>
> Generally, that looks OK but I'd strongly recommend that you use
> "autoReconnect=false" in the URL. autoReconnect is known to be
> problematic with connection pools.
>
> The removeAbandonedTimeout looks low but if all the queries are expected
> to be well under 30s then that should be OK.

Somehow I did not see this part of your email at all when I read it the 
first time.  I just noticed it.  My previous reply probably has you 
scratching your head a little bit.  I'm sorry about that!

The timeout of 30 seconds is EXTREMELY low.  And if it were being 
honored, we would have customers lining up outside the office with 
pitchforks.  The webapp has reporting ability for users with elevated 
privileges, and a lot of those reports take a minute or two to run, 
sometimes even longer.  So if the pool were killing connections after 30 
seconds, the reporting mechanism would be failing a LOT.  If you check 
the *planned* configuration, you'll see that I have increased this 
timeout to 3600.  I wanted to make it 900, but some of the developers 
are worried that 900 is too aggressive.

Because there are no pitchforks, I know that abandoned connection 
removal is NOT happening, even though it IS configured.  Plus, there's 
the fact that there are MANY connections with idle times of several 
hours.  So that's a big part of what I'm trying to figure out -- why the 
configuration we have isn't doing what it's configured to do.  Is there 
something incorrect, or something missing?  Or is it maybe a bug in the 
pool code that's been fixed sometime in the last five years?

The autoReconnect setting has likely been in use on our configurations 
for YEARS.  I will make sure we remove it in any config changes.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/24/2018 5:04 PM, Mark Thomas wrote:
> There are two pools available.
<snip>
> org.apache.tomcat.jdbc.pool.DataSourceFactory is a different pool
> developed in the Tomcat project (generally called JDBC pool).

OK, so that means that the currently active config is using dbcp.  The 
*new* config that I'm building is using the tomcat pool, with the 
factory specified in the Tomcat JDBC pool documentation.  I do have at 
least one of the tomcat-specific configuration options in that config 
I'm building.  See the initial message on the thread for examples of 
both configurations.

Which directs me back to the second question I asked in the initial 
message:  Why does the application on the staging server appear to open 
a HUGE number of connections on startup when configured with the tomcat 
pool (which overloads the server and makes startup fail), and doesn't do 
that when configured with dbcp?  The number of connections that I am 
told were opened (I wasn't involved with the restart and can't confirm 
what they said) is far more than should have been possible.  The configs 
are fairly similar, but I have a lot more options on the new config, 
designed to try and keep the connection count down.

I should figure out which version of the MySQL driver we have.  I know 
for sure that there have been bugs in older releases and I have had 
trouble when not using the latest.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Mark Thomas <ma...@apache.org>.
On 24/03/18 22:28, Shawn Heisey wrote:
<snip/>

> The factory we have now is
> "org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory".  The factory in
> the documentation, and what I put in the configuration I'm building, is
> "org.apache.tomcat.jdbc.pool.DataSourceFactory". These are both in the
> tomcat package space.  Yet you're telling me that I'm not using the
> Tomcat pool.  How is anybody supposed to come to that conclusion on
> their own?
> 
> If the attributes on that page cannot be used with
> "org.apache.tomcat.jdbc.pool.DataSourceFactory" ... then why are they
> included on a page that says to use that factory?

There are two pools available.

You use the factory to choose between them.

org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory is a package-renamed
(to avoid conflicts) exact copy of Commons DBCP. For 7.0.42 that will be:
- Commons DBCP 1.4
- Commons Pool 1.5.7

org.apache.tomcat.jdbc.pool.DataSourceFactory is a different pool
developed in the Tomcat project (generally called JDBC pool).

The default factory if you do not specify one is
org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory


Which factory you want to use is up to you. Generally,

Commons DBCP 1.x was slow compared to JDBC pool in highly concurrent
environments. I suspect this isn't an issue for most users.

Commons DBCP 1.x provides more features and enables more of them by
default.

As of Tomcat 8, DBCP 2.x is used, which addresses the performance issues.

Generally, I'd recommend DBCP unless you need a feature that is only
available in JDBC pool.

> By the way, the 9.0 version of the docs still says virtually the same
> thing.  I found a historical document using Google for the 7.0.42
> version we're running, and it ALSO has virtually the same information on
> it.
> 
> Since I can't rely on the documentation, can somebody please give me a
> configuration that will do what I'm trying to do?  And explain each
> difference from the config that I built, or point me to documentation
> that's complete enough for me to figure it out on my own?

You need to make sure you are using the docs for the pool you want to
use. There are minor differences between Commons DBCP and JDBC pool and
also differences over time.

The source for the DBCP 1.4 docs is here:
https://github.com/apache/commons-dbcp/blob/DBCP_1_4/xdocs/configuration.xml

The source for JDBC pool docs as of Tomcat 7.0.42 is here:
https://svn.apache.org/viewvc/tomcat/tc7.0.x/tags/TOMCAT_7_0_42/modules/jdbc-pool/doc/jdbc-pool.xml?view=annotate


Regarding your configuration:
<Resource
name="jdbc/REDACTED"
auth="Container"
factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
driverClassName="com.mysql.jdbc.Driver"
type="javax.sql.DataSource"
maxActive="60"
maxIdle="10"
maxWait="30000"
removeAbandoned="true"
removeAbandonedTimeout="30"
username="REDACTED"
password="REDACTED"
testOnBorrow="true"
validationQuery="select 1"
url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round"
/>

Generally, that looks OK but I'd strongly recommend that you use
"autoReconnect=false" in the URL. autoReconnect is known to be
problematic with connection pools.
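
For example, with only the autoReconnect value changed:

url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=false&amp;zeroDateTimeBehavior=round"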

The removeAbandonedTimeout looks low but if all the queries are expected
to be well under 30s then that should be OK.

Mark



Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/24/2018 3:34 PM, Christopher Schultz wrote:
> Before we go too far, you have said:
>
> 1. You have 5 prod servers
> 2. They have several pools defined
> 3. The above is an example of a defined pool
>
> Just above, that configuration says maxActive=60. 5 * 60 = 300
> connections. And that's just for one pool.
>
> Perhaps you simply have too many total connections when you add-up all
> the possible connections your servers can make to your db server?

I'm aware of that, and the pool sizes probably are too big, but based on 
what I see when I look during the problem, I don't think it's what I 
need to be worried about right now.  If everything were working the way 
I would expect, each server would only need maybe a dozen connections at 
any given moment for ALL the pools, because connections would be re-used 
and new ones would not be needed.

Here's a couple of SQL queries showing the current connection state on 
the main server right now (on Saturday, a very low traffic period):

https://paste.apache.org/yTNp

This is slightly redacted.  I did the redaction so that the columns 
weren't broken.  There are 414 connections right now, which you can see 
from the second query.  Only four of them are not in the Sleep state, 
which you can see from the first query. Two of those four are slave 
servers, one of them is the event scheduler daemon, and one of them is 
the query I'm running right at that moment.  Which means that NONE of 
the webservers was doing anything on the database when I ran those queries.

The two webservers running Liferay 6.2 are fourqueens and fitzgeralds.  
The three running Liferay 6.1 and the legacy application are fremont, 
frontier, and fiesta.  (Yes, they are casino names.  We inherited the 
system with that naming convention, and we have continued it.)

When we reach 600 connections, maybe 5 of the connections are actually 
doing something.  The rest are showing "Sleep", many with large idle 
times.  Which is why I think each server should have maybe a dozen total 
connections open.  Unless the code is opening connections and not 
closing them.  For that, I'm trying to get the developers to fix the 
code, and also configure abandoned connection removal, so the pool can 
clean up after bad development.

The databases with names that start with "lr" are liferay databases.  
Those connections are using c3p0.  There are more connections active 
than I think SHOULD be there, but they all have VERY low idle times, so 
they ARE getting re-used.  I have plans for that config separate from 
what we're discussing here.

>> This is the Tomcat documentation I'm using as a reference:
>>
>> https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html
> That's the wrong pool. You are using DBCP, and the docs are for
> Tomcat's (non-DBCP) pool. The configurations are roughly the same for
> simple configs. But make sure you are reading the right docs for the
> right pool.

The factory we have now is 
"org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory".  The factory in 
the documentation, and what I put in the configuration I'm building, is 
"org.apache.tomcat.jdbc.pool.DataSourceFactory". These are both in the 
tomcat package space.  Yet you're telling me that I'm not using the 
Tomcat pool.  How is anybody supposed to come to that conclusion on 
their own?

If the attributes on that page cannot be used with 
"org.apache.tomcat.jdbc.pool.DataSourceFactory" ... then why are they 
included on a page that says to use that factory?

By the way, the 9.0 version of the docs still says virtually the same 
thing.  I found a historical document using Google for the 7.0.42 
version we're running, and it ALSO has virtually the same information on it.

Since I can't rely on the documentation, can somebody please give me a 
configuration that will do what I'm trying to do?  And explain each 
difference from the config that I built, or point me to documentation 
that's complete enough for me to figure it out on my own?

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Shawn,

On 3/24/18 12:08 AM, Shawn Heisey wrote:
> This message is long.  Lots of details, a fair amount of history.

Thanks for the back-story and the configurations. It really helps
explain what is going on.

> Here's a redacted configuration that we have in place for one of
> the DB connection pools on the production servers:
> 
> <Resource name="jdbc/REDACTED" auth="Container"
>   factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
>   driverClassName="com.mysql.jdbc.Driver"
>   type="javax.sql.DataSource" maxActive="60" maxIdle="10"
>   maxWait="30000" removeAbandoned="true" removeAbandonedTimeout="30"
>   username="REDACTED" password="REDACTED" testOnBorrow="true"
>   validationQuery="select 1"
>   url="jdbc:mysql://REDACTED.REDACTED.com:3306/REDACTED?autoReconnect=true&amp;zeroDateTimeBehavior=round" />

Before we go too far, you have said:

1. You have 5 prod servers
2. They have several pools defined
3. The above is an example of a defined pool

Just above, that configuration says maxActive=60. 5 * 60 = 300
connections. And that's just for one pool.

Perhaps you simply have too many total connections when you add up all
the possible connections your servers can make to your db server?
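
For example, if each server defines half a dozen pools sized like that
one, the worst case is:

    6 pools per server * 60 maxActive = 360 connections per server
    5 servers * 360                   = 1800 possible connections

That is three times the 600-connection limit on your db server, and the
pools don't coordinate with each other, so nothing prevents that total
from being approached.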

> This is the Tomcat documentation I'm using as a reference:
> 
> https://tomcat.apache.org/tomcat-7.0-doc/jdbc-pool.html

That's the wrong pool. You are using DBCP, and the docs are for
Tomcat's (non-DBCP) pool. The configurations are roughly the same for
simple configs. But make sure you are reading the right docs for the
right pool.

-chris



Re: Trying to chase down "too many connection" problems with DB

Posted by Olaf Kock <to...@olafkock.de>.
Hi Shawn,

only some aspects answered for now, inline:


On 25.03.2018 19:31, Shawn Heisey wrote:
> On 3/25/2018 3:15 AM, Olaf Kock wrote:
>> * Liferay comes (optionally) bundled with Tomcat to ease 
>> installation, however, the tomcat in there will be your own and is up 
>> to you to upgrade. Yes, new versions of Liferay will come with new 
>> versions of Tomcat, but new versions of Liferay won't be released 
>> because a new version of Tomcat is available. Running on an old 
>> Tomcat is your decision, not Liferay's. And the tomcat committers 
>> have done a great job providing drop-in-backwards-compatible newer 
>> versions. Of course, you'll need to validate, but the bundled version 
>> is no excuse to not upgrade Tomcat
>
> Thank you for your response!
>
> I know that the Tomcat version is basically ours to use or replace.  
> Upgrading Tomcat may be perfectly safe for Liferay itself, but we 
> cannot be sure that it is completely safe for the applications we've 
> written using Liferay.

Good - I just wanted to make sure that this is well known. Many people 
are not aware of this fact when everything comes neatly bundled 
together - but indeed, it's nothing more than a vanilla tomcat with an 
application preinstalled (and a few lines of extra configuration - you 
can see an easy diff when you download the same tomcat version in vanilla)
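
Something like this makes the delta easy to see (the paths are just 
examples):

    # download the matching vanilla release, then compare:
    diff -rq apache-tomcat-7.0.42 liferay-portal-6.2/tomcat-7.0.42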

> Many of our sites are still using Liferay 6.1.  The customers on those 
> platforms are so risk-averse that they haven't allowed us to migrate 
> them to 6.2 yet.  We want to get everything upgraded to the latest 
> Liferay version, but the upgrade from 6.1 to 6.2 was very 
> challenging.  We expect an upgrade to 7.x to be more difficult.

The difficulty of an upgrade is typically directly proportional to the 
level of customization (and the technique used for the customization). 
But that's nothing for the tomcat list.

> If somebody can give me evidence to assure me that upgrading Tomcat 
> and using configuration X will solve the problem, then I will pursue 
> that.  I can't justify an upgrade because it MIGHT fix the problem.

Going to the latest Tomcat 7 release should be so straightforward that 
I'd assume it's part of the debugging process you're in, if only to 
check whether you're stuck on an issue that has already been fixed. With 
the positive side effect that you get all the fixes, even the ones you 
didn't know you needed.

>> * Liferay's default configuration (as configured through the setup 
>> wizard) configures the database with an internal connection, not 
>> using an appserver-configured pool. Please confirm that you manually 
>> configured Liferay for the appserver-provided pool.
>
> We are using Liferay's configuration in portal-ext.properties for the 
> Liferay database.  The pools configured in tomcat are used for our 
> application, not Liferay itself.  The only complaint I have about 
> Liferay's pool is that it seems to use more connections than I think 
> should be necessary. But it IS re-using the pool (all connections are 
> idle less than one minute), so it's not on my radar at the moment.  
> The connections in the tomcat pools are NOT being re-used like I 
> expect them to.

Liferay can be configured with URL/user/password/driver, or with a JNDI 
name - the latter naturally referring to a container-provided pool. Both 
configurations are in portal-ext.properties.

> Can I configure additional pools in portal-ext.properties for our 
> application to use?  If so, is that recommended?  That will probably 
> require code changes, but if it solves our difficulties, I'm not 
> opposed to it.  We already need to touch all of that code anyway to 
> adjust how we're closing JDBC resources.  Maybe we should find a new 
> way to handle connection pooling.

I prefer to configure pools through the container. But yes, you can 
configure pools in Liferay's configuration as well. Ctrl-F "pool" in 
https://docs.liferay.com/portal/6.2/propertiesdoc/portal.properties.html 
(AFAIK no changes between 6.1 and 6.2)
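
Roughly, the two variants look like this in portal-ext.properties (the 
values are placeholders - double-check the exact keys against the 
properties doc above):

    # Liferay builds its own pool from these settings:
    jdbc.default.driverClassName=com.mysql.jdbc.Driver
    jdbc.default.url=jdbc:mysql://db.example.com:3306/lportal
    jdbc.default.username=EXAMPLE
    jdbc.default.password=EXAMPLE

    # or point Liferay at a container-provided pool instead:
    jdbc.default.jndi.name=jdbc/LiferayPool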

>> * It might help to look at the pool/connection states through JMX, so 
>> that you can determine which pool is active, and at what size.
>
> Does JMX work for pools other than the one for Liferay?  As soon as I 
> can work out exactly where to add the remote JMX properties, I can 
> give that a try.  If it's easy for you to tell me best practices for 
> adjusting the java commandline for startup, please do.

Just look at what's available by default. I never needed to add more, 
only to find the relevant information. There's plenty of information out 
there on enabling and connecting through JMX.
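
As a quick start, something like this in $CATALINA_BASE/bin/setenv.sh 
(create the file if it doesn't exist) opens an unauthenticated remote 
JMX port - fine on a dev box, but not something to ship to production 
as-is, and the port number is just an example:

    CATALINA_OPTS="$CATALINA_OPTS \
      -Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=9010 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false"

Then connect with jconsole or jvisualvm and browse the MBeans to find 
the pools.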

Olaf






Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/25/2018 3:15 AM, Olaf Kock wrote:
> * Liferay comes (optionally) bundled with Tomcat to ease installation, 
> however, the tomcat in there will be your own and is up to you to 
> upgrade. Yes, new versions of Liferay will come with new versions of 
> Tomcat, but new versions of Liferay won't be released because a new 
> version of Tomcat is available. Running on an old Tomcat is your 
> decision, not Liferay's. And the tomcat committers have done a great 
> job providing drop-in-backwards-compatible newer versions. Of course, 
> you'll need to validate, but the bundled version is no excuse to not 
> upgrade Tomcat

Thank you for your response!

I know that the Tomcat version is basically ours to use or replace.  
Upgrading Tomcat may be perfectly safe for Liferay itself, but we cannot 
be sure that it is completely safe for the applications we've written 
using Liferay.

Qualifying an upgrade of a major component is a very slow and 
time-consuming process.  It's only been fairly recently that we have 
concluded the problem MIGHT be in Tomcat.  So we have not been pursuing 
an upgrade.  Before I start that process, I want to determine whether 
our problem is caused by configuration or a bug.  I always assume 
configuration first.  And if it is a bug, I always assume it's OUR 
software that has the bug, because open source projects are quite good 
at killing bugs early in a version's lifetime.

Part of the challenge is that this has only become my responsibility 
recently, so I'm not completely up to speed on how everything is done.  
We've had key personnel find their dream jobs and move on.

Many of our sites are still using Liferay 6.1.  The customers on those 
platforms are so risk-averse that they haven't allowed us to migrate 
them to 6.2 yet.  We want to get everything upgraded to the latest 
Liferay version, but the upgrade from 6.1 to 6.2 was very challenging.  
We expect an upgrade to 7.x to be more difficult.

I'm here because I need help with the configuration.  I can't seem to 
get a straight answer as to precisely how we SHOULD configure the pools, 
and whether the Tomcat version we are using is capable of handling the 
configuration.  I know that the configuration we have does not seem to 
actually be working.  I do not know whether that's because the 
configuration is wrong, or if there's another problem.  I am working on 
a new configuration that I want to try, but the system doesn't work when 
we try to use that config.  I'm waiting for permission to try the change 
myself so I can have concrete information about what's happening.

If somebody can give me evidence to assure me that upgrading Tomcat and 
using configuration X will solve the problem, then I will pursue that.  
I can't justify an upgrade because it MIGHT fix the problem.

> * Liferay's default configuration (as configured through the setup 
> wizard) configures the database with an internal connection, not using 
> an appserver-configured pool. Please confirm that you manually 
> configured Liferay for the appserver-provided pool.

We are using Liferay's configuration in portal-ext.properties for the 
Liferay database.  The pools configured in tomcat are used for our 
application, not Liferay itself.  The only complaint I have about 
Liferay's pool is that it seems to use more connections than I think 
should be necessary. But it IS re-using the pool (all connections are 
idle less than one minute), so it's not on my radar at the moment.  The 
connections in the tomcat pools are NOT being re-used like I expect them to.

Can I configure additional pools in portal-ext.properties for our 
application to use?  If so, is that recommended?  That will probably 
require code changes, but if it solves our difficulties, I'm not opposed 
to it.  We already need to touch all of that code anyway to adjust how 
we're closing JDBC resources.  Maybe we should find a new way to handle 
connection pooling.

> * It might help to look at the pool/connection states through JMX, so 
> that you can determine which pool is active, and at what size.

Does JMX work for pools other than the one for Liferay?  As soon as I 
can work out exactly where to add the remote JMX properties, I can give 
that a try.  If it's easy for you to tell me best practices for 
adjusting the java commandline for startup, please do.

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Olaf Kock <to...@olafkock.de>.

On 24.03.2018 05:08, Shawn Heisey wrote:
> This message is long.  Lots of details, a fair amount of history.
>
> The primary Tomcat version we've got is 7.0.42. Specifically, it is 
> the Tomcat that's included with Liferay 6.2. This is why we haven't 
> attempted an upgrade even though the version we're running is five 
> years old -- we don't want to rock the boat too hard and risk 
> everything not working.  Liferay is battle-tested with that version of 
> Tomcat.

Others have already answered regarding the different pool 
implementations; I'd like to comment on the mentioned Liferay heritage:

* Liferay comes (optionally) bundled with Tomcat to ease installation, 
however, the tomcat in there will be your own and is up to you to 
upgrade. Yes, new versions of Liferay will come with new versions of 
Tomcat, but new versions of Liferay won't be released because a new 
version of Tomcat is available. Running on an old Tomcat is your 
decision, not Liferay's. And the tomcat committers have done a great job 
providing drop-in-backwards-compatible newer versions. Of course, you'll 
need to validate, but the bundled version is no excuse to not upgrade Tomcat

* Liferay 6.2 (CE) is no longer maintained - there's no new tomcat 
version expected for it

* Liferay's default configuration (as configured through the setup 
wizard) configures the database with an internal connection, not using 
an appserver-configured pool. Please confirm that you manually 
configured Liferay for the appserver-provided pool.

* It might help to look at the pool/connection states through JMX, so 
that you can determine which pool is active, and at what size.

Olaf





Re: Trying to chase down "too many connection" problems with DB

Posted by Shawn Heisey <ap...@elyograg.org>.
On 3/24/2018 5:36 AM, Filippo Machi wrote:
> Hello Shawn, about this question: are you sure that none of the webapps
> running on those tomcats is connecting to the database without using the
> pools configured in the context.xml?  Creating other pools, or making
> direct connections?  That could explain why, when changing the
> configuration, you are not able to limit the number of connections, nor
> to mitigate the problem with the removeAbandoned configuration.

That code isn't familiar to me.  I do have access to it, but there is a 
LOT of it.  I've looked over a little bit of it.

I cannot be ABSOLUTELY sure that this isn't happening, but I very much 
doubt that it is.  All of the developers know that there are connection 
pools provided in the servlet context, so I don't think they would try 
to handle it themselves.  I will ask, and hope they don't bite my head 
off TOO severely. :)

I have already risked death by telling the developers that they're 
closing JDBC resources incorrectly, which I suspect is the root cause of 
our troubles.  I will be pursuing a solution to this with them.
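
The fix I plan to push for is try-with-resources, so everything is 
closed even when an exception is thrown, and closing a pooled connection 
returns it to the pool instead of abandoning it.  A sketch of the 
pattern, with a placeholder JNDI name (the lookup only resolves inside 
the container, and try-with-resources needs Java 7 or later):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class PoolUsageSketch {
        static int pingDatabase() throws Exception {
            // the DataSource itself is thread-safe and can be cached
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/EXAMPLE");

            // rs, ps, and conn are closed in reverse order, even if an
            // exception is thrown anywhere in the block
            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement("SELECT 1");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }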

I will be attempting to lower the MySQL server-side wait_timeout from 
eight hours to one hour.  That's one of those things that shouldn't be 
necessary, but it might clean things up when the developers don't.  I 
don't know if closing the connection on the server side will clean up 
all the objects in the application, or if there might be a resource 
leak.  If there is a leak, then we are already facing it, because 
connections ARE closed by the server at eight hours idle.  But I'd 
rather have that problem than the one we have now.
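
For reference, the change I plan to test.  If I read the MySQL docs 
right, the global value only applies to sessions created after the 
change, so existing connections keep the timeout they started with:

    -- one hour instead of the 28800-second default
    SET GLOBAL wait_timeout = 3600;

    # and in my.cnf, so the setting survives a server restart:
    [mysqld]
    wait_timeout = 3600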

If there is anyone else with ideas, please reply to the original message!

Thanks,
Shawn




Re: Trying to chase down "too many connection" problems with DB

Posted by Filippo Machi <fi...@gmail.com>.
> 1) I think this is the really burning question in my mind: Why is the
> server opening NEW connections when there are dozens of them already open?
> Does this mean that (as I suspect) all those connections are abandoned?  If
> so, why are they not getting cleaned up, when we have removeAbandoned set
> to true and removeAbandonedTimeout set to a RIDICULOUSLY low value of 30
> seconds?  (We MUST increase this value.  The plan is one hour.)
Hello Shawn, about this question: are you sure that none of the webapps
running on those tomcats is connecting to the database without using the
pools configured in the context.xml?  Creating other pools, or making
direct connections?  That could explain why, when changing the
configuration, you are not able to limit the number of connections, nor
to mitigate the problem with the removeAbandoned configuration.
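
For instance, code that uses the pools should reach the database only
like the first path below.  Anything that calls DriverManager directly
bypasses the pool completely, so it may be worth grepping the code for
it (the names and URL are just examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class ConnectionPaths {
        // pooled: goes through the Resource defined in context.xml
        static Connection pooled() throws Exception {
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/EXAMPLE");
            return ds.getConnection();
        }

        // NOT pooled: a direct connection that no pool ever sees or limits
        static Connection direct() throws Exception {
            return DriverManager.getConnection(
                    "jdbc:mysql://db.example.com:3306/example", "user", "pass");
        }
    }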
HTH
Filippo

On Sat, Mar 24, 2018 at 5:08 AM, Shawn Heisey <ap...@elyograg.org> wrote:

> [original message quoted in full - snipped]