Posted to users@tomcat.apache.org by Jerry Malcolm <te...@malcolms.com> on 2021/06/25 20:40:50 UTC

DB Connection Pool Error Handling

Periodically, in an otherwise normally running system, I get one simple 
MySQL query/create/update that takes 25+ seconds.  I'm pursuing this 
with MySQL logs and other steps on the MySQL end, but I have a couple of 
questions related to the TC side.  I start and stop a timer on either 
side of a statement.execute() call, so the 25-second delay is happening 
somewhere downstream of that call.  The connection has already been 
retrieved, so I'm pretty sure it's nothing to do with waiting on the 
connection pool.  Is there anything else that goes on inside the JDBC 
layer that might cause this?  (I realize that's a long shot...)
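For reference, the timing wrapper is essentially the following (a simplified, self-contained sketch; the class name and threshold are illustrative, not from any library):

```java
import java.util.concurrent.Callable;

// Illustrative timing wrapper around a JDBC call.
public class QueryTimer {

    // Anything slower than this gets flagged (value is illustrative).
    static final long SLOW_THRESHOLD_MS = 1000;

    // Runs the supplied action, returns the elapsed wall-clock time in
    // milliseconds, and logs to stderr when the threshold is exceeded.
    public static <T> long timeMillis(Callable<T> action) throws Exception {
        long start = System.nanoTime();
        action.call();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > SLOW_THRESHOLD_MS) {
            System.err.println("Slow query: " + elapsedMs + " ms");
        }
        return elapsedMs;
    }
}
```

In the real code the Callable wraps the execute, e.g. `long ms = QueryTimer.timeMillis(() -> statement.execute(sql));` -- so the 25 seconds is measured entirely inside statement.execute().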

Somewhat related, in digging through the MySQL logs I'm seeing clusters of:

2021-06-25T20:01:08.609810Z 73013 [Note] Aborted connection 73013 to db: 
'----' user: '------' host: '172.xx.xx.xx' (Got an error reading 
communication packets)

I found several of these in clusters of 3 or 4 together, timestamps a 
few ms apart, and then they stop for hours.  I'm curious what is 
happening.  As far as I can tell, I'm not seeing any errors like this 
reported back to my client code.  But connections being terminated can't 
be a good sign.  Does the JDBC driver intercept these and retry?  Or are 
they coming back to my client code and I'm just not catching/logging 
them?  I've looked up what could cause this message, and nothing really 
seems to apply to my configuration.
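For completeness, would validation settings along these lines on the pool at least evict dead connections before my code ever sees them?  (A sketch of a Tomcat JDBC pool <Resource> using attribute names from the Tomcat docs; the JNDI name, URL, and values are illustrative, not my actual config.)

```xml
<Resource name="jdbc/MyDB" auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          driverClassName="com.mysql.cj.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          testOnBorrow="true"
          validationQuery="SELECT 1"
          validationInterval="30000"
          removeAbandoned="true"
          removeAbandonedTimeout="60"
          logAbandoned="true"/>
```

My understanding is that testOnBorrow with a validationQuery makes the pool check a connection before handing it out, so a server-side abort would be absorbed by the pool rather than surfacing as an exception in my code -- but I'd like confirmation of that.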

Suggestions?

Thx

Jerry



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org