Posted to log4j-user@logging.apache.org by Mark Butler <ma...@gmail.com> on 2020/02/20 19:27:59 UTC
Log4j2 Async Logging with LMAX Disruptor
Hello team
I am using Apache Log4j2 2.13.0 with LMAX Disruptor 3.4.2
I am encountering a problem where occasionally an instance will go into
deadlock. All threads are deadlocking here:
org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull(RingBufferLogEventTranslator)
AsyncLoggerDisruptor.java:229
org.apache.logging.log4j.spi.AbstractLogger.warn(Message)
AbstractLogger.java:2629
except for one thread, which appears to be runnable here:
Log4j2-TF-1-AsyncLogger[AsyncContext@1de0aca6]-1 Runnable Thread ID: 22
org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(Object,
long, boolean) RingBufferLogEventHandler.java:29
com.lmax.disruptor.BatchEventProcessor.processEvents()
BatchEventProcessor.java:168
com.lmax.disruptor.BatchEventProcessor.run() BatchEventProcessor.java:125
java.lang.Thread.run() Thread.java:834
I can see similar deadlock issues reported in the past
https://issues.apache.org/jira/browse/LOG4J2-1518
where toString was also logging. I can't see evidence of that in my thread
dump or my code but it's possible a dependency somewhere is doing this.
I wonder if this recent change has caused a regression
https://github.com/apache/logging-log4j2/commit/72e777708b50b5cf6240f70eafcc4b08797a0047#diff-f77d8b534ecffa0241d582d43424f4a5
since the deadlock I am seeing is happening here:
https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/async/AsyncLoggerDisruptor.java#L229
Any suggestions on how to resolve this? Thank you!
Mark
Re: Log4j2 Async Logging with LMAX Disruptor
Posted by Carter Kozak <ck...@ckozak.net>.
Is there any chance you could provide a thread dump (even with internal details redacted), in addition to the configuration?
On Thu, Feb 20, 2020, at 19:36, Remko Popma wrote:
> The queue being full means that the application is logging faster than the underlying appenders can handle. It’s not necessarily an indication that there’s a problem with the async loggers that sit in the middle.
>
> Things I’ve seen in the past that may cause the queue filling up:
> * using a custom Appender that has a performance issue
> * the console Appender is 50x slower than logging to a file
> * logging to a database
>
> If one thread is still runnable then there’s no deadlock; I would look into slow appenders first.
>
> Logging in the toString method used to cause an actual deadlock but that was fixed in 2.7; I believe we now log a warning and (if memory serves correctly) we bypass the ringbuffer and log directly to the underlying appenders; I could be wrong about the details.
>
> What does your configuration look like? Any appenders or filters that could be a performance bottleneck?
>
> (Shameless plug) Every java main() method deserves http://picocli.info
>
> > On Feb 21, 2020, at 4:28, Mark Butler <ma...@gmail.com> wrote:
> >
> > Hello team
> >
> > I am using Apache Log4j2 2.13.0 with LMAX Disruptor 3.4.2
> >
> > I am encountering a problem where occasionally an instance will go into
> > deadlock. All threads are deadlocking here:
> >
> > org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull(RingBufferLogEventTranslator)
> > AsyncLoggerDisruptor.java:229
> > org.apache.logging.log4j.spi.AbstractLogger.warn(Message)
> > AbstractLogger.java:2629
> >
> > except for one thread, which appears to be runnable here:
> >
> > Log4j2-TF-1-AsyncLogger[AsyncContext@1de0aca6]-1 Runnable Thread ID: 22
> >
> > org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(Object,
> > long, boolean) RingBufferLogEventHandler.java:29
> > com.lmax.disruptor.BatchEventProcessor.processEvents()
> > BatchEventProcessor.java:168
> > com.lmax.disruptor.BatchEventProcessor.run() BatchEventProcessor.java:125
> > java.lang.Thread.run() Thread.java:834
> >
> > I can see similar deadlock issues reported in the past
> >
> > https://issues.apache.org/jira/browse/LOG4J2-1518
> >
> > where toString was also logging. I can't see evidence of that in my thread
> > dump or my code but it's possible a dependency somewhere is doing this.
> >
> > I wonder if this recent change has caused a regression
> >
> > https://github.com/apache/logging-log4j2/commit/72e777708b50b5cf6240f70eafcc4b08797a0047#diff-f77d8b534ecffa0241d582d43424f4a5
> >
> > since the deadlock I am seeing is happening here:
> >
> > https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/async/AsyncLoggerDisruptor.java#L229
> >
> > Any suggestions on how to resolve this? Thank you!
> >
> > Mark
>
-ck
Re: Log4j2 Async Logging with LMAX Disruptor
Posted by Remko Popma <re...@gmail.com>.
The queue being full means that the application is logging faster than the underlying appenders can handle. It’s not necessarily an indication that there’s a problem with the async loggers that sit in the middle.
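If blocking callers on a full buffer is not acceptable for this application, one mitigation is to switch the queue-full policy to discard low-priority events and/or enlarge the ring buffer. Here is a minimal sketch; the property names are written from memory and the class name is made up for illustration, so please verify them against the manual for your version before relying on them:

// Sketch only: set these before Log4j2 initializes (or pass them as -D flags on the JVM command line).
public final class AsyncLoggingTuning {
    public static void main(String[] args) {
        // Drop events at or below the threshold instead of blocking the caller when the ring buffer is full.
        System.setProperty("log4j2.asyncQueueFullPolicy", "Discard");
        System.setProperty("log4j2.discardThreshold", "INFO");
        // Give the async logger more headroom; the size must be a power of two.
        System.setProperty("log4j2.asyncLoggerRingBufferSize", "1048576");

        org.apache.logging.log4j.Logger log =
                org.apache.logging.log4j.LogManager.getLogger(AsyncLoggingTuning.class);
        log.info("async logging tuned");
    }
}

Discarding obviously trades completeness for liveness, so it only makes sense if the slow appender itself can't be fixed.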
Things I’ve seen in the past that may cause the queue filling up:
* using a custom Appender that has a performance issue
* the console Appender is 50x slower than logging to a file
* logging to a database
If one thread is still runnable then there’s no deadlock; I would look into slow appenders first.
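One quick way to confirm there is no true deadlock is to ask the JVM directly. The sketch below uses only the standard java.lang.management API (nothing Log4j-specific), and the class name is just for illustration; threads that are merely blocked waiting for ring buffer space will not be reported here:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public final class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Returns the IDs of threads in a cycle waiting on monitors or ownable synchronizers, or null if none.
        long[] deadlocked = threads.findDeadlockedThreads();
        if (deadlocked == null) {
            System.out.println("No true deadlock; threads are blocked or busy, not cyclically waiting.");
            return;
        }
        for (ThreadInfo info : threads.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
            System.out.println(info);
        }
    }
}

The same check can run in-process from a diagnostics endpoint, or you can simply take a thread dump with jstack, which prints its own deadlock report at the end if it finds one.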
Logging in the toString method used to cause an actual deadlock but that was fixed in 2.7; I believe we now log a warning and (if memory serves correctly) we bypass the ringbuffer and log directly to the underlying appenders; I could be wrong about the details.
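For anyone hitting this thread later, the pattern being described looks roughly like the contrived example below (not taken from Mark's code; the class is hypothetical). The danger is that the object is rendered while a logging call is already in flight, so the nested call competes for a slot in the same, already full, ring buffer:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public final class Order {
    private static final Logger LOG = LogManager.getLogger(Order.class);

    private final String id;

    Order(String id) {
        this.id = id;
    }

    @Override
    public String toString() {
        // Anti-pattern: logging from toString(). If this runs on the async background thread
        // while the ring buffer is full, the nested call waits for a slot that only this very
        // thread could free; before 2.7 that could deadlock (see LOG4J2-1518).
        LOG.debug("rendering Order {}", id);
        return "Order[" + id + "]";
    }
}

If a dependency does this, it usually shows up in a thread dump as a logging frame nested under a toString() frame.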
What does your configuration look like? Any appenders or filters that could be a performance bottleneck?
(Shameless plug) Every java main() method deserves http://picocli.info
> On Feb 21, 2020, at 4:28, Mark Butler <ma...@gmail.com> wrote:
>
> Hello team
>
> I am using Apache Log4j2 2.13.0 with LMAX Disruptor 3.4.2
>
> I am encountering a problem where occasionally an instance will go into
> deadlock. All threads are deadlocking here:
>
> org.apache.logging.log4j.core.async.AsyncLoggerDisruptor.enqueueLogMessageWhenQueueFull(RingBufferLogEventTranslator)
> AsyncLoggerDisruptor.java:229
> org.apache.logging.log4j.spi.AbstractLogger.warn(Message)
> AbstractLogger.java:2629
>
> except for one thread, which appears to be runnable here:
>
> Log4j2-TF-1-AsyncLogger[AsyncContext@1de0aca6]-1 Runnable Thread ID: 22
>
> org.apache.logging.log4j.core.async.RingBufferLogEventHandler.onEvent(Object,
> long, boolean) RingBufferLogEventHandler.java:29
> com.lmax.disruptor.BatchEventProcessor.processEvents()
> BatchEventProcessor.java:168
> com.lmax.disruptor.BatchEventProcessor.run() BatchEventProcessor.java:125
> java.lang.Thread.run() Thread.java:834
>
> I can see similar deadlock issues reported in the past
>
> https://issues.apache.org/jira/browse/LOG4J2-1518
>
> where toString was also logging. I can't see evidence of that in my thread
> dump or my code but it's possible a dependency somewhere is doing this.
>
> I wonder if this recent change has caused a regression
>
> https://github.com/apache/logging-log4j2/commit/72e777708b50b5cf6240f70eafcc4b08797a0047#diff-f77d8b534ecffa0241d582d43424f4a5
>
> since the deadlock I am seeing is happening here:
>
> https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/async/AsyncLoggerDisruptor.java#L229
>
> Any suggestions on how to resolve this? Thank you!
>
> Mark