Posted to log4j-dev@logging.apache.org by "Tim Gokcen (JIRA)" <ji...@apache.org> on 2016/10/20 17:33:58 UTC

[jira] [Created] (LOG4J2-1644) Inefficient locking in {{AbstractLoggerAdapter}}

Tim Gokcen created LOG4J2-1644:
----------------------------------

             Summary: Inefficient locking in {{AbstractLoggerAdapter}}
                 Key: LOG4J2-1644
                 URL: https://issues.apache.org/jira/browse/LOG4J2-1644
             Project: Log4j 2
          Issue Type: Improvement
          Components: API
    Affects Versions: 2.6.1
            Reporter: Tim Gokcen


In {{org.apache.logging.log4j.spi.AbstractLoggerAdapter}}, the {{getLoggersInContext}} method uses a {{synchronized}} block to prevent concurrent destructive access to {{registry}}, a {{java.util.WeakHashMap}}:

{code}    public ConcurrentMap<String, L> getLoggersInContext(final LoggerContext context) {
        synchronized (registry) {
            ConcurrentMap<String, L> loggers = registry.get(context);
            if (loggers == null) {
                loggers = new ConcurrentHashMap<>();
                registry.put(context, loggers);
            }
            return loggers;
        }
    }{code}

However, when loggers are already present in the map, the broad {{synchronized}} block means that two threads cannot read from the map at the same time, which hurts performance in highly multi-threaded applications that constantly re-instantiate loggers.
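
As a rough illustration (not part of the original report), here is a minimal sketch of the kind of workload that exposes this contention. It assumes the log4j-slf4j binding is on the classpath, so that every {{getLogger}} call routes through {{AbstractLoggerAdapter.getLoggersInContext}}; the class name, thread count, iteration count, and logger name are arbitrary placeholders.

{code}import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggerContentionSketch {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 16;                // arbitrary; tune to the host
        final int lookupsPerThread = 100_000;  // arbitrary
        final ExecutorService pool = Executors.newFixedThreadPool(threads);
        final CountDownLatch start = new CountDownLatch(1);

        final long begin = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                try {
                    start.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                for (int i = 0; i < lookupsPerThread; i++) {
                    // With the log4j-slf4j binding, each lookup reaches
                    // AbstractLoggerAdapter.getLoggersInContext(), so even
                    // lookups for an already-registered logger serialize
                    // on the synchronized block.
                    Logger logger = LoggerFactory.getLogger("contention.test");
                    logger.trace("lookup {}", i);
                }
            });
        }
        start.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("%,d lookups took %d ms%n",
                (long) threads * lookupsPerThread,
                TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - begin));
    }
}{code}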

In our application, we have modified this method to use a {{ReadWriteLock}} instead, allowing unlimited concurrent {{get()}} operations while serializing {{put()}} operations via a double-checked locking idiom:

{code}import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

(...)

    private final ReadWriteLock lock = new ReentrantReadWriteLock(true);

(...)

    public ConcurrentMap<String, L> getLoggersInContext(final LoggerContext context) {
        ConcurrentMap<String, L> loggers;
        // Fast path: readers share the read lock, so lookups of existing
        // entries no longer serialize on a single monitor.
        lock.readLock().lock();
        try {
            loggers = registry.get(context);
        } finally {
            lock.readLock().unlock();
        }

        if (loggers != null) {
            return loggers;
        } else {
            // Slow path: take the exclusive write lock and re-check, since
            // another thread may have created the entry in the meantime.
            lock.writeLock().lock();
            try {
                loggers = registry.get(context);
                if (loggers == null) {
                    loggers = new ConcurrentHashMap<>();
                    registry.put(context, loggers);
                }
                return loggers;
            } finally {
                lock.writeLock().unlock();
            }
        }
    }{code}

The {{ReadWriteLock}} interface and the {{ReentrantReadWriteLock}} implementation have been available since Java 5. The performance gains from using read locks have so far been considerable.
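
For reference, a minimal standalone sketch (not from the patch above; the class name is a placeholder) of the {{ReentrantReadWriteLock}} contract the patch relies on: any number of threads may hold the read lock at once, while the write lock is exclusive.

{code}import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockSketch {
    public static void main(String[] args) {
        // 'true' requests the fair ordering policy, as in the patch above.
        ReadWriteLock lock = new ReentrantReadWriteLock(true);

        // The read lock is shared: many threads can hold it simultaneously.
        lock.readLock().lock();
        try {
            // read-only access to the shared structure goes here
        } finally {
            lock.readLock().unlock();
        }

        // The write lock is exclusive: acquisition blocks until no other
        // thread holds either the read or the write lock.
        lock.writeLock().lock();
        try {
            // mutation of the shared structure goes here
        } finally {
            lock.writeLock().unlock();
        }
    }
}{code}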

Why are we constantly re-instantiating loggers instead of, for example, keeping a {{static final Logger}} in our classes? In many cases it is because the class that holds the logger is a base class, and it cannot use a static logger since a different logging destination may be configured for the concrete derived class that was actually instantiated. Some of these objects, for example {{AbstractMediaTypeExpression}} in the Spring Framework, are constantly being destroyed and reconstructed. Where reasonable for our application, we have also patched those classes to use {{static final}} loggers, but there are many of them and it is ultimately better to solve this problem at the source.
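
As a concrete illustration of that pattern (the class names below are hypothetical, not taken from our code base), a base class that resolves its logger per instance so that each subclass logs under its own name:

{code}import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical example of the pattern described above: the base class cannot
// use a static logger, because the logger name (and therefore its configured
// appenders and levels) depends on the concrete subclass.
abstract class AbstractExpression {
    // Resolved per instance; getClass() returns the runtime subclass.
    protected final Logger logger = LogManager.getLogger(getClass());
}

// Each time such an object is constructed, the logger lookup above goes
// through the logger adapter/registry again.
class MediaTypeExpression extends AbstractExpression {
    void evaluate() {
        logger.debug("evaluating media type expression");
    }
}{code}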



