Posted to log4j-dev@logging.apache.org by bu...@apache.org on 2003/11/04 21:35:35 UTC

DO NOT REPLY [Bug 24407] New: - large maxbackupindex makes RollingFileAppender dreadfully slow

DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG 
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
<http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24407>.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND 
INSERTED IN THE BUG DATABASE.

http://nagoya.apache.org/bugzilla/show_bug.cgi?id=24407

large maxbackupindex makes RollingFileAppender dreadfully slow

           Summary: large maxbackupindex makes RollingFileAppender
                    dreadfully slow
           Product: Log4j
           Version: 1.2
          Platform: Other
        OS/Version: Other
            Status: NEW
          Severity: Major
          Priority: Other
         Component: Appender
        AssignedTo: log4j-dev@jakarta.apache.org
        ReportedBy: max@eos.dk


To ensure that our RollingFileAppender would not start deleting files any 
time soon, we set maxbackupindex=99999999.

This seemed to work, but we found that it brought log4j to a complete 
standstill every time a rollover was due.

The culprit is that RollingFileAppender, at line 123, counts DOWN from 
maxbackupindex and checks for each index whether a file with that number 
exists.

This is VERY inefficient when maxbackupindex is a large number.
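
To illustrate, the pattern boils down to something like this (my paraphrase 
of the behaviour described above, not the exact log4j source; fileName and 
maxBackupIndex stand for the appender's settings):

    import java.io.File;

    static void rollOverSketch(String fileName, int maxBackupIndex) {
        // Drop the backup sitting at the cap, if any.
        new File(fileName + "." + maxBackupIndex).delete();
        // Walk DOWN through every remaining index, probing the file
        // system once per index -- roughly 10^8 exists() calls when
        // maxbackupindex is 99999999, even if only a few backups exist.
        for (int i = maxBackupIndex - 1; i >= 1; i--) {
            File file = new File(fileName + "." + i);
            if (file.exists()) {
                // Shift backup .i up to .(i+1).
                file.renameTo(new File(fileName + "." + (i + 1)));
            }
        }
    }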

Why doesn't it start from 0 (zero) and look for the first missing file? 
That would complete the search in a much more reasonable time.
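
Something like this hypothetical helper is what I have in mind (just a 
sketch, untested):

    import java.io.File;

    // Probe upward from 1 and stop at the first index with no backup
    // file. The cost is proportional to the number of backups that
    // actually exist, not to maxbackupindex. The renaming pass then
    // only has to walk down from the returned index.
    static int firstFreeIndex(String fileName, int maxBackupIndex) {
        int i = 1;
        while (i <= maxBackupIndex
                && new File(fileName + "." + i).exists()) {
            i++;
        }
        return i;
    }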

Or, even better: do a File.list() and do the whole search in memory instead 
of accessing the file system on each iteration.
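
Again only a sketch (it assumes all backups live in the log file's 
directory and follow the <name>.<n> naming scheme):

    import java.io.File;

    // List the directory once, then find the highest used backup index
    // entirely in memory -- no per-index file-system probes at all.
    static int highestBackupIndex(String fileName) {
        File log = new File(fileName).getAbsoluteFile();
        String prefix = log.getName() + ".";
        String[] names = log.getParentFile().list(); // single scan
        int highest = 0;
        if (names != null) {
            for (int k = 0; k < names.length; k++) {
                if (names[k].startsWith(prefix)) {
                    try {
                        int n = Integer.parseInt(
                                names[k].substring(prefix.length()));
                        if (n > highest) highest = n;
                    } catch (NumberFormatException e) {
                        // not a numbered backup (e.g. "app.log.old"); skip
                    }
                }
            }
        }
        return highest;
    }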

If you don't consider this a viable fix, then at least warn about this 
behaviour in the documentation for maxbackupindex.

---------------------------------------------------------------------
To unsubscribe, e-mail: log4j-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: log4j-dev-help@jakarta.apache.org