Posted to common-dev@hadoop.apache.org by Doug Cutting <cu...@apache.org> on 2007/10/01 19:00:44 UTC
Re: JCL vs SLF4J
Eric Baldeschwieler wrote:
> I find our current use of apache commons while depending on specific
> features of log4j awkward.
Yes, this isn't ideal, but an advantage is that, should we switch to a
different backend, we only need to change the few log4j-specific bits, and
not every line that logs something. That's significant.
I have used the JVM's built-in logging and found it lacking. That's
what the Hadoop code used initially, and we switched to Commons so that
we could access log4j's features, and also retain the possibility of
switching again without having to edit every file.
Torsten Curdt wrote:
> IMHO logging facades are better suited for frameworks ...less useful for applications.
Hadoop is in large part a framework. We'd like applications that use
different logging frameworks to be able to use Hadoop. Ideally the
log4j-specific bits in Hadoop would be isolated to things like startup
scripts and main() routines, and not directly used in daemon classes, so
that one could start a Hadoop cluster using a different logging system
by, e.g., using different startup scripts and main() routines. We've
not maintained that sanctity, but if a contributor does arrive who is
invested in a different logging framework, it would currently not be too
hard to repair things to support multiple logging mechanisms. We should
think twice before losing that possibility.
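The isolation Doug describes can be sketched in stdlib-only Java (the class and method names here are hypothetical, and a plain interface stands in for the commons-logging `Log` facade): daemon code depends only on the facade, while the concrete backend is wired up in `main()`, so a different startup routine could plug in a different logging system without touching the daemon classes.

```java
// Sketch only, with hypothetical names: a tiny facade stands in for
// JCL's Log interface; daemon code never sees the concrete backend.
interface Log {
    void info(String msg);
}

// Hypothetical daemon class: it logs through the facade only.
class DataNodeDaemon {
    private final Log log;

    DataNodeDaemon(Log log) {
        this.log = log;
    }

    void start() {
        log.info("daemon started");
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // Backend-specific wiring lives only here; swapping backends
        // means swapping this bootstrap, not editing every class.
        Log consoleBackend = System.out::println;
        DataNodeDaemon daemon = new DataNodeDaemon(consoleBackend);
        daemon.start(); // prints "daemon started"
    }
}
```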
As things like KFS arrive on the scene it is all the more important to
keep Hadoop modular. Mapreduce and HDFS should be kept independent.
For example, if a site wishes to use mapreduce on top of KFS, and if KFS
supports a different logging mechanism, then it will be convenient if
the mapred code does not presume a specific logging implementation.
Doug
Re: JCL vs SLF4J
Posted by Lukas Vlcek <lu...@gmail.com>.
Hi,
I didn't follow the whole debate on the Hadoop mailing list about SLF4J vs
JCL, so my comment may be irrelevant. However, apart from the class loader
issues in JCL, did anybody consider the performance benefits of SLF4J over JCL?
I switched to SLF4J recently and found it very handy, especially the
syntax: log.debug("This must have {} performance than {}", "better", "JCL");
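The point of that parameterized style is that the message string is only assembled if the level is enabled, so disabled debug calls avoid string concatenation. A rough stdlib-only sketch of the `{}` substitution (an illustration only, not SLF4J's actual MessageFormatter):

```java
// Illustrative sketch of SLF4J-style "{}" placeholder substitution.
// Not SLF4J's real implementation; shown to make the mechanism concrete.
public class PlaceholderDemo {
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            int j = pattern.indexOf("{}", i);
            if (j < 0 || argIndex >= args.length) {
                // No more placeholders (or arguments): copy the rest verbatim.
                sb.append(pattern.substring(i));
                break;
            }
            sb.append(pattern, i, j).append(args[argIndex++]);
            i = j + 2; // skip past "{}"
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A real logger would call format() only after checking the level.
        System.out.println(
            format("This must have {} performance than {}", "better", "JCL"));
        // prints "This must have better performance than JCL"
    }
}
```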
Regards,
Lukas
On 10/1/07, Doug Cutting <cu...@apache.org> wrote:
>
> Michael Stack wrote:
> > Doug Cutting wrote:
> >> I have used the JVM's built-in logging and found it lacking.
> > Do you remember what the holes in JUL were?
>
> Sorry, I don't remember all the details.
>
> I note that Hadoop uses log4j's DailyRollingFileAppender, and nothing
> like that is present in JUL. And log4j's formatting options are much
> more sophisticated than JUL's SimpleFormatter. Also, with log4j, it's
> easy to configure debug logging for a subset of code, based on package
> wildcards. Such things can be accomplished with JUL, but through the
> addition of code, while log4j moves most of that to the config.
>
> Doug
>
--
http://blog.lukas-vlcek.com/
Re: JCL vs SLF4J
Posted by Doug Cutting <cu...@apache.org>.
Michael Stack wrote:
> Doug Cutting wrote:
>> I have used the JVM's built-in logging and found it lacking.
> Do you remember what the holes in JUL were?
Sorry, I don't remember all the details.
I note that Hadoop uses log4j's DailyRollingFileAppender, and nothing
like that is present in JUL. And log4j's formatting options are much
more sophisticated than JUL's SimpleFormatter. Also, with log4j, it's
easy to configure debug logging for a subset of code, based on package
wildcards. Such things can be accomplished with JUL, but through the
addition of code, while log4j moves most of that to the config.
Doug
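For illustration, the features Doug mentions look roughly like this in a log4j 1.x properties file (the file path and package name below are hypothetical): a daily-rolling appender, a pattern layout, and a per-package level override that needs no code changes.

```properties
# Hypothetical log4j.properties fragment: root logger at INFO,
# writing through a daily-rolling file appender.
log4j.rootLogger=INFO,DRFA

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=/var/log/hadoop/hadoop.log
log4j.appender.DRFA.DatePattern='.'yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2}: %m%n

# Enable debug logging for just one package subtree, purely in config.
log4j.logger.org.apache.hadoop.mapred=DEBUG
```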
Re: JCL vs SLF4J
Posted by Michael Stack <st...@duboce.net>.
Doug Cutting wrote:
> I have used the JVM's built-in logging and found it lacking.
Do you remember what the holes in JUL were? Did Hadoop need the extra
facility provided by log4j's PatternLayout, or some of the more exotic
appenders? Or was it something else? (I don't see anything in the
archives -- maybe I'm searching on the wrong thing.)
Otherwise, the rest of your mail answers my original question as to why
Hadoop suffers the commons-logging pain.
St.Ack