Posted to common-issues@hadoop.apache.org by "Joe Crobak (Commented) (JIRA)" <ji...@apache.org> on 2011/12/05 00:06:39 UTC

[jira] [Commented] (HADOOP-7837) no NullAppender in the log4j config

    [ https://issues.apache.org/jira/browse/HADOOP-7837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13162510#comment-13162510 ] 

Joe Crobak commented on HADOOP-7837:
------------------------------------

I see similar errors when running bin/hadoop, e.g.:

{noformat}
$ bin/hadoop jar hadoop-mapreduce-examples-0.23.0.jar pi \
-Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory \
-libjars modules/hadoop-mapreduce-client-jobclient-0.23.0.jar 16 10000
log4j:ERROR Could not find value for key log4j.appender.NullAppender
log4j:ERROR Could not instantiate appender named "NullAppender".
{noformat}
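One plausible workaround (a sketch, not the official fix for this ticket: it assumes log4j 1.x is on the classpath, which ships org.apache.log4j.varia.NullAppender) is to define the missing appender in etc/hadoop/log4j.properties so the key the scripts reference actually resolves:

{noformat}
# Hypothetical addition to etc/hadoop/log4j.properties:
# define the NullAppender class so "log4j.appender.NullAppender" resolves
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
{noformat}

With that line present, log4j can instantiate the appender by name and the two ERROR lines above should disappear.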
                
> no NullAppender in the log4j config
> -----------------------------------
>
>                 Key: HADOOP-7837
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7837
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 0.23.0
>         Environment: OS/X, no JAVA_HOME set
>            Reporter: Steve Loughran
>            Priority: Minor
>
> running sbin/start-dfs.sh gives me a telling-off about there being no null appender -- should one be in the log4j config file?
> Full trace (the failure itself was expected, but the full output was not):
> {code}
> ./start-dfs.sh 
> log4j:ERROR Could not find value for key log4j.appender.NullAppender
> log4j:ERROR Could not instantiate appender named "NullAppender".
> Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
> Starting namenodes on []
> cat: /Users/slo/Java/Hadoop/versions/hadoop-0.23.0/libexec/../etc/hadoop/slaves: No such file or directory
> cat: /Users/slo/Java/Hadoop/versions/hadoop-0.23.0/libexec/../etc/hadoop/slaves: No such file or directory
> Secondary namenodes are not configured.  Cannot start secondary namenodes.
> {code}
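The "namenode address ... is not configured" error in the trace above is expected in this unconfigured setup, but for reference it can be addressed separately by setting dfs.namenode.rpc-address in etc/hadoop/hdfs-site.xml. A minimal sketch (the hostname and port below are illustrative, not values from this report):

{code}
<configuration>
  <!-- hypothetical example: point clients and daemons at the NameNode RPC endpoint -->
  <property>
    <name>dfs.namenode.rpc-address</name>
    <value>localhost:8020</value>
  </property>
</configuration>
{code}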

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira