Posted to dev@ambari.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2014/10/20 23:42:33 UTC

[jira] [Commented] (AMBARI-7119) log4j does not get used by hadoop as settings are present in hadoop.config.sh

    [ https://issues.apache.org/jira/browse/AMBARI-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177519#comment-14177519 ] 

Hadoop QA commented on AMBARI-7119:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12675901/AMBARI-7119.patch.3
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in ambari-server.

Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/272//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/272//console

This message is automatically generated.

> log4j does not get used by hadoop as settings are present in hadoop.config.sh
> -----------------------------------------------------------------------------
>
>                 Key: AMBARI-7119
>                 URL: https://issues.apache.org/jira/browse/AMBARI-7119
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Dmitry Lysnichenko
>             Fix For: 1.7.0
>
>         Attachments: AMBARI-7119.patch.2, AMBARI-7119.patch.3
>
>
> PROBLEM: log4j settings made via Ambari update the log4j file but do not take
> effect when restarting HDFS. It seems there are hardcoded settings in
> /usr/lib/hadoop/libexec/hadoop-config.sh, such as this at line 221:
> HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"
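> For context, ${HADOOP_ROOT_LOGGER:-INFO,console} expands to the value of
> HADOOP_ROOT_LOGGER when that variable is set and non-empty, and to INFO,console
> otherwise, so -Dhadoop.root.logger=INFO,console lands in HADOOP_OPTS whenever the
> variable is not exported before hadoop-config.sh is sourced. A minimal sketch of
> that expansion (the echo lines are illustrative only, not part of hadoop-config.sh):
> {code}
> #!/usr/bin/env bash
> # Variable unset: the :- fallback applies, matching what hadoop-config.sh appends.
> unset HADOOP_ROOT_LOGGER
> echo "-Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"  # INFO,console
>
> # Variable exported beforehand: the fallback is skipped.
> export HADOOP_ROOT_LOGGER="INFO,DRFA"
> echo "-Dhadoop.root.logger=${HADOOP_ROOT_LOGGER:-INFO,console}"  # INFO,DRFA
> {code}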
> BUSINESS IMPACT: Customers have to change core files or set environment
> variables explicitly by setting up a profile script
> STEPS TO REPRODUCE: Log in to Ambari and change the log4j properties so that
> hadoop.root.logger=INFO,DRFA. The log4j file is updated in /etc/hadoop/conf.
> Restart the HDFS service. Run ps -ef | grep <PID> for the NameNode process. The
> process shows duplicate entries for several properties and does not show the
> logging change. Here is the duplication and incorrect root logger setting seen
> locally in testing (a verification sketch follows the output):
> hdfs 4304 1 14 07:26 ? 00:00:10 /usr/jdk64/jdk1.7.0_45/bin/java
> -Dproc_namenode -Xmx1024m -Djava.net.preferIPv4Stack=true
> -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true
> -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log
> -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs
> -Dhadoop.root.logger=INFO,console
> -Djava.library.path=:/usr/lib/hadoop/lib/native/Linux-
> amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml
> -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs
> -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log
> -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hdfs
> -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/lib/hadoop/lib/native
> /Linux-amd64-64:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native/Linux-
> amd64-64:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native
> -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server
> -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC
> -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=100m
> -XX:MaxNewSize=50m -Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726
> -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS
> -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=8
> -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
> -XX:NewSize=100m -XX:MaxNewSize=50m
> -Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726 -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m
> -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS
> -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=8
> -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log
> -XX:NewSize=100m -XX:MaxNewSize=50m
> -Xloggc:/var/log/hadoop/hdfs/gc.log-201407140726 -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m
> -Xmx1024m -Dhadoop.security.logger=INFO,DRFA
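> As an illustration of the check above (the pipeline below is an assumption, not
> part of the original report; the grep pattern comes from the -Dproc_namenode flag
> visible in the output), the effective logger flags can be read straight from the
> running NameNode command line:
> {code}
> # Locate the NameNode java process via its -Dproc_namenode flag, print each
> # argument on its own line, and keep only the logger flags; duplicates repeat.
> ps -ef | grep '[p]roc_namenode' | tr ' ' '\n' | grep -E 'root\.logger|audit\.logger'
> {code}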
> ACTUAL BEHAVIOR: log4j changes made in Ambari are not reflected in the running
> process. It seems there are values set in /usr/lib/hadoop/libexec/hadoop-config.sh
> that override them no matter what. There is also duplication of settings,
> presumably coming from hadoop-config.sh as well.
> EXPECTED BEHAVIOR: Settings made in Ambari should be persisted and used by the
> process.
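> One possible direction, sketched here purely as an assumption rather than taken
> from the attached patches: have the Ambari-managed hadoop-env.sh export the
> configured loggers before hadoop-config.sh is sourced, so its :- fallbacks never
> inject INFO,console. HADOOP_ROOT_LOGGER and HDFS_AUDIT_LOGGER are the variable
> names used by the stock Hadoop scripts; the values below mirror the settings this
> report expects.
> {code}
> # Hypothetical hadoop-env.sh fragment (illustrative only, not the attached patch):
> export HADOOP_ROOT_LOGGER=${HADOOP_ROOT_LOGGER:-"INFO,DRFA"}
> export HDFS_AUDIT_LOGGER=${HDFS_AUDIT_LOGGER:-"INFO,DRFAAUDIT"}
> {code}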
> SUPPORT ANALYSIS: Support made changes to log4j in Ambari on a test cluster
> and they were not used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)