Posted to dev@ambari.apache.org by "Dmitry Lysnichenko (JIRA)" <ji...@apache.org> on 2014/11/05 20:43:35 UTC

[jira] [Created] (AMBARI-8171) DataNode maximum java heap size is not changed in hadoop-env.sh file after UI changes

Dmitry Lysnichenko created AMBARI-8171:
------------------------------------------

             Summary: DataNode maximum java heap size is not changed in hadoop-env.sh file after UI changes
                 Key: AMBARI-8171
                 URL: https://issues.apache.org/jira/browse/AMBARI-8171
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
    Affects Versions: 1.7.0
         Environment: EC2 cluster with HDFS (stack2.2)
ambari-server --hash
0826d255886d4e63a44688f2070c2a96ab46ed7c
rpm -qa | grep ambari-server
ambari-server-1.7.0-141
            Reporter: Dmitry Lysnichenko
            Assignee: Dmitry Lysnichenko
             Fix For: 1.7.0


STR:
1) Go to the HDFS -> Configs tab
2) Change the "DataNode maximum java heap size" property in the UI
3) Save the changes
4) Restart all services that require it
5) Check the hadoop-env.sh file
Actual result:
The file was not updated to match the UI; it still contains the default value (1024, not the new value 1026)
etc/hadoop/conf # cat hadoop-env.sh | grep DATANODE
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms1024m -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}"
Expected result:
The file is updated to match the value set in the UI.
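A quick way to check which heap value actually ended up in the rendered file is to extract the -Xmx flag from the HADOOP_DATANODE_OPTS line. The sketch below runs against a sample line copied from the report rather than the live /etc/hadoop/conf/hadoop-env.sh, so it is self-contained:

```shell
# Sample of the relevant line from hadoop-env.sh (trimmed; not read from the real file)
line='export HADOOP_DATANODE_OPTS="-server -Xms1024m -Xmx1024m ${HADOOP_DATANODE_OPTS}"'

# Extract the maximum heap setting; -e lets the pattern start with a dash
printf '%s\n' "$line" | grep -o -e '-Xmx[0-9]*m'
# Prints -Xmx1024m; after a successful UI change it should print -Xmx1026m instead
```

On a live cluster, point the same grep at /etc/hadoop/conf/hadoop-env.sh after the restart in step 4.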



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)