Posted to issues@bigtop.apache.org by "Masatake Iwasaki (Jira)" <ji...@apache.org> on 2021/02/23 12:46:00 UTC
[jira] [Commented] (BIGTOP-3466) HDFS default command line values
not overridden if started with 'hdfs' command instead of initscripts
[ https://issues.apache.org/jira/browse/BIGTOP-3466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17289061#comment-17289061 ]
Masatake Iwasaki commented on BIGTOP-3466:
------------------------------------------
[~seys] Configurations under /etc/default are intended to be used by the daemons. For your use case, I think you should set the heap size via HADOOP_OPTS, for example:
{noformat}
$ HADOOP_OPTS="-Xmx64g" hdfs namenode ...
{noformat}
While setting HDFS_NAMENODE_OPTS in /etc/hadoop/conf/hadoop-env.sh could be another option, you should be careful because the hadoop-env.sh is used by both CLI and daemons.
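For illustration, a sketch of the hadoop-env.sh approach (the -Xmx64g value is just the example above, adjust to your environment; note the variable is named HADOOP_NAMENODE_OPTS on Hadoop 2 lines, as in Bigtop's bin/hdfs):
{noformat}
# /etc/hadoop/conf/hadoop-env.sh
# Picked up only by 'hdfs namenode' invocations (daemon or manual start),
# unlike HADOOP_OPTS which would apply to every hadoop/hdfs command:
export HDFS_NAMENODE_OPTS="-Xmx64g ${HDFS_NAMENODE_OPTS}"
{noformat}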
> HDFS default command line values not overridden if started with 'hdfs' command instead of initscripts
> -----------------------------------------------------------------------------------------------------
>
> Key: BIGTOP-3466
> URL: https://issues.apache.org/jira/browse/BIGTOP-3466
> Project: Bigtop
> Issue Type: Bug
> Components: hadoop, Init scripts
> Affects Versions: 1.5.0
> Environment: CentOS 7
> Reporter: chad
> Priority: Major
>
> Hi all, thanks for your hard work!
> When upgrading to Bigtop 1.5.0 I followed the [instructions|https://hadoop.apache.org/docs/r2.10.1/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html] for a rolling upgrade of HDFS. These instructions have one start the namenode daemon from the command line, such as '[hdfs dfsadmin -rollingUpgrade started|https://hadoop.apache.org/docs/r2.10.1/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade]'. This bypasses the environment variables that are added when the namenode is started by the init script.
> Specifically, /etc/init.d/hadoop-hdfs overrides and adds environment variables here:
> [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
> But if the namenode is started by the above command, that sourcing never happens. (In our case the default Java heap is too small and the namenode fails to start.)
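> For illustration, a defaults file such as /etc/default/hadoop-hdfs-namenode (the usual BIGTOP_DEFAULTS_DIR location; the contents here are a hypothetical sketch) might carry the heap setting:
> export HADOOP_NAMENODE_OPTS="-Xmx64g ${HADOOP_NAMENODE_OPTS}"
> So a daemon started by the init script gets the larger heap, while a namenode started directly with 'hdfs namenode' does not.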
> Possibly the sourcing should occur in /usr/lib/hadoop-hdfs/bin/hdfs about here:
> if [ "$COMMAND" = "namenode" ] ; then
>   CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
>   #>>> [ -n "${BIGTOP_DEFAULTS_DIR}" -a -r ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode ] && . ${BIGTOP_DEFAULTS_DIR}/hadoop-hdfs-namenode
>   HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
> This is true for the other HDFS daemon types (datanode, journalnode, ...) as well.
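> (A possible manual workaround, assuming BIGTOP_DEFAULTS_DIR is the default /etc/default and the defaults file exports its variables, would be to source the file in the shell before starting the daemon by hand:
> $ . /etc/default/hadoop-hdfs-namenode && hdfs namenode ...
> but it would be nicer if the 'hdfs' script did this itself.)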
> Have a good one!
> C.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)