Posted to common-issues@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2009/07/23 22:13:14 UTC
[jira] Commented: (HADOOP-6168) HADOOP_HEAPSIZE cannot be done per-server easily
[ https://issues.apache.org/jira/browse/HADOOP-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12734759#action_12734759 ]
Allen Wittenauer commented on HADOOP-6168:
------------------------------------------
So bin/hadoop has this logic:
JAVA_HEAP_MAX=-Xmx1000m

# check envvars which might override default args
if [ "$HADOOP_HEAPSIZE" != "" ]; then
  #echo "run with heapsize $HADOOP_HEAPSIZE"
  JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
  #echo $JAVA_HEAP_MAX
fi
This makes it impossible to create a hadoop-env.sh that is truly generic to all nodes. It would be better to move this HEAPSIZE management into hadoop-env.sh, so that it can be easily overridden on a per-server basis.
> HADOOP_HEAPSIZE cannot be done per-server easily
> ------------------------------------------------
>
> Key: HADOOP-6168
> URL: https://issues.apache.org/jira/browse/HADOOP-6168
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 0.18.3
> Reporter: Allen Wittenauer
>
> The hadoop script forces a heap that cannot be easily overridden if one wants to push the same config everywhere.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.