Posted to dev@ambari.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2015/04/30 02:50:05 UTC

[jira] [Commented] (AMBARI-10837) HDFS Review: Multiple recommendation API updates for HDFS configs

    [ https://issues.apache.org/jira/browse/AMBARI-10837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520636#comment-14520636 ] 

Hudson commented on AMBARI-10837:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #2480 (See [https://builds.apache.org/job/Ambari-trunk-Commit/2480/])
AMBARI-10837. HDFS Review: Multiple recommendation API updates for HDFS configs (mpapirkovskyy via srimanth) (sgunturi: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=54be9c4e67a035a4316ff8784c5686fd84f04141)
* ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py
* ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py
* ambari-server/src/main/resources/stacks/HDP/2.2/services/HDFS/themes/theme.json
* ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/configuration/hdfs-site.xml


> HDFS Review: Multiple recommendation API updates for HDFS configs
> -----------------------------------------------------------------
>
>                 Key: AMBARI-10837
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10837
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.1.0
>            Reporter: Myroslav Papirkovskyy
>            Assignee: Myroslav Papirkovskyy
>            Priority: Critical
>             Fix For: 2.1.0
>
>
> The HDFS configs review was done and the configs spreadsheet has been updated; the following changes must be made.
> * The configs below should be marked as {{depends_on}} {{namenode_heapsize}}, and their values should be derived from it (i.e., ignore the documented values for these configs). Whenever {{namenode_heapsize}} changes in the UI, these config values should also be updated:
> ** namenode_opt_newsize (hadoop-env.sh) = {{namenode_heapsize/8}}
> ** namenode_opt_maxnewsize (hadoop-env.sh) = {{namenode_heapsize/8}}
> * {{dfs.namenode.safemode.threshold-pct}}
> ** minimum = 0.990f
> ** maximum = 1.000f
> ** default = 0.999f
> ** increment-step = 0.001f
> * {{dfs.datanode.failed.volumes.tolerated}} should be marked as {{depends_on}} {{dfs.datanode.data.dir}}, so that if a user adds an additional folder to {{dfs.datanode.data.dir}}, the *value and maximum* of {{dfs.datanode.failed.volumes.tolerated}} change accordingly.
> * {{namenode_heapsize}} calculations should take host memory limits into account: {{namenode_heapsize}} should be {{host-memory - os-reserved-memory}}, and if any other master components are on the same host it should be halved ({{namenode_heapsize/2}}). A sketch of these rules is shown below.
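
To illustrate the rules above, here is a minimal, self-contained Python sketch. It intentionally does not use the real Ambari {{StackAdvisor}} API: the function name {{recommend_hdfs_configs}}, its parameters, the 2048 MB OS reservation, and the exact policy for {{dfs.datanode.failed.volumes.tolerated}} are illustrative assumptions, not what stack_advisor.py actually ships.

{code:python}
# Minimal sketch of the recommendation rules above. Assumptions (not from the
# ticket): the 2048 MB OS reservation, the function/parameter names, and the
# policy that failed.volumes.tolerated defaults to (number of data dirs - 1).

OS_RESERVED_MB = 2048  # assumed OS reservation in MB


def recommend_hdfs_configs(host_memory_mb, other_masters_on_host, data_dirs):
    """Return (properties, value_attributes) dicts keyed by config file."""
    # namenode_heapsize = host memory minus the OS reservation,
    # halved when other master components share the NameNode host.
    namenode_heapsize = host_memory_mb - OS_RESERVED_MB
    if other_masters_on_host:
        namenode_heapsize //= 2

    # New-generation sizes are derived from the heap size (heap / 8), so they
    # must be recomputed whenever namenode_heapsize changes.
    namenode_opt_newsize = namenode_heapsize // 8

    # Value and maximum of dfs.datanode.failed.volumes.tolerated track the
    # number of entries in dfs.datanode.data.dir.
    volume_count = len([d for d in data_dirs.split(",") if d.strip()])
    failed_volumes_tolerated = max(0, volume_count - 1)  # illustrative policy

    properties = {
        "hadoop-env": {
            "namenode_heapsize": "%dm" % namenode_heapsize,
            "namenode_opt_newsize": "%dm" % namenode_opt_newsize,
            "namenode_opt_maxnewsize": "%dm" % namenode_opt_newsize,
        },
        "hdfs-site": {
            "dfs.datanode.failed.volumes.tolerated": str(failed_volumes_tolerated),
            "dfs.namenode.safemode.threshold-pct": "0.999",
        },
    }
    value_attributes = {
        "hdfs-site": {
            "dfs.datanode.failed.volumes.tolerated": {"maximum": str(volume_count)},
            "dfs.namenode.safemode.threshold-pct": {
                "minimum": "0.990",
                "maximum": "1.000",
                "increment_step": "0.001",
            },
        },
    }
    return properties, value_attributes


if __name__ == "__main__":
    props, attrs = recommend_hdfs_configs(
        host_memory_mb=16384,
        other_masters_on_host=True,
        data_dirs="/hadoop/hdfs/data,/mnt/disk1/hdfs/data",
    )
    print(props)
    print(attrs)
{code}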


