Posted to dev@ambari.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2015/04/03 20:18:53 UTC

[jira] [Commented] (AMBARI-10351) BE: YARN stack advisor not recommending all dependent configs

    [ https://issues.apache.org/jira/browse/AMBARI-10351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394830#comment-14394830 ] 

Hudson commented on AMBARI-10351:
---------------------------------

SUCCESS: Integrated in Ambari-trunk-Commit #2198 (See [https://builds.apache.org/job/Ambari-trunk-Commit/2198/])
AMBARI-10351 BE: YARN stack advisor not recommending all dependent configs (dsen) (dsen: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=b3fc39395d82b8b37cb2d2e5b9b8e3edec3cd0bf)
* ambari-web/pom.xml
* ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py
* ambari-server/src/test/python/stacks/2.2/common/test_stack_advisor.py
* ambari-server/src/main/resources/stacks/HDP/2.2/services/stack_advisor.py


> BE: YARN stack advisor not recommending all dependent configs
> -------------------------------------------------------------
>
>                 Key: AMBARI-10351
>                 URL: https://issues.apache.org/jira/browse/AMBARI-10351
>             Project: Ambari
>          Issue Type: Task
>          Components: stacks
>    Affects Versions: 2.1.0
>            Reporter: Dmytro Sen
>            Assignee: Dmytro Sen
>            Priority: Critical
>             Fix For: 2.1.0
>
>         Attachments: AMBARI-10351.patch
>
>
> Changing {{yarn.scheduler.minimum-allocation-mb}} should affect:
> * *Directly*
> ** mapreduce.map.memory.mb
> ** mapreduce.reduce.memory.mb
> ** yarn.app.mapreduce.am.resource.mb
> * *Indirectly*
> ** mapreduce.map.java.opts
> ** mapreduce.reduce.java.opts
> ** yarn.app.mapreduce.am.command-opts
> ** mapreduce.task.io.sort.mb
> However, when the POST request below is made
> {code}
> {
>   "recommend": "configuration-dependencies",
>   "changed_configurations": [
>     {
>       "type": "yarn-site",
>       "name": "yarn.scheduler.minimum-allocation-mb"
>     }
>   ],
>   "hosts": [
>     "c6401.ambari.apache.org",
>     "c6402.ambari.apache.org",
>     "c6403.ambari.apache.org"
>   ],
>   "services": [
>     "HDFS",
>     "MAPREDUCE2",
>     "YARN",
>     "TEZ",
>     "HIVE",
>     "HBASE",
>     "PIG",
>     "ZOOKEEPER"
>   ],
>   "recommendations": {
>     "blueprint": {
>       "host_groups": [
>         {
>           "name": "host-group-1",
>           "components": [
>             {
>               "name": "HBASE_MASTER"
>             },
>             {
>               "name": "NAMENODE"
>             },
>             {
>               "name": "ZOOKEEPER_SERVER"
>             },
>             {
>               "name": "HBASE_REGIONSERVER"
>             },
>             {
>               "name": "DATANODE"
>             },
>             {
>               "name": "NFS_GATEWAY"
>             },
>             {
>               "name": "NODEMANAGER"
>             },
>             {
>               "name": "HBASE_CLIENT"
>             },
>             {
>               "name": "HDFS_CLIENT"
>             },
>             {
>               "name": "HCAT"
>             },
>             {
>               "name": "HIVE_CLIENT"
>             },
>             {
>               "name": "MAPREDUCE2_CLIENT"
>             },
>             {
>               "name": "PIG"
>             },
>             {
>               "name": "TEZ_CLIENT"
>             },
>             {
>               "name": "YARN_CLIENT"
>             },
>             {
>               "name": "ZOOKEEPER_CLIENT"
>             }
>           ]
>         },
>         {
>           "name": "host-group-2",
>           "components": [
>             {
>               "name": "APP_TIMELINE_SERVER"
>             },
>             {
>               "name": "HISTORYSERVER"
>             },
>             {
>               "name": "HIVE_METASTORE"
>             },
>             {
>               "name": "HIVE_SERVER"
>             },
>             {
>               "name": "MYSQL_SERVER"
>             },
>             {
>               "name": "RESOURCEMANAGER"
>             },
>             {
>               "name": "SECONDARY_NAMENODE"
>             },
>             {
>               "name": "WEBHCAT_SERVER"
>             },
>             {
>               "name": "ZOOKEEPER_SERVER"
>             },
>             {
>               "name": "HBASE_REGIONSERVER"
>             },
>             {
>               "name": "DATANODE"
>             },
>             {
>               "name": "NFS_GATEWAY"
>             },
>             {
>               "name": "NODEMANAGER"
>             },
>             {
>               "name": "HDFS_CLIENT"
>             },
>             {
>               "name": "MAPREDUCE2_CLIENT"
>             },
>             {
>               "name": "PIG"
>             },
>             {
>               "name": "TEZ_CLIENT"
>             },
>             {
>               "name": "YARN_CLIENT"
>             },
>             {
>               "name": "ZOOKEEPER_CLIENT"
>             }
>           ]
>         },
>         {
>           "name": "host-group-3",
>           "components": [
>             {
>               "name": "ZOOKEEPER_SERVER"
>             },
>             {
>               "name": "HBASE_REGIONSERVER"
>             },
>             {
>               "name": "DATANODE"
>             },
>             {
>               "name": "NFS_GATEWAY"
>             },
>             {
>               "name": "NODEMANAGER"
>             },
>             {
>               "name": "HBASE_CLIENT"
>             },
>             {
>               "name": "HDFS_CLIENT"
>             },
>             {
>               "name": "HCAT"
>             },
>             {
>               "name": "HIVE_CLIENT"
>             },
>             {
>               "name": "MAPREDUCE2_CLIENT"
>             },
>             {
>               "name": "PIG"
>             },
>             {
>               "name": "TEZ_CLIENT"
>             },
>             {
>               "name": "YARN_CLIENT"
>             },
>             {
>               "name": "ZOOKEEPER_CLIENT"
>             }
>           ]
>         }
>       ],
>       "configurations": {
>         "mapred-env": {
>           "properties": {
>             "content": "\n# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\n\nexport HADOOP_JOB_HISTORYSERVER_HEAPSIZE={{jobhistory_heapsize}}\n\nexport HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA\n\n#export HADOOP_JOB_HISTORYSERVER_OPTS=\n#export HADOOP_MAPRED_LOG_DIR=\"\" # Where log files are stored.  $HADOOP_MAPRED_HOME/logs by default.\n#export HADOOP_JHS_LOGGER=INFO,RFA # Hadoop JobSummary logger.\n#export HADOOP_MAPRED_PID_DIR= # The pid files are stored. /tmp by default.\n#export HADOOP_MAPRED_IDENT_STRING= #A string representing this instance of hadoop. $USER by default\n#export HADOOP_MAPRED_NICENESS= #The scheduling priority for daemons. Defaults to 0.\nexport HADOOP_OPTS=\"-Dhdp.version=$HDP_VERSION $HADOOP_OPTS\"",
>             "jobhistory_heapsize": "900",
>             "mapred_log_dir_prefix": "/var/log/hadoop-mapreduce",
>             "mapred_pid_dir_prefix": "/var/run/hadoop-mapreduce",
>             "mapred_user": "mapred",
>             "hs_host": false
>           }
>         },
>         "mapred-site": {
>           "properties": {
>             "mapreduce.map.memory.mb": "682",
>             "mapreduce.reduce.memory.mb": "682",
>             "mapreduce.task.io.sort.mb": "273",
>             "yarn.app.mapreduce.am.resource.mb": "682",
>             "mapreduce.admin.map.child.java.opts": "-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}",
>             "mapreduce.admin.reduce.child.java.opts": "-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}",
>             "mapreduce.admin.user.env": "LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64",
>             "mapreduce.am.max-attempts": "2",
>             "mapreduce.application.classpath": "$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure",
>             "mapreduce.application.framework.path": "/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework",
>             "mapreduce.cluster.administrators": " hadoop",
>             "mapreduce.framework.name": "yarn",
>             "mapreduce.job.emit-timeline-data": "false",
>             "mapreduce.job.reduce.slowstart.completedmaps": "0.05",
>             "mapreduce.jobhistory.address": "c6402.ambari.apache.org:10020",
>             "mapreduce.jobhistory.bind-host": "0.0.0.0",
>             "mapreduce.jobhistory.done-dir": "/mr-history/done",
>             "mapreduce.jobhistory.intermediate-done-dir": "/mr-history/tmp",
>             "mapreduce.jobhistory.webapp.address": "c6402.ambari.apache.org:19888",
>             "mapreduce.map.java.opts": "-Xmx546m",
>             "mapreduce.map.log.level": "INFO",
>             "mapreduce.map.output.compress": "false",
>             "mapreduce.map.sort.spill.percent": "0.7",
>             "mapreduce.map.speculative": "false",
>             "mapreduce.output.fileoutputformat.compress": "false",
>             "mapreduce.output.fileoutputformat.compress.type": "BLOCK",
>             "mapreduce.reduce.input.buffer.percent": "0.0",
>             "mapreduce.reduce.java.opts": "-Xmx546m",
>             "mapreduce.reduce.log.level": "INFO",
>             "mapreduce.reduce.shuffle.fetch.retry.enabled": "1",
>             "mapreduce.reduce.shuffle.fetch.retry.interval-ms": "1000",
>             "mapreduce.reduce.shuffle.fetch.retry.timeout-ms": "30000",
>             "mapreduce.reduce.shuffle.input.buffer.percent": "0.7",
>             "mapreduce.reduce.shuffle.merge.percent": "0.66",
>             "mapreduce.reduce.shuffle.parallelcopies": "30",
>             "mapreduce.reduce.speculative": "false",
>             "mapreduce.shuffle.port": "13562",
>             "mapreduce.task.io.sort.factor": "100",
>             "mapreduce.task.timeout": "300000",
>             "yarn.app.mapreduce.am.admin-command-opts": "-Dhdp.version=${hdp.version}",
>             "yarn.app.mapreduce.am.command-opts": "-Xmx546m -Dhdp.version=${hdp.version}",
>             "yarn.app.mapreduce.am.log.level": "INFO",
>             "yarn.app.mapreduce.am.staging-dir": "/user"
>           }
>         },
>         "capacity-scheduler": {
>           "properties": {
>             "capacity-scheduler": "yarn.scheduler.capacity.default.minimum-user-limit-percent=100\nyarn.scheduler.capacity.maximum-am-resource-percent=0.2\nyarn.scheduler.capacity.maximum-applications=10000\nyarn.scheduler.capacity.node-locality-delay=40\nyarn.scheduler.capacity.resource-calculator=org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator\nyarn.scheduler.capacity.root.accessible-node-labels=*\nyarn.scheduler.capacity.root.accessible-node-labels.default.capacity=-1\nyarn.scheduler.capacity.root.accessible-node-labels.default.maximum-capacity=-1\nyarn.scheduler.capacity.root.acl_administer_queue=*\nyarn.scheduler.capacity.root.capacity=100\nyarn.scheduler.capacity.root.default-node-label-expression= \nyarn.scheduler.capacity.root.default.acl_administer_jobs=*\nyarn.scheduler.capacity.root.default.acl_submit_applications=*\nyarn.scheduler.capacity.root.default.capacity=100\nyarn.scheduler.capacity.root.default.maximum-capacity=100\nyarn.scheduler.capacity.root.default.state=RUNNING\nyarn.scheduler.capacity.root.default.user-limit-factor=1\nyarn.scheduler.capacity.root.queues=default\n"
>           }
>         },
>         "yarn-env": {
>           "properties": {
>             "content": "\nexport HADOOP_YARN_HOME={{hadoop_yarn_home}}\nexport YARN_LOG_DIR={{yarn_log_dir_prefix}}/$USER\nexport YARN_PID_DIR={{yarn_pid_dir_prefix}}/$USER\nexport HADOOP_LIBEXEC_DIR={{hadoop_libexec_dir}}\nexport JAVA_HOME={{java64_home}}\n\n# User for YARN daemons\nexport HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}\n\n# resolve links - $0 may be a softlink\nexport YARN_CONF_DIR=\"${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}\"\n\n# some Java parameters\n# export JAVA_HOME=/home/y/libexec/jdk1.6.0/\nif [ \"$JAVA_HOME\" != \"\" ]; then\n  #echo \"run java in $JAVA_HOME\"\n  JAVA_HOME=$JAVA_HOME\nfi\n\nif [ \"$JAVA_HOME\" = \"\" ]; then\n  echo \"Error: JAVA_HOME is not set.\"\n  exit 1\nfi\n\nJAVA=$JAVA_HOME/bin/java\nJAVA_HEAP_MAX=-Xmx1000m\n\n# For setting YARN specific HEAP sizes please use this\n# Parameter and set appropriately\nYARN_HEAPSIZE={{yarn_heapsize}}\n\n# check envvars which might override default args\nif [ \"$YARN_HEAPSIZE\" != \"\" ]; then\n  JAVA_HEAP_MAX=\"-Xmx\"\"$YARN_HEAPSIZE\"\"m\"\nfi\n\n# Resource Manager specific parameters\n\n# Specify the max Heapsize for the ResourceManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_RESOURCEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_RESOURCEMANAGER_HEAPSIZE={{resourcemanager_heapsize}}\n\n# Specify the JVM options to be used when starting the ResourceManager.\n# These options will be appended to the options specified as YARN_OPTS\n# and therefore may override any similar flags set in YARN_OPTS\n#export YARN_RESOURCEMANAGER_OPTS=\n\n# Node Manager specific parameters\n\n# Specify the max Heapsize for the NodeManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1000.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_NODEMANAGER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_NODEMANAGER_HEAPSIZE={{nodemanager_heapsize}}\n\n# Specify the max Heapsize for the HistoryManager using a numerical value\n# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set\n# the value to 1024.\n# This value will be overridden by an Xmx setting specified in either YARN_OPTS\n# and/or YARN_HISTORYSERVER_OPTS.\n# If not specified, the default value will be picked from either YARN_HEAPMAX\n# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.\nexport YARN_HISTORYSERVER_HEAPSIZE={{apptimelineserver_heapsize}}\n\n# Specify the JVM options to be used when starting the NodeManager.\n# These options will be appended to the options specified as YARN_OPTS\n# and therefore may override any similar flags set in YARN_OPTS\n#export YARN_NODEMANAGER_OPTS=\n\n# so that filenames w/ spaces are handled correctly in loops below\nIFS=\n\n\n# default log directory and file\nif [ \"$YARN_LOG_DIR\" = \"\" ]; then\n  YARN_LOG_DIR=\"$HADOOP_YARN_HOME/logs\"\nfi\nif [ \"$YARN_LOGFILE\" = \"\" ]; then\n  YARN_LOGFILE='yarn.log'\nfi\n\n# default policy file for service-level authorization\nif [ \"$YARN_POLICYFILE\" = \"\" ]; then\n  YARN_POLICYFILE=\"hadoop-policy.xml\"\nfi\n\n# restore ordinary behaviour\nunset IFS\n\n\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR\"\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING\"\nYARN_OPTS=\"$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}\"\nYARN_OPTS=\"$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}\"\nif [ \"x$JAVA_LIBRARY_PATH\" != \"x\" ]; then\n  YARN_OPTS=\"$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH\"\nfi\nYARN_OPTS=\"$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE\"",
>             "yarn_heapsize": "1024",
>             "resourcemanager_heapsize": "1024",
>             "nodemanager_heapsize": "1024",
>             "yarn_log_dir_prefix": "/var/log/hadoop-yarn",
>             "yarn_pid_dir_prefix": "/var/run/hadoop-yarn",
>             "min_user_id": "500",
>             "apptimelineserver_heapsize": "1024",
>             "yarn_user": "yarn",
>             "ats_host": "c6402.ambari.apache.org",
>             "rm_host": "c6402.ambari.apache.org"
>           }
>         },
>         "yarn-log4j": {
>           "properties": {
>             "content": "\n#Relative to Yarn Log Dir Prefix\nyarn.log.dir=.\n#\n# Job Summary Appender\n#\n# Use following logger to send summary to separate file defined by\n# hadoop.mapreduce.jobsummary.log.file rolled daily:\n# hadoop.mapreduce.jobsummary.logger=INFO,JSA\n#\nhadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}\nhadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log\nlog4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender\n# Set the ResourceManager summary log filename\nyarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log\n# Set the ResourceManager summary log level and appender\nyarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}\n#yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY\n\n# To enable AppSummaryLogging for the RM,\n# set yarn.server.resourcemanager.appsummary.logger to\n# LEVEL,RMSUMMARY in hadoop-env.sh\n\n# Appender for ResourceManager Application Summary Log\n# Requires the following properties to be set\n#    - hadoop.log.dir (Hadoop Log directory)\n#    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)\n#    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)\nlog4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender\nlog4j.appender.RMSUMMARY.File=${yarn.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}\nlog4j.appender.RMSUMMARY.MaxFileSize=256MB\nlog4j.appender.RMSUMMARY.MaxBackupIndex=20\nlog4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n\nlog4j.appender.JSA.DatePattern=.yyyy-MM-dd\nlog4j.appender.JSA.layout=org.apache.log4j.PatternLayout\nlog4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}\nlog4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false"
>           }
>         },
>         "yarn-site": {
>           "properties": {
>             "yarn.acl.enable": "false",
>             "yarn.admin.acl": "",
>             "yarn.log-aggregation-enable": "true",
>             "yarn.resourcemanager.scheduler.class": "org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler",
>             "yarn.scheduler.minimum-allocation-mb": "1024",
>             "yarn.scheduler.maximum-allocation-mb": "2048",
>             "yarn.nodemanager.resource.memory-mb": "2048",
>             "yarn.nodemanager.vmem-pmem-ratio": "2.1",
>             "yarn.nodemanager.linux-container-executor.group": "hadoop",
>             "yarn.nodemanager.log-dirs": "/hadoop/yarn/log",
>             "yarn.nodemanager.local-dirs": "/hadoop/yarn/local",
>             "yarn.nodemanager.remote-app-log-dir": "/app-logs",
>             "yarn.nodemanager.remote-app-log-dir-suffix": "logs",
>             "yarn.nodemanager.aux-services": "mapreduce_shuffle",
>             "yarn.nodemanager.log.retain-second": "604800",
>             "yarn.log.server.url": "http://c6402.ambari.apache.org:19888/jobhistory/logs",
>             "yarn.timeline-service.enabled": "true",
>             "yarn.timeline-service.leveldb-timeline-store.path": "/hadoop/yarn/timeline",
>             "yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms": "300000",
>             "yarn.timeline-service.store-class": "org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore",
>             "yarn.timeline-service.ttl-enable": "true",
>             "yarn.timeline-service.ttl-ms": "2678400000",
>             "yarn.timeline-service.generic-application-history.store-class": "org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore",
>             "yarn.timeline-service.webapp.address": "c6402.ambari.apache.org:8188",
>             "yarn.timeline-service.webapp.https.address": "c6402.ambari.apache.org:8190",
>             "yarn.timeline-service.address": "c6402.ambari.apache.org:10200",
>             "hadoop.registry.rm.enabled": "false",
>             "hadoop.registry.zk.quorum": "c6402.ambari.apache.org:2181,c6401.ambari.apache.org:2181,c6403.ambari.apache.org:2181",
>             "yarn.nodemanager.recovery.enabled": "true",
>             "yarn.resourcemanager.recovery.enabled": "true",
>             "yarn.resourcemanager.work-preserving-recovery.enabled": "true",
>             "yarn.resourcemanager.zk-address": "c6402.ambari.apache.org:2181",
>             "yarn.resourcemanager.connect.retry-interval.ms": "30000",
>             "yarn.resourcemanager.connect.max-wait.ms": "900000",
>             "yarn.resourcemanager.ha.enabled": "false",
>             "yarn.nodemanager.container-executor.class": "org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor",
>             "yarn.nodemanager.linux-container-executor.resources-handler.class": "org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler",
>             "yarn.nodemanager.linux-container-executor.cgroups.hierarchy": "hadoop-yarn",
>             "yarn.nodemanager.linux-container-executor.cgroups.mount": "false",
>             "yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage": "false",
>             "yarn.nodemanager.resource.cpu-vcores": "2",
>             "yarn.nodemanager.resource.percentage-physical-cpu-limit": "80",
>             "yarn.application.classpath": "$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*",
>             "yarn.client.nodemanager-connect.max-wait-ms": "60000",
>             "yarn.client.nodemanager-connect.retry-interval-ms": "10000",
>             "yarn.http.policy": "HTTP_ONLY",
>             "yarn.log-aggregation.retain-seconds": "2592000",
>             "yarn.node-labels.enabled": "false",
>             "yarn.node-labels.fs-store.retry-policy-spec": "2000, 500",
>             "yarn.node-labels.fs-store.root-dir": "/system/yarn/node-labels",
>             "yarn.node-labels.manager-class": "org.apache.hadoop.yarn.server.resourcemanager.nodelabels.MemoryRMNodeLabelsManager",
>             "yarn.nodemanager.address": "0.0.0.0:45454",
>             "yarn.nodemanager.admin-env": "MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX",
>             "yarn.nodemanager.aux-services.mapreduce_shuffle.class": "org.apache.hadoop.mapred.ShuffleHandler",
>             "yarn.nodemanager.bind-host": "0.0.0.0",
>             "yarn.nodemanager.container-monitor.interval-ms": "3000",
>             "yarn.nodemanager.delete.debug-delay-sec": "0",
>             "yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage": "90",
>             "yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb": "1000",
>             "yarn.nodemanager.disk-health-checker.min-healthy-disks": "0.25",
>             "yarn.nodemanager.health-checker.interval-ms": "135000",
>             "yarn.nodemanager.health-checker.script.timeout-ms": "60000",
>             "yarn.nodemanager.log-aggregation.compression-type": "gz",
>             "yarn.nodemanager.log-aggregation.debug-enabled": "false",
>             "yarn.nodemanager.log-aggregation.num-log-files-per-app": "30",
>             "yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds": "-1",
>             "yarn.nodemanager.recovery.dir": "{{yarn_log_dir_prefix}}/nodemanager/recovery-state",
>             "yarn.nodemanager.vmem-check-enabled": "false",
>             "yarn.resourcemanager.address": "c6402.ambari.apache.org:8050",
>             "yarn.resourcemanager.admin.address": "c6402.ambari.apache.org:8141",
>             "yarn.resourcemanager.am.max-attempts": "2",
>             "yarn.resourcemanager.bind-host": "0.0.0.0",
>             "yarn.resourcemanager.fs.state-store.retry-policy-spec": "2000, 500",
>             "yarn.resourcemanager.fs.state-store.uri": " ",
>             "yarn.resourcemanager.hostname": "c6402.ambari.apache.org",
>             "yarn.resourcemanager.nodes.exclude-path": "/etc/hadoop/conf/yarn.exclude",
>             "yarn.resourcemanager.resource-tracker.address": "c6402.ambari.apache.org:8025",
>             "yarn.resourcemanager.scheduler.address": "c6402.ambari.apache.org:8030",
>             "yarn.resourcemanager.state-store.max-completed-applications": "${yarn.resourcemanager.max-completed-applications}",
>             "yarn.resourcemanager.store.class": "org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore",
>             "yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size": "10",
>             "yarn.resourcemanager.system-metrics-publisher.enabled": "true",
>             "yarn.resourcemanager.webapp.address": "c6402.ambari.apache.org:8088",
>             "yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled": "false",
>             "yarn.resourcemanager.webapp.https.address": "c6402.ambari.apache.org:8090",
>             "yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms": "10000",
>             "yarn.resourcemanager.zk-acl": "world:anyone:rwcda",
>             "yarn.resourcemanager.zk-num-retries": "1000",
>             "yarn.resourcemanager.zk-retry-interval-ms": "1000",
>             "yarn.resourcemanager.zk-state-store.parent-path": "/rmstore",
>             "yarn.resourcemanager.zk-timeout-ms": "10000",
>             "yarn.scheduler.maximum-allocation-vcores": "8",
>             "yarn.scheduler.minimum-allocation-vcores": "1",
>             "yarn.timeline-service.bind-host": "0.0.0.0",
>             "yarn.timeline-service.client.max-retries": "30",
>             "yarn.timeline-service.client.retry-interval-ms": "1000",
>             "yarn.timeline-service.http-authentication.simple.anonymous.allowed": "true",
>             "yarn.timeline-service.http-authentication.type": "simple",
>             "yarn.timeline-service.leveldb-timeline-store.read-cache-size": "104857600",
>             "yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size": "10000",
>             "yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size": "10000"
>           }
>         }
>       }
>     },
>     "blueprint_cluster_binding": {
>       "host_groups": [
>         {
>           "name": "host-group-1",
>           "hosts": [
>             {
>               "fqdn": "c6401.ambari.apache.org"
>             }
>           ]
>         },
>         {
>           "name": "host-group-2",
>           "hosts": [
>             {
>               "fqdn": "c6402.ambari.apache.org"
>             }
>           ]
>         },
>         {
>           "name": "host-group-3",
>           "hosts": [
>             {
>               "fqdn": "c6403.ambari.apache.org"
>             }
>           ]
>         }
>       ]
>     }
>   }
> }
> {code}
> The response does not affect all of the direct and indirect configs:
> {code}
> {
>   "resources": [
>     {
>       "href": "http://c6401:8080/api/v1/stacks/HDP/versions/2.2/recommendations/1",
>       "hosts": [
>         "c6402.ambari.apache.org",
>         "c6403.ambari.apache.org",
>         "c6401.ambari.apache.org"
>       ],
>       "services": [
>         "HIVE",
>         "HDFS",
>         "MAPREDUCE2",
>         "TEZ",
>         "HBASE",
>         "ZOOKEEPER",
>         "YARN",
>         "PIG"
>       ],
>       "Recommendation": {
>         "id": 1
>       },
>       "Versions": {
>         "stack_name": "HDP",
>         "stack_version": "2.2"
>       },
>       "recommendations": {
>         "blueprint": {
>           "host_groups": [
>             {
>               "components": [
>                 {
>                   "name": "DATANODE"
>                 },
>                 {
>                   "name": "APP_TIMELINE_SERVER"
>                 },
>                 {
>                   "name": "RESOURCEMANAGER"
>                 },
>                 {
>                   "name": "SECONDARY_NAMENODE"
>                 },
>                 {
>                   "name": "MAPREDUCE2_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_SERVER"
>                 },
>                 {
>                   "name": "HBASE_REGIONSERVER"
>                 },
>                 {
>                   "name": "HISTORYSERVER"
>                 },
>                 {
>                   "name": "NFS_GATEWAY"
>                 },
>                 {
>                   "name": "HIVE_METASTORE"
>                 },
>                 {
>                   "name": "TEZ_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_CLIENT"
>                 },
>                 {
>                   "name": "WEBHCAT_SERVER"
>                 },
>                 {
>                   "name": "PIG"
>                 },
>                 {
>                   "name": "NODEMANAGER"
>                 },
>                 {
>                   "name": "YARN_CLIENT"
>                 },
>                 {
>                   "name": "HDFS_CLIENT"
>                 },
>                 {
>                   "name": "HIVE_SERVER"
>                 },
>                 {
>                   "name": "MYSQL_SERVER"
>                 }
>               ],
>               "name": "host-group-2"
>             },
>             {
>               "components": [
>                 {
>                   "name": "DATANODE"
>                 },
>                 {
>                   "name": "HBASE_CLIENT"
>                 },
>                 {
>                   "name": "HIVE_CLIENT"
>                 },
>                 {
>                   "name": "MAPREDUCE2_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_SERVER"
>                 },
>                 {
>                   "name": "HBASE_REGIONSERVER"
>                 },
>                 {
>                   "name": "NFS_GATEWAY"
>                 },
>                 {
>                   "name": "TEZ_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_CLIENT"
>                 },
>                 {
>                   "name": "HCAT"
>                 },
>                 {
>                   "name": "PIG"
>                 },
>                 {
>                   "name": "NODEMANAGER"
>                 },
>                 {
>                   "name": "YARN_CLIENT"
>                 },
>                 {
>                   "name": "HDFS_CLIENT"
>                 }
>               ],
>               "name": "host-group-3"
>             },
>             {
>               "components": [
>                 {
>                   "name": "DATANODE"
>                 },
>                 {
>                   "name": "HBASE_CLIENT"
>                 },
>                 {
>                   "name": "HIVE_CLIENT"
>                 },
>                 {
>                   "name": "MAPREDUCE2_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_SERVER"
>                 },
>                 {
>                   "name": "HBASE_REGIONSERVER"
>                 },
>                 {
>                   "name": "NFS_GATEWAY"
>                 },
>                 {
>                   "name": "HBASE_MASTER"
>                 },
>                 {
>                   "name": "NAMENODE"
>                 },
>                 {
>                   "name": "TEZ_CLIENT"
>                 },
>                 {
>                   "name": "ZOOKEEPER_CLIENT"
>                 },
>                 {
>                   "name": "HCAT"
>                 },
>                 {
>                   "name": "PIG"
>                 },
>                 {
>                   "name": "NODEMANAGER"
>                 },
>                 {
>                   "name": "YARN_CLIENT"
>                 },
>                 {
>                   "name": "HDFS_CLIENT"
>                 }
>               ],
>               "name": "host-group-1"
>             }
>           ],
>           "configurations": {
>             "mapred-site": {
>               "properties": {
>                 "mapreduce.map.memory.mb": "1024",
>                 "mapreduce.reduce.memory.mb": "682",
>                 "yarn.app.mapreduce.am.command-opts": "-Xmx546m -Dhdp.version=${hdp.version}",
>                 "mapreduce.reduce.java.opts": "-Xmx546m",
>                 "yarn.app.mapreduce.am.resource.mb": "682",
>                 "mapreduce.map.java.opts": "-Xmx546m",
>                 "mapreduce.task.io.sort.mb": "273"
>               },
>               "property_attributes": {
>                 "mapreduce.reduce.memory.mb": {
>                   "max": "2048",
>                   "min": "682"
>                 },
>                 "mapreduce.map.memory.mb": {
>                   "max": "2048",
>                   "min": "682"
>                 },
>                 "yarn.app.mapreduce.am.resource.mb": {
>                   "max": "2048",
>                   "min": "682"
>                 }
>               }
>             },
>             "yarn-site": {
>               "properties": {
>                 "yarn.scheduler.minimum-allocation-mb": "1024"
>               },
>               "property_attributes": {
>                 "yarn.scheduler.minimum-allocation-mb": {
>                   "max": "2048"
>                 }
>               }
>             }
>           }
>         },
>         "blueprint_cluster_binding": {
>           "host_groups": [
>             {
>               "hosts": [
>                 {
>                   "name": "c6401.ambari.apache.org"
>                 }
>               ],
>               "name": "host-group-1"
>             },
>             {
>               "hosts": [
>                 {
>                   "name": "c6403.ambari.apache.org"
>                 }
>               ],
>               "name": "host-group-3"
>             },
>             {
>               "hosts": [
>                 {
>                   "name": "c6402.ambari.apache.org"
>                 }
>               ],
>               "name": "host-group-2"
>             }
>           ]
>         }
>       }
>     }
>   ]
> }
> {code}
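For reference, the dependency chain from the issue description amounts to simple arithmetic over the container size. The 0.8 heap factor and 0.4 sort-buffer factor below are assumptions inferred from the values in the dumps above (682 MB containers yield `-Xmx546m` and `io.sort.mb=273`); this is an illustrative sketch, not the exact stack_advisor.py logic, and `recommend_dependent_configs` is a hypothetical helper name.

```python
# Sketch of the dependent-config derivation expected of the stack advisor
# when yarn.scheduler.minimum-allocation-mb changes. The 0.8 and 0.4
# factors are assumptions inferred from the request/response dumps above,
# not the exact stack_advisor.py implementation.

def recommend_dependent_configs(container_mb):
    """Given a container size in MB, derive the dependent MapReduce configs."""
    xmx = int(round(0.8 * container_mb))  # JVM heap ~= 80% of the container
    return {
        # Directly dependent: container sizes track the YARN minimum allocation
        "mapreduce.map.memory.mb": str(container_mb),
        "mapreduce.reduce.memory.mb": str(container_mb),
        "yarn.app.mapreduce.am.resource.mb": str(container_mb),
        # Indirectly dependent: JVM opts and sort buffer follow the heap size
        "mapreduce.map.java.opts": "-Xmx%dm" % xmx,
        "mapreduce.reduce.java.opts": "-Xmx%dm" % xmx,
        "yarn.app.mapreduce.am.command-opts": "-Xmx%dm" % xmx,
        "mapreduce.task.io.sort.mb": str(int(round(0.4 * container_mb))),
    }

configs = recommend_dependent_configs(682)
print(configs["mapreduce.map.java.opts"])    # -Xmx546m
print(configs["mapreduce.task.io.sort.mb"])  # 273
```

The bug is that the recommendation response above returns only a subset of these seven properties consistently; the fix makes the advisor walk the full direct and indirect dependency set.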



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)