Posted to issues@ambari.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/10/17 23:39:00 UTC
[jira] [Updated] (AMBARI-24800) service adviser changes for cluster specific configs
[ https://issues.apache.org/jira/browse/AMBARI-24800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated AMBARI-24800:
------------------------------------
Labels: pull-request-available (was: )
> service adviser changes for cluster specific configs
> ----------------------------------------------------
>
> Key: AMBARI-24800
> URL: https://issues.apache.org/jira/browse/AMBARI-24800
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Reporter: Vitaly Brodetskyi
> Assignee: Vitaly Brodetskyi
> Priority: Critical
> Labels: pull-request-available
> Fix For: 2.8.0
>
>
> Env: HDC Spark Data Science (m4.4xlarge, 16 CPU / 64 GB)
> Spark defaults aren't changed in Ambari; it loads with spark.executor.memory set to 1 GB.
> (Should this instead be 60-70% of the YARN minimum container size? spark.yarn.executor.memoryOverhead also needs to be taken into account.)
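The sizing rule above could be sketched roughly as follows. This is a minimal illustrative sketch, not the actual Ambari service advisor code; the function name, the 65% midpoint, and the overhead formula (max(384 MB, 10% of executor memory), Spark's documented default) are assumptions.

```python
# Hypothetical sketch of the executor-memory recommendation; names are
# illustrative, not the real Ambari service advisor API.

def recommend_executor_memory(yarn_min_container_mb):
    """Recommend spark.executor.memory as ~65% of the YARN minimum
    container size (midpoint of the 60-70% range), leaving headroom for
    spark.yarn.executor.memoryOverhead, which Spark defaults to
    max(384 MB, 10% of executor memory)."""
    executor_mb = int(yarn_min_container_mb * 0.65)
    overhead_mb = max(384, int(executor_mb * 0.10))
    # Shrink the executor if executor + overhead would exceed the container.
    if executor_mb + overhead_mb > yarn_min_container_mb:
        executor_mb = yarn_min_container_mb - overhead_mb
    return executor_mb, overhead_mb
```

For example, with a 4 GB minimum container this yields roughly a 2.6 GB executor plus the 384 MB floor of overhead, well within the container.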
> Add logic for "spark.shuffle.io.numConnectionsPerPeer":
> spark.shuffle.io.numConnectionsPerPeer should be configured dynamically based on cluster size.
> The recommendation was to set it to 10 when the cluster has fewer than 10 nodes, and to remove the property (so the default value is used) on larger clusters.
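The cluster-size rule could be expressed as a small helper like the one below. This is an assumed sketch (the function name and the None-means-remove convention are illustrative), not the actual advisor implementation; Spark's own default for this property is 1.

```python
# Hypothetical sketch of the cluster-size rule; not the real Ambari
# service advisor API.

def recommend_num_connections_per_peer(node_count):
    """Return 10 for spark.shuffle.io.numConnectionsPerPeer on clusters
    with fewer than 10 nodes; return None on larger clusters, meaning the
    property should be removed so Spark's default (1) applies."""
    return 10 if node_count < 10 else None
```

A caller in the advisor would then either set the property to the returned value or delete it from the recommended configuration when None is returned.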
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)