Posted to issues@ambari.apache.org by "Vitaly Brodetskyi (JIRA)" <ji...@apache.org> on 2018/10/17 23:03:00 UTC

[jira] [Created] (AMBARI-24800) service adviser changes for cluster specific configs

Vitaly Brodetskyi created AMBARI-24800:
------------------------------------------

             Summary: service adviser changes for cluster specific configs
                 Key: AMBARI-24800
                 URL: https://issues.apache.org/jira/browse/AMBARI-24800
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
             Fix For: 2.8.0


Env: HDC Spark Data science (m4x4xlarge 16 CPU/64 GB)

Spark defaults aren't adjusted by Ambari; jobs start with the default 1 GB spark.executor.memory.

(Should this be 60-70% of the YARN minimum container size? spark.yarn.executor.memoryOverhead also needs to be taken into account.)
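The sizing rule above could be sketched in service-advisor-style Python as follows. The 65% factor and the max(384 MB, 10% of heap) overhead rule are illustrative assumptions, not values stated in this issue, and the function name is hypothetical:

```python
def recommend_executor_memory_mb(yarn_min_container_mb):
    """Suggest spark.executor.memory (in MB) as roughly 65% of the YARN
    minimum container size, leaving room for
    spark.yarn.executor.memoryOverhead.

    Assumption: overhead follows Spark's historical default of
    max(384 MB, 10% of executor memory)."""
    target = int(yarn_min_container_mb * 0.65)
    overhead = max(384, int(target * 0.10))
    # Shrink the heap if heap + overhead would not fit in the container.
    if target + overhead > yarn_min_container_mb:
        target = yarn_min_container_mb - overhead
    return target
```

For a 4 GB minimum container this yields 2662 MB; for a 1 GB container the overhead floor dominates and the heap is clamped to 640 MB.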

Add the following logic for "spark.shuffle.io.numConnectionsPerPeer":
spark.shuffle.io.numConnectionsPerPeer should be configured dynamically based on cluster size.
The recommendation was to set it to 10 if the number of nodes is < 10, and to remove the property (so that the default value is used) on larger clusters.
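A minimal sketch of that recommendation, using a plain dict as a simplified stand-in for Ambari's service-advisor property helpers (the function name and dict-based interface are assumptions for illustration):

```python
def update_spark_defaults(configurations, node_count):
    """Set or remove spark.shuffle.io.numConnectionsPerPeer in a
    spark-defaults-style dict based on cluster size, per the
    recommendation above."""
    key = "spark.shuffle.io.numConnectionsPerPeer"
    if node_count < 10:
        configurations[key] = "10"
    else:
        # Drop the property so Spark falls back to its built-in default.
        configurations.pop(key, None)
    return configurations
```

In a real service advisor this would be done through the advisor's put/remove property callbacks rather than by mutating a dict directly.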



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)