Posted to issues@spark.apache.org by "Darcy Shen (JIRA)" <ji...@apache.org> on 2018/04/13 02:16:00 UTC

[jira] [Created] (SPARK-23974) Does not allocate more containers as expected in dynamic allocation

Darcy Shen created SPARK-23974:
----------------------------------

             Summary: Does not allocate more containers as expected in dynamic allocation
                 Key: SPARK-23974
                 URL: https://issues.apache.org/jira/browse/SPARK-23974
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.1.1
            Reporter: Darcy Shen


Using YARN with dynamic allocation enabled, Spark does not allocate more containers even when the current number of containers (executors) is less than the maximum executor count.

For example, we have only 7 executors working even though our cluster is not busy; I have set

{{spark.dynamicAllocation.maxExecutors = 600}}

and the current jobs of the context still run slowly.
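For reference, a minimal sketch of the configuration involved (the app name and the minExecutors value are illustrative assumptions, not taken from the original report):

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the dynamic allocation settings described in this report.
val conf = new SparkConf()
  .setAppName("dynamic-allocation-example")           // hypothetical app name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")   // illustrative value
  .set("spark.dynamicAllocation.maxExecutors", "600")
  // On YARN (Spark 2.1.x), dynamic allocation also requires the external
  // shuffle service to be running on each NodeManager.
  .set("spark.shuffle.service.enabled", "true")

val sc = new SparkContext(conf)
{code}

With these settings, the allocation manager should keep requesting executors up to the 600 cap while tasks are pending; the behavior reported here is that the executor count stalls at 7 instead.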


