Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/10/18 20:01:05 UTC

[jira] [Resolved] (SPARK-10582) Using dynamic executor allocation, if the AM fails, a new AM is started, but the new AM does not allocate executors to the driver

     [ https://issues.apache.org/jira/browse/SPARK-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-10582.
-------------------------------
    Resolution: Won't Fix

> Using dynamic executor allocation, if the AM fails, a new AM is started, but the new AM does not allocate executors to the driver
> ---------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-10582
>                 URL: https://issues.apache.org/jira/browse/SPARK-10582
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.4.1, 1.5.1
>            Reporter: KaiXinXIaoLei
>
> While tasks are running, suppose the total number of executors has reached spark.dynamicAllocation.maxExecutors and the AM fails, so a new AM is started. Because the executor total tracked by ExecutorAllocationManager has not changed, the driver does not send a RequestExecutors message to the new AM to ask for executors. The new AM therefore starts from spark.dynamicAllocation.initialExecutors, and the executor totals held by the driver and the AM diverge.
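
The divergence can be illustrated with a small stand-alone simulation. The sketch below is not Spark code; DriverSide, AppMaster, setTarget and onAmRegistered are hypothetical names used only to model the delta-based RequestExecutors behaviour described above, plus one possible mitigation (re-sending the current target when a new AM registers).

    // Hypothetical AM: starts from the "initial executors" value on every (re)start.
    class AppMaster(initialExecutors: Int) {
      var target: Int = initialExecutors
      def receiveRequestExecutors(requestedTotal: Int): Unit = { target = requestedTotal }
    }

    // Hypothetical driver-side allocation manager: it only sends a request
    // when its own target changes, mirroring the behaviour described above.
    class DriverSide(initialExecutors: Int) {
      var target: Int = initialExecutors
      def setTarget(newTotal: Int, am: AppMaster): Unit = {
        if (newTotal != target) {       // no change => no message is sent
          target = newTotal
          am.receiveRequestExecutors(target)
        }
      }
      // Possible mitigation: push the current target whenever a new AM registers.
      def onAmRegistered(am: AppMaster): Unit = am.receiveRequestExecutors(target)
    }

    object AllocationSketch {
      def main(args: Array[String]): Unit = {
        val initial = 2   // stands in for spark.dynamicAllocation.initialExecutors
        val max     = 50  // stands in for spark.dynamicAllocation.maxExecutors

        val driver = new DriverSide(initial)
        var am     = new AppMaster(initial)

        driver.setTarget(max, am)       // load ramps the target up to maxExecutors
        println(s"before AM failure: driver=${driver.target}, am=${am.target}")  // 50 / 50

        am = new AppMaster(initial)     // AM fails; new AM starts at initialExecutors
        driver.setTarget(max, am)       // driver target unchanged => nothing is sent
        println(s"after AM restart:  driver=${driver.target}, am=${am.target}")  // 50 / 2

        driver.onAmRegistered(am)       // re-sync on re-registration closes the gap
        println(s"after re-sync:     driver=${driver.target}, am=${am.target}")  // 50 / 50
      }
    }

The middle println shows the mismatch reported in this issue: the driver still believes the target is maxExecutors while the freshly started AM only knows initialExecutors.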



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org