Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2015/06/30 03:10:04 UTC

[jira] [Updated] (SPARK-8119) HeartbeatReceiver should not call sc.killExecutor

     [ https://issues.apache.org/jira/browse/SPARK-8119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or updated SPARK-8119:
-----------------------------
    Summary: HeartbeatReceiver should not call sc.killExecutor  (was: Spark will set total executor when some executors fail.)

> HeartbeatReceiver should not call sc.killExecutor
> -------------------------------------------------
>
>                 Key: SPARK-8119
>                 URL: https://issues.apache.org/jira/browse/SPARK-8119
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler
>    Affects Versions: 1.4.0
>            Reporter: SaintBacchus
>
> When dynamic allocation wants to kill some executors, it lowers the requested total number of executors.
> But even when dynamic allocation is disabled, Spark still adjusts this requested total.
> This leads to the following problem: when an executor dies, Spark never requests a replacement, and the application is left running with fewer executors.
> === EDIT by andrewor14 ===
> The issue is that the AM forgets about the original number of executors it wants after calling sc.killExecutor. Even if dynamic allocation is not enabled, this is still possible because of heartbeat timeouts.
> I think the problem is that sc.killExecutor is used incorrectly in HeartbeatReceiver. The method is intended to permanently adjust the number of executors the application will get. In HeartbeatReceiver, however, it is used as a best-effort mechanism to ensure that a timed-out executor is dead.
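
For illustration, here is a minimal, self-contained Scala sketch of the two kill semantics described above. It is not Spark's actual code: the class and method names (TargetTrackingAllocator, killExecutorAndLowerTarget, killExecutorBestEffort, replenish) are hypothetical and only model the difference between a kill that also lowers the desired executor total (the sc.killExecutor semantics described in the edit) and a best-effort kill that leaves the total untouched, which is what HeartbeatReceiver needs for a timed-out executor.

// Hypothetical model, NOT Spark internals: it only illustrates the two semantics.
object KillSemanticsSketch {

  class TargetTrackingAllocator(initialTarget: Int) {
    private var target = initialTarget                              // executors the app wants
    private var live = (1 to initialTarget).map(i => s"exec-$i").toSet

    // Models the semantics attributed to sc.killExecutor above: the kill also
    // becomes the new desired total, so the cluster manager never replaces it.
    def killExecutorAndLowerTarget(id: String): Unit = {
      live -= id
      target = live.size
    }

    // Models the best-effort semantics a heartbeat timeout needs: make sure the
    // executor is gone, but keep the original target so a replacement can come up.
    def killExecutorBestEffort(id: String): Unit = {
      live -= id
      // target is intentionally left unchanged
    }

    // Models the cluster manager topping the application back up to its target.
    def replenish(): Unit = {
      var i = live.size
      while (live.size < target) { i += 1; live += s"exec-new-$i" }
    }

    def summary: String = s"target=$target live=${live.size}"
  }

  def main(args: Array[String]): Unit = {
    val a = new TargetTrackingAllocator(initialTarget = 3)
    a.killExecutorAndLowerTarget("exec-1")   // timeout handled via target-lowering kill
    a.replenish()
    println(a.summary)                       // target=2 live=2 -> executor never comes back

    val b = new TargetTrackingAllocator(initialTarget = 3)
    b.killExecutorBestEffort("exec-1")       // best-effort kill, target preserved
    b.replenish()
    println(b.summary)                       // target=3 live=3 -> replacement allocated
  }
}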



