Posted to issues@spark.apache.org by "Jackey Lee (Jira)" <ji...@apache.org> on 2019/11/12 12:16:00 UTC

[jira] [Issue Comment Deleted] (SPARK-29771) Limit executor max failures before failing the application

     [ https://issues.apache.org/jira/browse/SPARK-29771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jackey Lee updated SPARK-29771:
-------------------------------
    Comment: was deleted

(was: This patch mainly targets the scenario where the executor fails to start. Executor runtime failures caused by task errors are already controlled by spark.executor.maxFailures.

Another example: adding `--conf spark.executor.extraJavaOptions=-Xmse` to spark-submit can also trigger endless executor retries.)
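For completeness, the same misconfiguration can be set programmatically. Below is a minimal Scala sketch, assuming a standard SparkConf setup; the app name is a placeholder:

    import org.apache.spark.SparkConf

    // -Xmse is not a valid JVM option, so every executor JVM exits at
    // startup and the allocator keeps requesting replacement pods.
    val conf = new SparkConf()
      .setAppName("crash-loop-repro")  // placeholder name
      .set("spark.executor.extraJavaOptions", "-Xmse")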

> Limit executor max failures before failing the application
> ----------------------------------------------------------
>
>                 Key: SPARK-29771
>                 URL: https://issues.apache.org/jira/browse/SPARK-29771
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 3.0.0
>            Reporter: Jackey Lee
>            Priority: Major
>
> ExecutorPodsAllocator does not limit the number of executor errors or deletions, which can cause executors to restart continuously without the application ever failing.
> A simple example: adding {{--conf spark.executor.extraJavaOptions=-Xmse}} to spark-submit can make executors restart thousands of times without the application failing.
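For illustration, the kind of cap the issue asks for could look roughly like the following Scala sketch. This is not the actual ExecutorPodsAllocator code; the class name, counter, and config key are all hypothetical:

    import org.apache.spark.{SparkConf, SparkException}

    // Hypothetical failure tracker: counts executor failures and aborts the
    // application once a configurable limit is exceeded, instead of letting
    // the allocator request replacement pods forever.
    class ExecutorFailureTracker(conf: SparkConf) {
      private var failedCount = 0
      // Hypothetical config key, for illustration only.
      private val maxFailures =
        conf.getInt("spark.kubernetes.executor.maxFailures", 100)

      def onExecutorFailed(): Unit = {
        failedCount += 1
        if (failedCount > maxFailures) {
          throw new SparkException(
            s"Executor failed $failedCount times (limit: $maxFailures); " +
              "failing the application.")
        }
      }
    }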



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org