Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/11/08 03:30:56 UTC

[GitHub] [spark] stczwd opened a new pull request #26433: [SPARK-29771] Add configure to limit executor failures

URL: https://github.com/apache/spark/pull/26433
 
 
    ### What changes were proposed in this pull request?
   At present, K8S scheduling does not limit the number of executor failures, which can cause executors to be restarted continuously without the application ever failing.
   A simple example: adding `--conf spark.executor.extraJavaOptions=-Xmse` (an invalid JVM option) after spark-submit makes the executor restart thousands of times without the application failing.
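   The intended behavior can be sketched as a simple failure counter: once executor failures exceed a configured limit, the application should be failed instead of retrying forever. This is an illustrative sketch only, not the PR's actual implementation; the class name `ExecutorFailureTracker` and the `max_failures` parameter are hypothetical stand-ins for whatever configuration the patch introduces.

```python
class ExecutorFailureTracker:
    """Illustrative sketch (hypothetical, not the PR's code): track executor
    failures and signal when the application should stop retrying."""

    def __init__(self, max_failures):
        # max_failures is a stand-in for the proposed configuration value
        self.max_failures = max_failures
        self.failures = 0

    def record_failure(self):
        """Record one executor failure.

        Returns True if a retry is still allowed, False once the limit is
        exceeded (i.e. the application should be failed).
        """
        self.failures += 1
        return self.failures <= self.max_failures


# With a limit of 3, the first three failures allow retries; after that
# the scheduler should fail the application instead of restarting the pod.
tracker = ExecutorFailureTracker(max_failures=3)
results = [tracker.record_failure() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

   Without such a limit (the current behavior this PR addresses), `record_failure` would effectively always return True, and a misconfigured executor keeps restarting indefinitely.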
   
   ### Does this PR introduce any user-facing change?
   No
   
   ### How was this patch tested?
   Ran existing suites and tests that exercise executor errors and pod deletion.
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org