Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2015/05/24 06:49:17 UTC
[jira] [Commented] (SPARK-4723) To abort the stages which have attempted some times
[ https://issues.apache.org/jira/browse/SPARK-4723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557630#comment-14557630 ]
Josh Rosen commented on SPARK-4723:
-----------------------------------
I think the original PR here was closed because reviewers felt that "for some reason" was too vague. I've since identified scenarios where we'll retry a stage an infinite number of times if a bug in the shuffle write path on the sending side always leads to fetch failures. In a nutshell, bugs in Spark can trigger infinite retry behavior; you can reproduce the infinite-retry symptom by manually introducing such bugs into the block-serving code. Therefore, this might still be worth fixing: aborting after a bounded number of attempts _would_ mask any such bugs, but it would also contribute to higher overall resiliency in the system.
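To illustrate the safeguard being proposed (not Spark's actual DAGScheduler code), here is a minimal Scala sketch of the idea: track how many times each stage has failed, and signal an abort once a configurable threshold is reached instead of resubmitting indefinitely. The class and method names here are hypothetical.

```scala
import scala.collection.mutable

// Hedged sketch of the proposed behavior: cap stage retries at a
// threshold. All names below are illustrative, not Spark internals.
class StageAttemptTracker(maxAttempts: Int) {
  // Failed-attempt count per stage id, defaulting to 0.
  private val attempts = mutable.Map.empty[Int, Int].withDefaultValue(0)

  /** Record one failed attempt for `stageId`. Returns true when the
    * stage has now failed `maxAttempts` times and should be aborted
    * rather than resubmitted. */
  def recordFailureAndCheckAbort(stageId: Int): Boolean = {
    attempts(stageId) += 1
    attempts(stageId) >= maxAttempts
  }
}
```

With a threshold of 4, the first three fetch-failure-driven resubmissions of a stage would proceed as today, and the fourth would abort the stage (failing the job with a clear error) instead of looping forever.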
> To abort the stages which have attempted some times
> ---------------------------------------------------
>
> Key: SPARK-4723
> URL: https://issues.apache.org/jira/browse/SPARK-4723
> Project: Spark
> Issue Type: Improvement
> Components: Scheduler
> Reporter: YanTang Zhai
> Priority: Minor
>
> For some reason, some stages may be attempted many times. A threshold could be added, and stages that have been attempted more times than the threshold could be aborted.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org