Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2014/12/03 11:01:12 UTC
[jira] [Commented] (SPARK-4085) Job will fail if a shuffle file that's read locally gets deleted
[ https://issues.apache.org/jira/browse/SPARK-4085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232839#comment-14232839 ]
Apache Spark commented on SPARK-4085:
-------------------------------------
User 'rxin' has created a pull request for this issue:
https://github.com/apache/spark/pull/3579
> Job will fail if a shuffle file that's read locally gets deleted
> ----------------------------------------------------------------
>
> Key: SPARK-4085
> URL: https://issues.apache.org/jira/browse/SPARK-4085
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.2.0
> Reporter: Kay Ousterhout
> Assignee: Reynold Xin
> Priority: Critical
>
> This commit: https://github.com/apache/spark/commit/665e71d14debb8a7fc1547c614867a8c3b1f806a changed the behavior of fetching local shuffle blocks: if a shuffle block is not found locally, the block is no longer marked as failed, and a fetch failed exception is not thrown. This happens because the "catch" block here is never invoked (https://github.com/apache/spark/commit/665e71d14debb8a7fc1547c614867a8c3b1f806a#diff-e6e1631fa01e17bf851f49d30d028823R202): the exception raised by getLocalFromDisk() is not thrown until next() is called on the iterator.
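> A minimal sketch of the lazy-iterator pitfall (hypothetical names, not the actual BlockManager code): the try/catch around the iterator's construction completes without error, so the missing-file exception only escapes once next() is called, outside the catch:
> {code}
> // Sketch only: an iterator that fails lazily, at consumption time.
> def getLocalFromDiskSketch(): Iterator[Int] = new Iterator[Int] {
>   def hasNext: Boolean = true
>   def next(): Int = throw new java.io.FileNotFoundException("shuffle_0_0_0")
> }
>
> val it =
>   try {
>     getLocalFromDiskSketch()  // no exception here; construction succeeds
>   } catch {
>     case e: Exception => Iterator.empty  // never reached for the lazy failure
>   }
>
> it.next()  // the FileNotFoundException is thrown here, outside the catch
> {code}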
> [~rxin] [~matei] it looks like you changed the test for this to catch the new exception that gets thrown (https://github.com/apache/spark/commit/665e71d14debb8a7fc1547c614867a8c3b1f806a#diff-9c2e1918319de967045d04caf813a7d1R93). Was that intentional? Because the new exception is a SparkException rather than a FetchFailedException, jobs with missing local shuffle data will now fail outright instead of having the map stage retried.
> This problem is reproducible with this test case:
> {code}
> test("hash shuffle manager recovers when local shuffle files get deleted") {
>   val conf = new SparkConf(false)
>   conf.set("spark.shuffle.manager", "hash")
>   sc = new SparkContext("local", "test", conf)
>   val rdd = sc.parallelize(1 to 10, 2).map((_, 1)).reduceByKey(_ + _)
>   rdd.count()
>   // Delete one of the local shuffle blocks.
>   sc.env.blockManager.diskBlockManager.getFile(new ShuffleBlockId(0, 0, 0)).delete()
>   rdd.count()
> }
> {code}
> which will fail on the second rdd.count().
> This is a regression from 1.1.
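> A hedged sketch of one way to restore the old behavior (names and the exception class are illustrative stand-ins, not the actual change in the pull request above): rethrow the missing-file error at consumption time as a fetch failure, which the scheduler can translate into a map-stage retry rather than a job failure:
> {code}
> import java.io.FileNotFoundException
>
> // Stand-in for Spark's FetchFailedException; the real one carries the
> // block manager address and shuffle/map/reduce ids.
> class FetchFailedSketch(msg: String, cause: Throwable)
>   extends Exception(msg, cause)
>
> // Wrap the lazy local read so a missing file surfaces as a fetch failure
> // when the iterator is consumed, instead of as a generic exception.
> def readLocalBlock(path: String): Iterator[String] = new Iterator[String] {
>   private lazy val lines =
>     try scala.io.Source.fromFile(path).getLines()
>     catch { case e: FileNotFoundException => throw new FetchFailedSketch(path, e) }
>   def hasNext: Boolean = lines.hasNext
>   def next(): String = lines.next()
> }
> {code}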