Posted to commits@spark.apache.org by ma...@apache.org on 2014/02/25 01:59:05 UTC

[3/3] git commit: Merge pull request #641 from mateiz/spark-1124-master

SPARK-1124: Fix infinite retries of reduce stage when a map stage failed

In the previous code, if a map stage failed and you then ran reduce stages on it repeatedly, the first reduce stage would fail correctly, but the later ones would mistakenly believe that all map outputs were available and fail infinitely with fetch failures from "null". See https://spark-project.atlassian.net/browse/SPARK-1124 for an example.
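The failure mode can be sketched with a small toy model (all names here are hypothetical; this is not Spark's actual DAGScheduler code): if a failed map stage stays registered as computed, later reduce attempts skip re-running it and instead fetch from null output locations forever.

```python
# Toy model of the SPARK-1124 symptom. A map stage's output locations are
# cached by stage id; the bug is modeled as caching the stage even when a
# map task failed, the fix as evicting it so retries re-run the map tasks.

map_output_cache = {}  # stage id -> list of output locations (None = missing)


def run_map_stage(stage_id, tasks, clear_on_failure):
    """Run all map tasks and record where their outputs landed."""
    locs = [None] * len(tasks)
    try:
        for i, task in enumerate(tasks):
            locs[i] = task()
    except RuntimeError:
        if clear_on_failure:
            # The essence of the fix: a failed stage must not look computed.
            map_output_cache.pop(stage_id, None)
        else:
            # The bug: the stage is registered with null locations.
            map_output_cache[stage_id] = locs
        raise
    map_output_cache[stage_id] = locs


def run_reduce_stage(stage_id, tasks, clear_on_failure=True):
    """One reduce attempt: (re)run the map stage only if it looks missing."""
    if stage_id not in map_output_cache:
        run_map_stage(stage_id, tasks, clear_on_failure)
    locs = map_output_cache[stage_id]
    if None in locs:
        raise RuntimeError("fetch failure from null")
    return locs
```

With `clear_on_failure=False`, the first attempt reports the map task's own error, but every retry hits "fetch failure from null" because the stage wrongly appears available; with `clear_on_failure=True`, each retry re-runs (and correctly re-fails) the map stage itself.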

This PR also makes small code-style cleanups in places that had a variable named "s" and some convoluted map manipulation.


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/d8d190ef
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/d8d190ef
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/d8d190ef

Branch: refs/heads/master
Commit: d8d190efd2d08c3894b20f6814b10f9ca2157309
Parents: c0ef3af 0187cef
Author: Matei Zaharia <ma...@databricks.com>
Authored: Mon Feb 24 16:58:57 2014 -0800
Committer: Matei Zaharia <ma...@databricks.com>
Committed: Mon Feb 24 16:58:57 2014 -0800

----------------------------------------------------------------------
 .../apache/spark/scheduler/DAGScheduler.scala   | 31 +++++++++++---------
 .../scala/org/apache/spark/FailureSuite.scala   | 13 ++++++++
 2 files changed, 30 insertions(+), 14 deletions(-)
----------------------------------------------------------------------