Posted to commits@spark.apache.org by rx...@apache.org on 2013/12/16 23:16:08 UTC

[3/3] git commit: Merge pull request #245 from gregakespret/task-maxfailures-fix

Merge pull request #245 from gregakespret/task-maxfailures-fix

Fix for spark.task.maxFailures not being enforced correctly.

Docs at http://spark.incubator.apache.org/docs/latest/configuration.html say:

```
spark.task.maxFailures

Number of individual task failures before giving up on the job. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
```
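
For context, a common way to set this property at the time was through a Java system property read when the SparkContext is created; the snippet below is only an illustration (the value 1 matches the example in the description, not anything taken from the commit):

```
// Illustration only: setting spark.task.maxFailures via a Java system property,
// which the SparkContext of that era read at construction time.
System.setProperty("spark.task.maxFailures", "1")
// With a value of 1, allowed retries = 1 - 1 = 0, so the job should abort
// on the very first task failure.
```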

The previous implementation worked incorrectly: when `spark.task.maxFailures` was set to 1, for example, the job was aborted only after the second task failure rather than after the first.
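
The off-by-one is easiest to see as a failure-count check. The sketch below is hypothetical (the identifiers and the abort message are made up for illustration and do not come from ClusterTaskSetManager.scala); it simply contrasts a strict `>` comparison with the `>=` comparison that matches the documented semantics:

```
// Hypothetical sketch of the failure-count check; identifiers are illustrative
// and are not the actual Spark scheduler code.
object MaxFailuresSketch {
  val maxTaskFailures = 1   // value of spark.task.maxFailures
  var numFailures = 0       // failures recorded so far for one task

  def onTaskFailure(): Unit = {
    numFailures += 1
    // Buggy behaviour: abort only once the count EXCEEDS the limit,
    // i.e. after the second failure when maxTaskFailures = 1:
    //   if (numFailures > maxTaskFailures) abort()
    // Behaviour matching the docs: abort as soon as the limit is reached,
    // i.e. after the first failure when maxTaskFailures = 1:
    if (numFailures >= maxTaskFailures) abort()
  }

  def abort(): Unit =
    println(s"Task failed $numFailures times; aborting job (spark.task.maxFailures = $maxTaskFailures)")

  def main(args: Array[String]): Unit = onTaskFailure()
}
```

With maxTaskFailures = 1, the `>=` check aborts on the first failure, which is the behaviour the documentation describes.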


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/883e034a
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/883e034a
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/883e034a

Branch: refs/heads/master
Commit: 883e034aebe61a25631497b4e299a8f2e3389b00
Parents: a51f340 558af87
Author: Reynold Xin <rx...@apache.org>
Authored: Mon Dec 16 14:16:02 2013 -0800
Committer: Reynold Xin <rx...@apache.org>
Committed: Mon Dec 16 14:16:02 2013 -0800

----------------------------------------------------------------------
 .../apache/spark/scheduler/cluster/ClusterTaskSetManager.scala | 6 +++---
 core/src/test/scala/org/apache/spark/DistributedSuite.scala    | 2 +-
 .../spark/scheduler/cluster/ClusterTaskSetManagerSuite.scala   | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/883e034a/core/src/test/scala/org/apache/spark/DistributedSuite.scala
----------------------------------------------------------------------