Posted to issues@spark.apache.org by "Fei Niu (JIRA)" <ji...@apache.org> on 2018/04/01 03:01:00 UTC

[jira] [Commented] (SPARK-10781) Allow certain number of failed tasks and allow job to succeed

    [ https://issues.apache.org/jira/browse/SPARK-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16421558#comment-16421558 ] 

Fei Niu commented on SPARK-10781:
---------------------------------

This could be a very useful feature. For example, if a SequenceFile itself is corrupt, there is currently no way to catch the resulting exception and move on, which makes some data sets impossible to process.
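A minimal sketch of the situation described above, using the Spark RDD API. The path, key/value types, and the parse helper are illustrative placeholders, not part of the issue. The point is that per-record exceptions can already be swallowed in user code, but a corrupt block in the SequenceFile throws inside the Hadoop RecordReader, before any user code runs, so the task fails and (after spark.task.maxFailures attempts) the job fails with it.

    import org.apache.spark.{SparkConf, SparkContext}

    object SkipBadRecordsSketch {
      // Hypothetical decoder standing in for whatever per-record parsing the job does.
      private def parse(value: String): String = value.trim

      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("skip-bad-records-sketch"))

        // Path and key/value types are placeholders.
        val raw = sc.sequenceFile[String, String]("hdfs:///data/events")

        // Record-level problems can already be skipped in user code ...
        val parsed = raw.flatMap { case (_, value) =>
          try Some(parse(value))
          catch { case _: Exception => None }
        }

        // ... but a corrupt block in the SequenceFile makes the Hadoop RecordReader
        // throw before this closure ever sees the record, so the exception cannot be
        // caught here; the task fails, and enough failed attempts fail the whole job.
        println(parsed.count())

        sc.stop()
      }
    }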

> Allow certain number of failed tasks and allow job to succeed
> -------------------------------------------------------------
>
>                 Key: SPARK-10781
>                 URL: https://issues.apache.org/jira/browse/SPARK-10781
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Thomas Graves
>            Priority: Major
>
> MapReduce has the configs mapreduce.map.failures.maxpercent and mapreduce.reduce.failures.maxpercent, which allow a certain percentage of tasks to fail while the job still succeeds.
> This could also be a useful feature in Spark when a job does not need every task to be successful.
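
For readers unfamiliar with the MapReduce side, here is a minimal sketch of how the two settings named in the quoted description are applied in a Hadoop job driver (Scala, with an illustrative 5% threshold; the job name is a placeholder). Spark offers no analogous setting, which is what this issue proposes.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.mapreduce.Job

    object FailurePercentSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()

        // Allow up to 5% of map tasks and 5% of reduce tasks to fail permanently
        // without failing the job as a whole (5 is an illustrative value).
        conf.setInt("mapreduce.map.failures.maxpercent", 5)
        conf.setInt("mapreduce.reduce.failures.maxpercent", 5)

        val job = Job.getInstance(conf, "tolerate-partial-failure")
        // ... mapper, reducer, and input/output formats configured as usual ...
        job.waitForCompletion(true)
      }
    }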


