Posted to issues@spark.apache.org by "Kay Ousterhout (JIRA)" <ji...@apache.org> on 2017/03/17 23:51:41 UTC

[jira] [Commented] (SPARK-19755) Blacklist is always active for MesosCoarseGrainedSchedulerBackend. As a result, the scheduler cannot create an executor after some time.

    [ https://issues.apache.org/jira/browse/SPARK-19755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15930909#comment-15930909 ] 

Kay Ousterhout commented on SPARK-19755:
----------------------------------------

I'm closing this because the configs you're proposing to add already exist: spark.blacklist.enabled already exists to turn off all blacklisting (this is false by default, so the fact that you're seeing blacklisting behavior means that your configuration enables blacklisting), and spark.blacklist.maxFailedTaskPerExecutor is the other thing you proposed adding.  All of the blacklisting parameters are listed here: https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/internal/config/package.scala#L101
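
For illustration, a minimal sketch of setting those parameters through SparkConf. The exact key names and values below are assumptions on my part - verify them against the package.scala file linked above for your Spark version:

    import org.apache.spark.SparkConf
    import org.apache.spark.SparkContext

    // Sketch only: enable blacklisting (off by default) and tighten the
    // per-executor task-attempt limit. The second key name is assumed;
    // check internal/config/package.scala for the exact name and default.
    val conf = new SparkConf()
      .setAppName("blacklist-config-sketch")
      .set("spark.blacklist.enabled", "true")
      .set("spark.blacklist.task.maxTaskAttemptsPerExecutor", "2")

    val sc = new SparkContext(conf)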

Feel free to re-open this if I've misunderstood and the existing configs don't address the issues you're seeing!

> Blacklist is always active for MesosCoarseGrainedSchedulerBackend. As a result, the scheduler cannot create an executor after some time.
> -----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19755
>                 URL: https://issues.apache.org/jira/browse/SPARK-19755
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos, Scheduler
>    Affects Versions: 2.1.0
>         Environment: mesos, marathon, docker - driver and executors are dockerized.
>            Reporter: Timur Abakumov
>
> When a task fails for some reason, MesosCoarseGrainedSchedulerBackend increases the failure counter for the slave where that task was running.
> When the counter is >= 2 (MAX_SLAVE_FAILURES), that Mesos slave is excluded.
> Over time the scheduler cannot create a new executor - every slave is in the blacklist. A task failure is not necessarily related to host health, especially for long-running streaming apps.
> If accepted as a bug: a possible solution is to use spark.blacklist.enabled to make that functionality optional and, if it makes sense, to make MAX_SLAVE_FAILURES configurable as well.
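
A minimal sketch of the per-slave failure counting described in the report above. MAX_SLAVE_FAILURES matches the constant named in the report; the surrounding structure is an assumption for illustration, not the actual MesosCoarseGrainedSchedulerBackend source:

    import scala.collection.mutable

    // Assumed simplification of the reported behavior: every task failure
    // bumps a per-slave counter, and a slave whose counter reaches
    // MAX_SLAVE_FAILURES (hard-coded to 2) no longer gets executors,
    // whether or not the host itself is unhealthy.
    object SlaveFailureSketch {
      val MAX_SLAVE_FAILURES = 2

      private val failuresBySlave = mutable.Map.empty[String, Int].withDefaultValue(0)

      def onTaskFailed(slaveId: String): Unit =
        failuresBySlave(slaveId) += 1

      def isBlacklisted(slaveId: String): Boolean =
        failuresBySlave(slaveId) >= MAX_SLAVE_FAILURES
    }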



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org