Posted to reviews@spark.apache.org by tnachen <gi...@git.apache.org> on 2018/10/22 19:24:12 UTC

[GitHub] spark pull request #21150: [SPARK-24075][MESOS] Option to limit number of re...

Github user tnachen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21150#discussion_r227028091
  
    --- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
    @@ -728,6 +729,28 @@ private[spark] class MesosClusterScheduler(
           state == MesosTaskState.TASK_LOST
       }
     
    +  /**
    +   * Check if the driver has exceeded the number of retries.
    +   * When "spark.mesos.driver.supervise.maxRetries" is not set,
    +   * the default behavior is to retry indefinitely.
    +   *
    +   * @param retryState Retry state of the driver
    +   * @param conf Spark configuration to check if it contains "spark.mesos.driver.supervise.maxRetries"
    +   * @return true if the driver has reached the retry limit,
    +   *         false if the driver can be retried
    +   */
    +  private[scheduler] def hasDriverExceededRetries(retryState: Option[MesosClusterRetryState],
    --- End diff ---
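
    To illustrate the semantics the docstring above describes, a minimal
    sketch of such a check follows. It is an illustration only: the method
    body and the real MesosClusterRetryState are not shown in this excerpt,
    so the "retries" field and the SparkConf parameter type are assumptions.

        import org.apache.spark.SparkConf

        // Minimal stand-in for the real MesosClusterRetryState; only the
        // retry counter matters here, and the field name is an assumption.
        case class MesosClusterRetryState(retries: Int)

        // Mirrors the documented behavior: when
        // "spark.mesos.driver.supervise.maxRetries" is unset there is no
        // limit, so the driver is always eligible for another retry.
        def hasDriverExceededRetries(
            retryState: Option[MesosClusterRetryState],
            conf: SparkConf): Boolean = {
          conf.getOption("spark.mesos.driver.supervise.maxRetries") match {
            case Some(max) => retryState.exists(_.retries >= max.toInt)
            case None => false // no cap configured: retry indefinitely
          }
        }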
    
    Please fix the param style:
    hasDriverExceededRetries(
         retryState: Option[MesosClusterRetryState],
         conf.....) 
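
    Applied to the declaration in the diff, the requested style puts each
    parameter on its own line, indented four spaces past the "def". The
    second parameter is elided above, so conf: SparkConf is an assumption
    here:

        private[scheduler] def hasDriverExceededRetries(
            retryState: Option[MesosClusterRetryState],
            conf: SparkConf): Boolean = {
          // ...
        }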


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org