Posted to reviews@spark.apache.org by chenqin <gi...@git.apache.org> on 2018/08/02 00:31:35 UTC

[GitHub] spark pull request #21494: [WIP][SPARK-24375][Prototype] Support barrier sch...

Github user chenqin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21494#discussion_r207071402
  
    --- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
    @@ -359,17 +368,42 @@ private[spark] class TaskSchedulerImpl(
         // of locality levels so that it gets a chance to launch local tasks on all of them.
         // NOTE: the preferredLocality order: PROCESS_LOCAL, NODE_LOCAL, NO_PREF, RACK_LOCAL, ANY
         for (taskSet <- sortedTaskSets) {
    -      var launchedAnyTask = false
    -      var launchedTaskAtCurrentMaxLocality = false
    -      for (currentMaxLocality <- taskSet.myLocalityLevels) {
    -        do {
    -          launchedTaskAtCurrentMaxLocality = resourceOfferSingleTaskSet(
    -            taskSet, currentMaxLocality, shuffledOffers, availableCpus, tasks)
    -          launchedAnyTask |= launchedTaskAtCurrentMaxLocality
    -        } while (launchedTaskAtCurrentMaxLocality)
    -      }
    -      if (!launchedAnyTask) {
    -        taskSet.abortIfCompletelyBlacklisted(hostToExecutors)
    +      // Skip the barrier taskSet if the available slots are less than the number of pending tasks.
    +      if (taskSet.isBarrier && availableSlots < taskSet.numTasks) {
    +        // Skip the launch process.
    --- End diff --
    
    Is there a way to propagate this information up to the scheduler and resource manager layer for preemptive scheduling, rather than only skipping the launch?
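    
    A hedged sketch of what such a hint could look like at the driver level, using only the existing @DeveloperApi SparkContext.requestExecutors call; this is not part of the PR, and the names barrierNumTasks, availableSlots and coresPerExecutor are illustrative assumptions, not Spark internals:
    
    import org.apache.spark.{SparkConf, SparkContext}
    
    // Sketch only: if a barrier stage needs `barrierNumTasks` simultaneous slots
    // but fewer are free, the driver could ask the cluster manager for more
    // executors instead of silently skipping the stage.
    object BarrierSlotHint {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("barrier-slot-hint")
        val sc = new SparkContext(conf)
    
        val barrierNumTasks  = 8   // tasks the barrier stage must launch together (assumed)
        val availableSlots   = 3   // free slots currently offered by executors (assumed)
        val coresPerExecutor = 2   // assumed executor size
    
        if (availableSlots < barrierNumTasks) {
          val missingSlots = barrierNumTasks - availableSlots
          val extraExecutors = math.ceil(missingSlots.toDouble / coresPerExecutor).toInt
          // Only effective on cluster managers that support dynamic executor
          // requests (e.g. YARN, Kubernetes); returns false otherwise.
          sc.requestExecutors(extraExecutors)
        }
    
        sc.stop()
      }
    }
    
    Whether the TaskSchedulerImpl itself should drive this (e.g. through the backend) or leave it to dynamic allocation is exactly the open question above.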


---
