Posted to reviews@spark.apache.org by Ngone51 <gi...@git.apache.org> on 2018/09/03 15:19:27 UTC
[GitHub] spark pull request #22288: [SPARK-22148][Scheduler] Acquire new executors to...
Github user Ngone51 commented on a diff in the pull request:
https://github.com/apache/spark/pull/22288#discussion_r214719743
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -414,9 +425,54 @@ private[spark] class TaskSchedulerImpl(
           launchedAnyTask |= launchedTaskAtCurrentMaxLocality
         } while (launchedTaskAtCurrentMaxLocality)
       }
+
       if (!launchedAnyTask) {
-        taskSet.abortIfCompletelyBlacklisted(hostToExecutors)
-      }
+        taskSet.getCompletelyBlacklistedTaskIfAny(hostToExecutors) match {
+          case taskIndex: Some[Int] => // Returns the taskIndex which was unschedulable
+            if (conf.getBoolean("spark.dynamicAllocation.enabled", false)) {
+              // If the taskSet is unschedulable we kill the existing blacklisted executor/s and
+              // kick off an abortTimer which after waiting will abort the taskSet if we were
+              // unable to get new executors and couldn't schedule a task from the taskSet.
+              // Note: We keep a track of schedulability on a per taskSet basis rather than on a
+              // per task basis.
+              if (!unschedulableTaskSetToExpiryTime.contains(taskSet)) {
+                hostToExecutors.valuesIterator.foreach(executors => executors.foreach({
+                  executor =>
+                    logDebug("Killing executor because of task unschedulability: " + executor)
+                    blacklistTrackerOpt.foreach(blt => blt.killBlacklistedExecutor(executor))
--- End diff ---
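
The inline comment above describes the intended mechanism: record an expiry time for the unschedulable taskSet, kill blacklisted executors, and let a timer abort the taskSet if no task from it gets scheduled before the timeout. A minimal, self-contained sketch of that abort-timer pattern (all names and the timeout handling here are hypothetical, not the PR's actual code):

```scala
import java.util.{Timer, TimerTask}
import scala.collection.mutable

object AbortTimerSketch {
  // Daemon timer thread, mirroring the `abortTimer` idea from the diff.
  private val abortTimer = new Timer("task-abort-timer", true)
  // When each unschedulable task set should be aborted, keyed by task set id.
  private val expiryByTaskSet = mutable.HashMap.empty[String, Long]

  // Called when no task in the set could be scheduled on any executor.
  def onUnschedulable(taskSetId: String, timeoutMs: Long): Unit = {
    expiryByTaskSet(taskSetId) = System.currentTimeMillis() + timeoutMs
    abortTimer.schedule(new TimerTask {
      override def run(): Unit = {
        // Abort only if the set is still marked unschedulable and its expiry
        // has passed, i.e. no new executor made it schedulable in the meantime.
        if (expiryByTaskSet.get(taskSetId).exists(_ <= System.currentTimeMillis())) {
          expiryByTaskSet.remove(taskSetId)
          println(s"Aborting task set $taskSetId: unschedulable for ${timeoutMs}ms")
        }
      }
    }, timeoutMs)
  }

  // Called when a task from the set finally gets scheduled: cancel the abort.
  def onTaskScheduled(taskSetId: String): Unit = expiryByTaskSet.remove(taskSetId)
}
```

The key property of the pattern is that `onTaskScheduled` clears the expiry entry, so an already-queued timer task becomes a no-op once the taskSet recovers.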
Seriously? You killed all the executors? What if other taskSets' tasks are running on them?
BTW, if you want to refresh executors, you also have to enable `spark.blacklist.killBlacklistedExecutors`.
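
To make the concern concrete, here is a hedged sketch (hypothetical types and helper names, not Spark's actual internals) of the narrower alternative: kill only the executors that this task set has actually blacklisted, so executors running other taskSets' work are left alone. And as noted above, the kill call only takes effect when `spark.blacklist.killBlacklistedExecutors` is enabled.

```scala
import scala.collection.mutable

// Hypothetical stand-in for Spark's BlacklistTracker kill hook.
trait BlacklistTracker {
  def killBlacklistedExecutor(executorId: String): Unit
}

object ScopedKillSketch {
  // Kill only executors that are blacklisted for the stuck task set,
  // instead of every executor on every host.
  def killTaskSetBlacklistedExecutors(
      blacklistedExecs: Set[String],              // execs this task set blacklisted
      hostToExecutors: mutable.Map[String, mutable.Set[String]],
      blacklistTrackerOpt: Option[BlacklistTracker]): Unit = {
    for {
      execs <- hostToExecutors.valuesIterator
      exec <- execs
      if blacklistedExecs.contains(exec)          // leave healthy executors alone
    } {
      blacklistTrackerOpt.foreach(_.killBlacklistedExecutor(exec))
    }
  }
}
```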