Posted to reviews@spark.apache.org by tgravescs <gi...@git.apache.org> on 2018/02/07 15:36:14 UTC

[GitHub] spark pull request #19881: [SPARK-22683][CORE] Add tasksPerExecutorSlot para...

Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19881#discussion_r166655479
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -114,9 +114,13 @@ private[spark] class ExecutorAllocationManager(
       // TODO: The default value of 1 for spark.executor.cores works right now because dynamic
       // allocation is only supported for YARN and the default number of cores per executor in YARN is
       // 1, but it might need to be attained differently for different cluster managers
    -  private val tasksPerExecutor =
    +  private val taskSlotPerExecutor =
         conf.getInt("spark.executor.cores", 1) / conf.getInt("spark.task.cpus", 1)
     
    +  private val tasksPerExecutorSlot = conf.getInt("spark.dynamicAllocation.tasksPerExecutorSlot", 1)
    +
    +  private val tasksPerExecutor = tasksPerExecutorSlot * taskSlotPerExecutor
    --- End diff --
    
    Since we aren't using the concept of slots here, I think we should leave tasksPerExecutor alone and put this functionality into maxNumExecutorsNeeded() instead.
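    
    To make the suggestion concrete, here is a minimal, self-contained Scala sketch (not the actual Spark code: the method signature, parameter names, and ceiling-division rounding are assumptions for illustration) of how the multiplier could be applied inside maxNumExecutorsNeeded() while tasksPerExecutor keeps its original meaning of spark.executor.cores / spark.task.cpus:
    
        object MaxExecutorsSketch {
          def maxNumExecutorsNeeded(
              pendingTasks: Int,
              runningTasks: Int,
              executorCores: Int = 1,        // spark.executor.cores
              taskCpus: Int = 1,             // spark.task.cpus
              tasksPerExecutorSlot: Int = 1  // proposed spark.dynamicAllocation.tasksPerExecutorSlot
          ): Int = {
            // Unchanged: how many tasks can run concurrently on one executor.
            val tasksPerExecutor = executorCores / taskCpus
            // Apply the multiplier here instead of redefining tasksPerExecutor itself.
            val capacityPerExecutor = tasksPerExecutor * tasksPerExecutorSlot
            val totalTasks = pendingTasks + runningTasks
            (totalTasks + capacityPerExecutor - 1) / capacityPerExecutor  // ceiling division
          }
    
          def main(args: Array[String]): Unit = {
            // 100 tasks, 4 cores per executor, 1 cpu per task, 2 tasks per slot:
            // capacity is 8, so 13 executors are requested instead of 25.
            println(maxNumExecutorsNeeded(100, 0, executorCores = 4, tasksPerExecutorSlot = 2))
          }
        }
    
    With tasksPerExecutorSlot left at its default of 1, the sketch degenerates to the current behavior, which is the point of the suggestion: the new config only scales the executor-count estimate and never changes what tasksPerExecutor means elsewhere in the class.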


---
