Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/01 09:34:52 UTC

[GitHub] zjf2012 commented on a change in pull request #23560: [SPARK-26632][Spark Core] Separate Thread Configurations of Driver and Executor

zjf2012 commented on a change in pull request #23560: [SPARK-26632][Spark Core] Separate Thread Configurations of Driver and Executor
URL: https://github.com/apache/spark/pull/23560#discussion_r252984719
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/network/netty/SparkTransportConf.scala
 ##########
 @@ -55,4 +69,27 @@ object SparkTransportConf {
       }
     })
   }
+
+  /**
+   * Separate threads configuration of driver and executor
+   * @param conf the [[SparkConf]]
+   * @param module the module name
+   * @param server if true, it's for the serverThreads. Otherwise, it's for the clientThreads.
+   * @return the configured number of threads, or -1 if not configured.
+   */
+  def getSpecificNumOfThreads(
+      conf: SparkConf,
+      module: String,
+      server: Boolean): Int = {
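
 A sketch of what the body of this helper might look like, purely for illustration (the spark.$module.io.serverThreads / spark.$module.io.clientThreads key names are assumptions, not necessarily the keys this PR introduces):

     // Illustrative sketch only; the key names are assumed, not taken from the PR.
     def getSpecificNumOfThreads(
         conf: SparkConf,
         module: String,
         server: Boolean): Int = {
       val suffix = if (server) "serverThreads" else "clientThreads"
       // Return -1 when the key is not configured, matching the scaladoc above.
       conf.getInt(s"spark.$module.io.$suffix", -1)
     }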
 
 Review comment:
   @jerryshao, before I commit the changes, I want to double-check with you about the "role" to make sure we are on the same page.
   There are two other common roles, master and worker, in addition to driver and executor. I can determine the role from the "system name" parameter passed to RpcEnv.create, which in turn invokes SparkTransportConf and reaches the code I modified. For example, we can use
   
     val workerPattern = "(sparkWorker.*)".r
     def determineRole(systemName: String): String = {
       systemName match {
         case SparkEnv.executorSystemName => "executor"
         case SparkEnv.driverSystemName => "driver"
         case "sparkMaster" => "master"
         case workerPattern(_) => "worker"
         case other => other
       }
     }
   to map system names to roles.
   Alternatively, we can set the role explicitly in the SparkConf object when instantiating the master, worker, and executor, just as the driver sets its executor id to "driver".
   Then we can use "spark.$role.$module.io.*" keys to look up the thread configurations; a rough sketch of that lookup follows below.
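   As a sketch only (the "spark.role" key and the exact "spark.$role.$module.io.*" layout are assumptions for illustration, not settled names), the lookup could be something like:

     // Sketch only: "spark.role" is a hypothetical key set when the master, worker,
     // driver or executor is instantiated; it is not an existing Spark config.
     def roleSpecificNumOfThreads(
         conf: SparkConf,
         module: String,
         server: Boolean): Int = {
       val role = conf.get("spark.role", "")
       val suffix = if (server) "serverThreads" else "clientThreads"
       // Fall back to -1 when no role is set or the key is not configured.
       if (role.isEmpty) -1 else conf.getInt(s"spark.$role.$module.io.$suffix", -1)
     }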
   Is this what you meant by the "role"?
