Posted to reviews@livy.apache.org by GitBox <gi...@apache.org> on 2019/05/23 22:52:54 UTC

[GitHub] [incubator-livy] igorcalabria commented on a change in pull request #167: [LIVY-588][WIP]: Full support for Spark on Kubernetes

URL: https://github.com/apache/incubator-livy/pull/167#discussion_r287162617
 
 

 ##########
 File path: server/src/main/scala/org/apache/livy/server/interactive/InteractiveSession.scala
 ##########
 @@ -86,14 +86,25 @@ object InteractiveSession extends Logging {
       val builderProperties = prepareBuilderProp(conf, request.kind, livyConf)
 
       val userOpts: Map[String, Option[String]] = Map(
-        "spark.driver.cores" -> request.driverCores.map(_.toString),
         SparkLauncher.DRIVER_MEMORY -> request.driverMemory.map(_.toString),
-        SparkLauncher.EXECUTOR_CORES -> request.executorCores.map(_.toString),
         SparkLauncher.EXECUTOR_MEMORY -> request.executorMemory.map(_.toString),
         "spark.executor.instances" -> request.numExecutors.map(_.toString),
-        "spark.app.name" -> request.name.map(_.toString),
-        "spark.yarn.queue" -> request.queue
-      )
+        "spark.app.name" -> request.name.map(_.toString)
+      ) ++ {
+        if (livyConf.isRunningOnKubernetes()) {
+          Map(
+            "spark.kubernetes.executor.request.cores" -> request.executorCores.map(_.toString),
 
 Review comment:
   Is this always desirable? Not every job can saturate the CPU with a 1:1 ratio of task slots to CPUs. Maybe we should not override `spark.kubernetes.executor.request.cores` if it's already set. For example, it could be set in `spark-defaults`, but a particular job works well with more executor cores than CPUs, so the user sets `executorCores` on the Livy request. To their surprise, the job is not schedulable because no node satisfies the number of cores the pods now request.
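   
   Something like this guard is what I have in mind. A minimal sketch only, with `sessionConf` and `executorCores` standing in for the real request fields (the actual Livy names may differ):
   
   ```scala
   // Hypothetical sketch: derive the k8s CPU request from executorCores
   // only when the user has not already set it explicitly (e.g. via
   // spark-defaults or the session's conf map).
   object ExecutorCoresGuard {
     val K8sExecCoresKey = "spark.kubernetes.executor.request.cores"
   
     def k8sCoreOpts(
         sessionConf: Map[String, String],
         executorCores: Option[Int]): Map[String, Option[String]] = {
       if (sessionConf.contains(K8sExecCoresKey)) {
         // Explicitly set by the user or spark-defaults; leave it alone.
         Map.empty
       } else {
         // Otherwise fall back to the 1:1 mapping from executorCores.
         Map(K8sExecCoresKey -> executorCores.map(_.toString))
       }
     }
   }
   ```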
   
   This is not a huge issue, because the number of cores can also be set via `spark.executor.cores`, which won't be overridden. It's just that it can be a source of confusion, since the k8s resource request is set implicitly.
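   
   For reference, the two settings can legitimately diverge. With a conf like the following (values are illustrative), each executor runs 4 task slots while only requesting 2 CPUs from the k8s scheduler:
   
   ```
   spark.executor.cores                     4
   spark.kubernetes.executor.request.cores  2
   ```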
   
   I would love to hear more opinions about this.
