Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/02/07 12:19:44 UTC

[GitHub] [spark] martin-g commented on a change in pull request #35422: [SPARK-36061][K8S] Add Volcano feature step and enable user specified minimum resources

martin-g commented on a change in pull request #35422:
URL: https://github.com/apache/spark/pull/35422#discussion_r800589513



##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -675,6 +676,20 @@ private[spark] object Config extends Logging {
       .checkValue(value => value > 0, "Maximum number of pending pods should be a positive integer")
       .createWithDefault(Int.MaxValue)
 
+  val KUBERNETES_JOB_MIN_MEMORY = ConfigBuilder("spark.kubernetes.job.min.memory")
+    .doc("The minimum memory for running the job, in MiB unless otherwise specified. This only " +
+      "applicable when you enable `VolcanoFeatureStep` feature step in" +

Review comment:
       To enable the step one has to provide the fully qualified class name. Would it be a good idea to use string interpolation with `classOf[VolcanoFeatureStep].getName` instead of hard-coding the name?
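
   A minimal sketch of the suggestion (the `VolcanoFeatureStep` class below is a local stand-in for Spark's `org.apache.spark.deploy.k8s.features.VolcanoFeatureStep`; the doc text is illustrative, not the exact PR wording):

   ```scala
   // Stand-in for Spark's real feature step class.
   class VolcanoFeatureStep

   // String interpolation keeps the documented class name in sync with the
   // actual class, so a rename or repackaging cannot silently stale the doc:
   val doc =
     s"This is only applicable when the `${classOf[VolcanoFeatureStep].getName}` " +
     "feature step is enabled."
   ```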

##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -675,6 +676,20 @@ private[spark] object Config extends Logging {
       .checkValue(value => value > 0, "Maximum number of pending pods should be a positive integer")
       .createWithDefault(Int.MaxValue)
 
+  val KUBERNETES_JOB_MIN_MEMORY = ConfigBuilder("spark.kubernetes.job.min.memory")

Review comment:
       Since this new property is Volcano-specific, should it be namespaced like `spark.kubernetes.volcano.job.min.memory`?
   Or is it better to keep the current name so it can be (re)used in the future by something else, e.g. YuniKorn?
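
   For illustration, the two candidate names as they would appear to an end user; the Volcano-scoped name and the value shown are hypothetical:

   ```properties
   # Volcano-scoped alternative (hypothetical name):
   spark.kubernetes.volcano.job.min.memory=2048

   # Scheduler-agnostic name as proposed in the PR:
   spark.kubernetes.job.min.memory=2048
   ```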

##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
##########
@@ -675,6 +676,20 @@ private[spark] object Config extends Logging {
       .checkValue(value => value > 0, "Maximum number of pending pods should be a positive integer")
       .createWithDefault(Int.MaxValue)
 
+  val KUBERNETES_JOB_MIN_MEMORY = ConfigBuilder("spark.kubernetes.job.min.memory")
+    .doc("The minimum memory for running the job, in MiB unless otherwise specified. This only " +
+      "applicable when you enable `VolcanoFeatureStep` feature step in" +

Review comment:
       Currently there are two Conf classes - one for the Driver and another one for the Executor(s).
   According to https://docs.google.com/document/d/1xgQGRpaHQX6-QH_J9YV2C2Dh6RpXefUpLM7KGkzL6Fg/edit# a PodGroup will combine all nodes - the driver and all executors.
   Since both KubernetesDriverConf and KubernetesExecutorConf may configure their own instances of VolcanoFeatureStep, would this create two PodGroups?
   
   So, if a user specifies here `6g` as min memory 
   
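
   An illustrative sketch (not Spark code) of the double-reservation concern the reviewer raises: if the driver conf and the executor conf each build their own feature step, and each step creates a PodGroup with the user's minimum, the minimum is reserved twice. All names below are hypothetical:

   ```scala
   // Hypothetical model of a Volcano PodGroup's minimum-memory request.
   final case class PodGroupSpec(name: String, minMemoryMiB: Long)

   // One PodGroup per pod "role" conf that wires up its own feature step.
   def buildPodGroups(minMemoryMiB: Long, roles: Seq[String]): Seq[PodGroupSpec] =
     roles.map(r => PodGroupSpec(s"podgroup-$r", minMemoryMiB))

   // If both KubernetesDriverConf and KubernetesExecutorConf instantiate
   // VolcanoFeatureStep, a user-specified 6g minimum is reserved twice:
   val groups = buildPodGroups(6 * 1024, Seq("driver", "executor"))
   val totalReservedMiB = groups.map(_.minMemoryMiB).sum // 12288, not the intended 6144
   ```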




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


