Posted to reviews@spark.apache.org by rvesse <gi...@git.apache.org> on 2018/08/28 16:34:33 UTC

[GitHub] spark pull request #22256: [SPARK-25262][K8S][WIP] Better support configurab...

GitHub user rvesse opened a pull request:

    https://github.com/apache/spark/pull/22256

    [SPARK-25262][K8S][WIP] Better support configurability of Spark scratch space when using Kubernetes

    ## What changes were proposed in this pull request?
    
    This change improves how Spark on Kubernetes creates the local directories used for Spark scratch space, i.e. `SPARK_LOCAL_DIRS`/`spark.local.dirs`.

    Currently Spark on Kubernetes creates each defined local directory (or a single default directory if none is defined) as a Kubernetes `emptyDir` volume mounted into the containers.  The problem is that `emptyDir` volumes are backed by the node's storage, so in some compute environments, e.g. diskless nodes, any "local" storage is actually provided by a remote file system, which can harm performance when jobs use it heavily.
    
    Kubernetes provides the option to have `emptyDir` volumes backed by `tmpfs`, i.e. RAM on the nodes, so we introduce a boolean `spark.kubernetes.local.dirs.tmpfs` option that, when true, causes the created `emptyDir` volumes to be memory-backed.
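    
    For example, enabling the new option at submission time (an illustrative invocation; no other configuration is involved):
    
        spark-submit --conf spark.kubernetes.local.dirs.tmpfs=true ...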
    
    A second related problem is that because Spark on Kubernetes always generates `emptyDir` volumes, users have no way to use alternative volume types that may be available in their cluster.
    
    No new options specific to this problem are introduced; instead the code is modified to detect when the pod spec already defines an appropriately named volume and to avoid creating an `emptyDir` volume in that case.  This follows the existing code's convention that scratch space volumes are named `spark-local-dir-N`, numbered from 1 to N based on the number of entries in the `SPARK_LOCAL_DIRS`/`spark.local.dirs` setting.  This is done in anticipation of the pod template feature from SPARK-24434 (PR #22146) being merged, since that will allow users to define custom volumes more easily.
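    
    For illustration only, here is a hypothetical pod template fragment that pre-defines the first scratch volume (the `spark-local-dir-1` name follows the existing convention; the PVC is an assumed example):
    
    ```
    # Hypothetical pod template excerpt: because a volume named
    # spark-local-dir-1 already exists, Spark reuses it instead of
    # creating an emptyDir volume for the first local directory.
    spec:
      volumes:
        - name: spark-local-dir-1
          persistentVolumeClaim:
            claimName: scratch-pvc   # assumed pre-existing claim
    ```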
    
    Tasks:
    
    - [x] Support using `tmpfs` volumes
    - [x] Support using pre-existing volumes
    - [ ] Unit tests
    
    ## How was this patch tested?
    
    Unit tests added to the relevant feature step to exercise the new configuration option and to check that pre-existing volumes are used.  Further unit tests are planned to cover other corner cases.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rvesse/spark SPARK-25262

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22256.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22256
    
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by skonto <gi...@git.apache.org>.
Github user skonto commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Cool @rvesse!


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse closed the pull request at:

    https://github.com/apache/spark/pull/22256


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214416634
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    @@ -37,41 +40,99 @@ private[spark] class LocalDirsFeatureStep(
         .orElse(conf.getOption("spark.local.dir"))
         .getOrElse(defaultLocalDir)
         .split(",")
    +  private val useLocalDirTmpFs = conf.get(KUBERNETES_LOCAL_DIRS_TMPFS)
     
       override def configurePod(pod: SparkPod): SparkPod = {
         val localDirVolumes = resolvedLocalDirs
           .zipWithIndex
           .map { case (localDir, index) =>
    -        new VolumeBuilder()
    -          .withName(s"spark-local-dir-${index + 1}")
    -          .withNewEmptyDir()
    -          .endEmptyDir()
    -          .build()
    +        val name = s"spark-local-dir-${index + 1}"
    +        // To allow customisation of local dirs backing volumes we should avoid creating
    +        // emptyDir volumes if the volume is already defined in the pod spec
    +        hasVolume(pod, name) match {
    +          case true =>
    +            // For pre-existing volume definitions just re-use the volume
    +            pod.pod.getSpec().getVolumes().asScala.find(v => v.getName.equals(name)).get
    +          case false =>
    +            // Create new emptyDir volume
    +            new VolumeBuilder()
    +              .withName(name)
    +              .withNewEmptyDir()
    +                .withMedium(useLocalDirTmpFs match {
    +                  case true => "Memory" // Use tmpfs
    +                  case false => null    // Default - use nodes backing storage
    +                })
    +              .endEmptyDir()
    +              .build()
    +        }
           }
    +
         val localDirVolumeMounts = localDirVolumes
           .zip(resolvedLocalDirs)
           .map { case (localDirVolume, localDirPath) =>
    -        new VolumeMountBuilder()
    -          .withName(localDirVolume.getName)
    -          .withMountPath(localDirPath)
    -          .build()
    +        hasVolumeMount(pod, localDirVolume.getName, localDirPath) match {
    +          case true =>
    +            // For pre-existing volume mounts just re-use the mount
    +            pod.container.getVolumeMounts().asScala
    +              .find(m => m.getName.equals(localDirVolume.getName)
    +                         && m.getMountPath.equals(localDirPath))
    +              .get
    +          case false =>
    +            // Create new volume mount
    +            new VolumeMountBuilder()
    +              .withName (localDirVolume.getName)
    +              .withMountPath (localDirPath)
    +              .build()
    +        }
    +      }
    +
    +    // Check for conflicting volume mounts
    +    for (m: VolumeMount <- localDirVolumeMounts) {
    +      if (hasConflictingVolumeMount(pod, m.getName, m.getMountPath).size > 0) {
    +        throw new SparkException(s"Conflicting volume mounts defined, pod template attempted to " +
    +          s"mount SPARK_LOCAL_DIRS volume ${m.getName} multiple times or at an alternative path " +
    +          s"than the expected ${m.getMountPath}")
           }
    +    }
    +
         val podWithLocalDirVolumes = new PodBuilder(pod.pod)
           .editSpec()
    -        .addToVolumes(localDirVolumes: _*)
    +         // Don't want to re-add volumes that already existed in the incoming spec
    +         // as duplicate definitions will lead to K8S API errors
    +        .addToVolumes(localDirVolumes.filter(v => !hasVolume(pod, v.getName)): _*)
    --- End diff --
    
    Ok, will do that Monday
    
    FYI I notice @onursatici has now made some similar tweaks in his latest commit - https://github.com/apache/spark/pull/22146/commits/a4fde0cdc4dc5b64fd3f888244656371eb76f837 - note that several feature steps there now use `editOrNewX()` or `addToX()` so that they combine with, rather than override, the template


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by mccheah <gi...@git.apache.org>.
Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214107006
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    > This is regardless of whether the template feature is opinionated about validation; even if the template feature doesn't do validation, Spark code itself should be ensuring that it generates valid specs as far as it is able to.
    
    This is a stance that, as far as I'm aware, we specifically chose not to take in the pod template feature. If one is using the pod template feature then Spark won't provide any guarantees that the pod it makes will be well-formed. When spark-submit deploys the pod to the cluster, the API will return a clear enough error informing the user to make the appropriate corrections to their pod template.
    
    @onursatici I just checked the pod template files PR, I didn't see this specifically called out - should this be documented?


---



[GitHub] spark issue #22256: [SPARK-25262][K8S][WIP] Better support configurability o...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Can one of the admins verify this patch?


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by onursatici <gi...@git.apache.org>.
Github user onursatici commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214198366
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    @mccheah yeap we should document that, will add


---



[GitHub] spark issue #22256: [SPARK-25262][K8S][WIP] Better support configurability o...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Can one of the admins verify this patch?


---



[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Closed in favour of #22323, which has been merged.


---



[GitHub] spark issue #22256: [SPARK-25262][K8S][WIP] Better support configurability o...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Can one of the admins verify this patch?


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by ifilonenko <gi...@git.apache.org>.
Github user ifilonenko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r213768267
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    Checking current volumes in a feature step isn't consistent with the additive design of the feature builder pattern. @mccheah to comment


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r213954156
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    The implementation is still additive in that it will add to existing elements in the pod spec as needed but respect what is already present.
    
    If your pod spec contains duplicate volumes/volume mounts then K8S will reject it as invalid, e.g.:
    
    ```
    The Pod "rvesse-test" is invalid: spec.volumes[1].name: Duplicate value: "spark-local-dirs-1"
    ```
    
    Therefore it is necessary to explicitly avoid duplicating things already present in the template.
    
    If the aim is to replace adding further config options with the pod template feature, then the existing builders do need to be more intelligent about what they generate in order to avoid producing invalid pod specs.  This is regardless of whether the template feature is opinionated about validation; even if the template feature doesn't do validation, Spark code itself should be ensuring that it generates valid specs as far as it is able to.  Obviously it can't detect every possible invalid spec that it might generate if the templates aren't being validated, but it can avoid introducing easily avoidable invalid specs.


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by mccheah <gi...@git.apache.org>.
Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r213771635
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    All of this conflicting-volume-mount and conflicting-volume handling seems out of place here. If we're anticipating use of the pod template file, keep in mind that the pod template feature is specifically not designed to do any validation. What kinds of errors are we hoping to avoid by doing the deduplication here?


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by ifilonenko <gi...@git.apache.org>.
Github user ifilonenko commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214395067
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    Yeah, that might be better @rvesse 


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    **[Test build #4308 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4308/testReport)** for PR 22256 at commit [`8762ac1`](https://github.com/apache/spark/commit/8762ac1edb3bd8b9a645ef59f900b9fa2a5a27d6).


---



[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    @skonto I haven't done anything specific for the size limit ATM.  From the K8S docs, `tmpfs`-backed `emptyDir` usage counts towards your container's memory limits, so you can just set `spark.executor.memory` to a larger amount as needed.
    
    From the discussion you linked, you can explicitly set a size limit for the volume, but I wanted to avoid adding multiple configuration options if possible.  Since this PR allows template-defined volumes to be used, you could define a volume in your template with the `sizeLimit` applied to it once you have the pod templates PR available.
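    
    For illustration (a hypothetical template fragment; the 1Gi cap is an assumed value), that would look something like:
    
    ```
    # Hypothetical pod template excerpt: memory-backed emptyDir with an explicit cap
    spec:
      volumes:
        - name: spark-local-dir-1
          emptyDir:
            medium: Memory    # tmpfs; usage counts against the container's memory limit
            sizeLimit: 1Gi    # assumed cap - tune to the workload
    ```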


---



[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    **[Test build #4308 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/4308/testReport)** for PR 22256 at commit [`8762ac1`](https://github.com/apache/spark/commit/8762ac1edb3bd8b9a645ef59f900b9fa2a5a27d6).
     * This patch **fails Scala style tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214623272
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    @mccheah @ifilonenko OK, I have opened PR #22323 with just the `tmpfs`-enabling changes.


---



[GitHub] spark issue #22256: [SPARK-25262][K8S][WIP] Better support configurability o...

Posted by liyinan926 <gi...@git.apache.org>.
Github user liyinan926 commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    @mccheah.


---



[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    @skonto Here is the PR for the `SPARK_LOCAL_DIRS` behaviour customisation we were discussing in the context of SPARK-24434.
    
    I have minimised the new configuration to a single setting for the simple case of just wanting `tmpfs`-backed local storage, and relied upon the forthcoming pod templates functionality to allow deeper customisation.


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by mccheah <gi...@git.apache.org>.
Github user mccheah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214420503
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    This is different in that we're looking for specific volumes that have been set up by previous feature steps or outside logic. Preferably every step is self-contained in that it doesn't have to look up specific values set by previous steps.
    
    For example this logic would break if we applied the templating after this step, or if a different step after this one added the volumes that are being looked up here.
    
    Whereas `editOrNew` and `addTo...` at worst only change the ordering of some of the fields depending on when the step is invoked in the sequence.
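    
    For example, a minimal sketch of the additive pattern (assuming the fabric8 builder API these steps already use):
    
    ```
    import io.fabric8.kubernetes.api.model.{Pod, PodBuilder, Volume, VolumeBuilder}
    
    // Additive: editOrNewSpec() works whether or not the incoming pod
    // already has a spec, and addToVolumes() appends to the volume list
    // instead of replacing it, so step ordering only affects field order.
    def addScratchVolume(pod: Pod): Pod = {
      val volume: Volume = new VolumeBuilder()
        .withName("spark-local-dir-1")
        .withNewEmptyDir()
        .endEmptyDir()
        .build()
    
      new PodBuilder(pod)
        .editOrNewSpec()
          .addToVolumes(volume)
        .endSpec()
        .build()
    }
    ```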


---



[GitHub] spark issue #22256: [SPARK-25262][K8S] Better support configurability of Spa...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/22256
  
    Can one of the admins verify this patch?


---



[GitHub] spark pull request #22256: [SPARK-25262][K8S] Better support configurability...

Posted by rvesse <gi...@git.apache.org>.
Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22256#discussion_r214277672
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/LocalDirsFeatureStep.scala ---
    (same hunk as quoted in full earlier in this thread)
    --- End diff --
    
    > This is a stance that, as far as I'm aware, we specifically chose not to take in the pod template feature. If one is using the pod template feature then Spark won't provide any guarantees that the pod it makes will be well-formed. When spark-submit deploys the pod to the cluster, the API will return a clear enough error informing the user to make the appropriate corrections to their pod template.
    
    Sure, but we still need to be realistic about how the template feature will be used.  It is supposed to enable power users to customise the pods for their environments.  If there is an area like this where there is a clear use case for customisation, we should be enabling that rather than saying, sorry, we're going to generate invalid pods regardless.  Obviously the power user is assuming the risk of creating a pod template that meaningfully combines with Spark's generated pod to yield a valid runtime environment.
    
    Clearly my stance here is controversial and likely needs a broader discussion on the dev list.
    
    I can reduce this PR to just the config to enable `tmpfs`-backed `emptyDir` volumes if that is preferred?


---
