Posted to commits@spark.apache.org by do...@apache.org on 2020/05/24 01:08:40 UTC

[spark] branch branch-3.0 updated (d8b788e -> f05a26a)

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git.


    from d8b788e  [SPARK-29854][SQL][TESTS][FOLLOWUP] Regenerate string-functions.sql.out
     new 5f966f9  [SPARK-30715][K8S] Bump fabric8 to 4.7.1
     new 1e79c0d  [SPARK-30715][K8S][TESTS][FOLLOWUP] Update k8s client version in IT as well
     new f05a26a  [SPARK-31786][K8S][BUILD] Upgrade kubernetes-client to 4.9.2

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 dev/deps/spark-deps-hadoop-2.7-hive-1.2            |  7 ++++---
 dev/deps/spark-deps-hadoop-2.7-hive-2.3            |  7 ++++---
 dev/deps/spark-deps-hadoop-3.2-hive-2.3            |  7 ++++---
 resource-managers/kubernetes/core/pom.xml          |  3 ++-
 .../apache/spark/deploy/k8s/KubernetesUtils.scala  |  6 ++----
 .../k8s/features/BasicDriverFeatureStep.scala      | 10 +++-------
 .../k8s/features/BasicExecutorFeatureStep.scala    | 12 +++---------
 .../k8s/features/MountVolumesFeatureStep.scala     |  2 +-
 .../apache/spark/deploy/k8s/PodBuilderSuite.scala  |  5 ++---
 .../k8s/features/BasicDriverFeatureStepSuite.scala | 22 ++++++++++++----------
 .../features/BasicExecutorFeatureStepSuite.scala   | 15 +++++++++------
 .../features/MountVolumesFeatureStepSuite.scala    |  5 +++--
 .../kubernetes/integration-tests/pom.xml           |  2 +-
 .../k8s/integrationtest/DepsTestsSuite.scala       |  8 ++------
 .../k8s/integrationtest/KubernetesSuite.scala      |  8 ++++----
 .../deploy/k8s/integrationtest/PVTestsSuite.scala  |  5 ++---
 16 files changed, 58 insertions(+), 66 deletions(-)




[spark] 03/03: [SPARK-31786][K8S][BUILD] Upgrade kubernetes-client to 4.9.2

Posted by do...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit f05a26a814c66f61dc9f742ae58ebb0d8787fa60
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Sat May 23 11:07:45 2020 -0700

    [SPARK-31786][K8S][BUILD] Upgrade kubernetes-client to 4.9.2
    
    ### What changes were proposed in this pull request?
    
    This PR aims to upgrade the `kubernetes-client` library to bring in the JDK8-related fixes. Please note that JDK11 works fine without any problem.
    - https://github.com/fabric8io/kubernetes-client/releases/tag/v4.9.2
      - JDK8 always uses the HTTP/1.1 protocol (prevents OkHttp from wrongly enabling HTTP/2)
    
    ### Why are the changes needed?
    
    OkHttp "wrongly" detects the Platform as Jdk9Platform on JDK 8u251.
    - https://github.com/fabric8io/kubernetes-client/issues/2212
    - https://stackoverflow.com/questions/61565751/why-am-i-not-able-to-run-sparkpi-example-on-a-kubernetes-k8s-cluster
    
    Although there are workarounds (`export HTTP2_DISABLE=true`, or downgrading the JDK or K8s), it is better to avoid this problematic situation altogether.
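    
    For context, here is a minimal client-side sketch of the symptom (not part of this patch; it assumes the fabric8 `kubernetes-client` on the classpath and a reachable kubeconfig, and `ClientCheck` is a hypothetical name):
    
    ```scala
    import io.fabric8.kubernetes.client.DefaultKubernetesClient
    
    object ClientCheck {
      def main(args: Array[String]): Unit = {
        println(s"java.version  = ${System.getProperty("java.version")}")
        println(s"HTTP2_DISABLE = ${sys.env.getOrElse("HTTP2_DISABLE", "<unset>")}")
        val client = new DefaultKubernetesClient()
        try {
          // This call performs a TLS handshake with the API server; with
          // kubernetes-client < 4.9.2 on JDK 8u251/8u252 it can fail unless
          // HTTP2_DISABLE=true is exported before the JVM starts.
          println(s"pods: ${client.pods().inNamespace("default").list().getItems.size()}")
        } finally {
          client.close()
        }
      }
    }
    ```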
    
    ### Does this PR introduce _any_ user-facing change?
    
    No. This will fix the failures on JDK 8u252.
    
    ### How was this patch tested?
    
    - [x] Pass the Jenkins UT (https://github.com/apache/spark/pull/28601#issuecomment-632474270)
    - [x] Pass the Jenkins K8S IT with the K8s 1.13 (https://github.com/apache/spark/pull/28601#issuecomment-632438452)
    - [x] Manual testing with K8s 1.17.3. (Below)
    
    **v1.17.6 result (on Minikube)**
    ```
    KubernetesSuite:
    - Run SparkPi with no resources
    - Run SparkPi with a very long application name.
    - Use SparkLauncher.NO_RESOURCE
    - Run SparkPi with a master URL without a scheme.
    - Run SparkPi with an argument.
    - Run SparkPi with custom labels, annotations, and environment variables.
    - All pods have the same service account by default
    - Run extraJVMOptions check on driver
    - Run SparkRemoteFileTest using a remote data file
    - Run SparkPi with env and mount secrets.
    - Run PySpark on simple pi.py example
    - Run PySpark with Python2 to test a pyfiles example
    - Run PySpark with Python3 to test a pyfiles example
    - Run PySpark with memory customization
    - Run in client mode.
    - Start pod creation from template
    - PVs with local storage
    - Launcher client dependencies
    - Test basic decommissioning
    Run completed in 8 minutes, 27 seconds.
    Total number of tests run: 19
    Suites: completed 2, aborted 0
    Tests: succeeded 19, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    ```
    
    Closes #28601 from dongjoon-hyun/SPARK-K8S-CLIENT.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
    (cherry picked from commit 64ffc6649623e3cb568315f57c9e06be3b547c00)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 dev/deps/spark-deps-hadoop-2.7-hive-1.2                | 7 ++++---
 dev/deps/spark-deps-hadoop-2.7-hive-2.3                | 7 ++++---
 dev/deps/spark-deps-hadoop-3.2-hive-2.3                | 7 ++++---
 resource-managers/kubernetes/core/pom.xml              | 2 +-
 resource-managers/kubernetes/integration-tests/pom.xml | 2 +-
 5 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2.7-hive-1.2 b/dev/deps/spark-deps-hadoop-2.7-hive-1.2
index b375629..82d5a06 100644
--- a/dev/deps/spark-deps-hadoop-2.7-hive-1.2
+++ b/dev/deps/spark-deps-hadoop-2.7-hive-1.2
@@ -93,6 +93,7 @@ jackson-core-asl/1.9.13//jackson-core-asl-1.9.13.jar
 jackson-core/2.10.0//jackson-core-2.10.0.jar
 jackson-databind/2.10.0//jackson-databind-2.10.0.jar
 jackson-dataformat-yaml/2.10.0//jackson-dataformat-yaml-2.10.0.jar
+jackson-datatype-jsr310/2.10.3//jackson-datatype-jsr310-2.10.3.jar
 jackson-jaxrs/1.9.13//jackson-jaxrs-1.9.13.jar
 jackson-mapper-asl/1.9.13//jackson-mapper-asl-1.9.13.jar
 jackson-module-jaxb-annotations/2.10.0//jackson-module-jaxb-annotations-2.10.0.jar
@@ -137,9 +138,9 @@ jsr305/3.0.0//jsr305-3.0.0.jar
 jta/1.1//jta-1.1.jar
 jul-to-slf4j/1.7.30//jul-to-slf4j-1.7.30.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
-kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
-kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
+kubernetes-client/4.9.2//kubernetes-client-4.9.2.jar
+kubernetes-model-common/4.9.2//kubernetes-model-common-4.9.2.jar
+kubernetes-model/4.9.2//kubernetes-model-4.9.2.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/dev/deps/spark-deps-hadoop-2.7-hive-2.3 b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
index 093924f..17c787e 100644
--- a/dev/deps/spark-deps-hadoop-2.7-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
@@ -106,6 +106,7 @@ jackson-core-asl/1.9.13//jackson-core-asl-1.9.13.jar
 jackson-core/2.10.0//jackson-core-2.10.0.jar
 jackson-databind/2.10.0//jackson-databind-2.10.0.jar
 jackson-dataformat-yaml/2.10.0//jackson-dataformat-yaml-2.10.0.jar
+jackson-datatype-jsr310/2.10.3//jackson-datatype-jsr310-2.10.3.jar
 jackson-jaxrs/1.9.13//jackson-jaxrs-1.9.13.jar
 jackson-mapper-asl/1.9.13//jackson-mapper-asl-1.9.13.jar
 jackson-module-jaxb-annotations/2.10.0//jackson-module-jaxb-annotations-2.10.0.jar
@@ -152,9 +153,9 @@ jsr305/3.0.0//jsr305-3.0.0.jar
 jta/1.1//jta-1.1.jar
 jul-to-slf4j/1.7.30//jul-to-slf4j-1.7.30.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
-kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
-kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
+kubernetes-client/4.9.2//kubernetes-client-4.9.2.jar
+kubernetes-model-common/4.9.2//kubernetes-model-common-4.9.2.jar
+kubernetes-model/4.9.2//kubernetes-model-4.9.2.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/dev/deps/spark-deps-hadoop-3.2-hive-2.3 b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
index 2db8d3e..3c3ce2d 100644
--- a/dev/deps/spark-deps-hadoop-3.2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
@@ -105,6 +105,7 @@ jackson-core-asl/1.9.13//jackson-core-asl-1.9.13.jar
 jackson-core/2.10.0//jackson-core-2.10.0.jar
 jackson-databind/2.10.0//jackson-databind-2.10.0.jar
 jackson-dataformat-yaml/2.10.0//jackson-dataformat-yaml-2.10.0.jar
+jackson-datatype-jsr310/2.10.3//jackson-datatype-jsr310-2.10.3.jar
 jackson-jaxrs-base/2.9.5//jackson-jaxrs-base-2.9.5.jar
 jackson-jaxrs-json-provider/2.9.5//jackson-jaxrs-json-provider-2.9.5.jar
 jackson-mapper-asl/1.9.13//jackson-mapper-asl-1.9.13.jar
@@ -164,9 +165,9 @@ kerby-pkix/1.0.1//kerby-pkix-1.0.1.jar
 kerby-util/1.0.1//kerby-util-1.0.1.jar
 kerby-xdr/1.0.1//kerby-xdr-1.0.1.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
-kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
-kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
+kubernetes-client/4.9.2//kubernetes-client-4.9.2.jar
+kubernetes-model-common/4.9.2//kubernetes-model-common-4.9.2.jar
+kubernetes-model/4.9.2//kubernetes-model-4.9.2.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/resource-managers/kubernetes/core/pom.xml b/resource-managers/kubernetes/core/pom.xml
index 36d9543..168f7f5 100644
--- a/resource-managers/kubernetes/core/pom.xml
+++ b/resource-managers/kubernetes/core/pom.xml
@@ -30,7 +30,7 @@
   <properties>
     <sbt.project.name>kubernetes</sbt.project.name>
     <!-- Note: Please update the kubernetes client version in kubernetes/integration-tests/pom.xml -->
-    <kubernetes.client.version>4.7.1</kubernetes.client.version>
+    <kubernetes.client.version>4.9.2</kubernetes.client.version>
   </properties>
 
   <dependencies>
diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index 92ddeae..719ebe4 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -29,7 +29,7 @@
     <download-maven-plugin.version>1.3.0</download-maven-plugin.version>
     <exec-maven-plugin.version>1.4.0</exec-maven-plugin.version>
     <extraScalaTestArgs></extraScalaTestArgs>
-    <kubernetes-client.version>4.7.1</kubernetes-client.version>
+    <kubernetes-client.version>4.9.2</kubernetes-client.version>
     <scala-maven-plugin.version>3.2.2</scala-maven-plugin.version>
     <scalatest-maven-plugin.version>1.0</scalatest-maven-plugin.version>
     <sbt.project.name>kubernetes-integration-tests</sbt.project.name>




[spark] 02/03: [SPARK-30715][K8S][TESTS][FOLLOWUP] Update k8s client version in IT as well

Posted by do...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 1e79c0ddb23ff159f6023b9840d0dc0bdee289c3
Author: Prashant Sharma <pr...@in.ibm.com>
AuthorDate: Sat Mar 21 18:26:53 2020 -0700

    [SPARK-30715][K8S][TESTS][FOLLOWUP] Update k8s client version in IT as well
    
    ### What changes were proposed in this pull request?
    This is a follow-up for SPARK-30715. It brings the Kubernetes client version in sync between integration-tests and kubernetes/core.
    
    ### Why are the changes needed?
    More than once, the kubernetes client version has gone out of sync between the integration tests and kubernetes/core (the property is `kubernetes.client.version` in the core pom and `kubernetes-client.version` in the integration-tests pom). This brings them back in sync and adds a comment to save us from needing this kind of follow-up in the future.
    
    ### Does this PR introduce any user-facing change?
    No
    
    ### How was this patch tested?
    Manually.
    
    Closes #27948 from ScrapCodes/follow-up-spark-30715.
    
    Authored-by: Prashant Sharma <pr...@in.ibm.com>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
    (cherry picked from commit 3799d2b9d842f4b9f4e78bf701f5e123f0061bad)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 resource-managers/kubernetes/core/pom.xml                         | 1 +
 resource-managers/kubernetes/integration-tests/pom.xml            | 2 +-
 .../apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala | 8 ++++----
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/resource-managers/kubernetes/core/pom.xml b/resource-managers/kubernetes/core/pom.xml
index 9d35c81..36d9543 100644
--- a/resource-managers/kubernetes/core/pom.xml
+++ b/resource-managers/kubernetes/core/pom.xml
@@ -29,6 +29,7 @@
   <name>Spark Project Kubernetes</name>
   <properties>
     <sbt.project.name>kubernetes</sbt.project.name>
+    <!-- Note: Please update the kubernetes client version in kubernetes/integration-tests/pom.xml -->
     <kubernetes.client.version>4.7.1</kubernetes.client.version>
   </properties>
 
diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index 3222ec7..92ddeae 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -29,7 +29,7 @@
     <download-maven-plugin.version>1.3.0</download-maven-plugin.version>
     <exec-maven-plugin.version>1.4.0</exec-maven-plugin.version>
     <extraScalaTestArgs></extraScalaTestArgs>
-    <kubernetes-client.version>4.6.4</kubernetes-client.version>
+    <kubernetes-client.version>4.7.1</kubernetes-client.version>
     <scala-maven-plugin.version>3.2.2</scala-maven-plugin.version>
     <scalatest-maven-plugin.version>1.0</scalatest-maven-plugin.version>
     <sbt.project.name>kubernetes-integration-tests</sbt.project.name>
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
index 00996df..dbb84e3 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
@@ -59,15 +59,15 @@ class KubernetesSuite extends SparkFunSuite
   protected var appLocator: String = _
 
   // Default memory limit is 1024M + 384M (minimum overhead constant)
-  private val baseMemory = s"${1024 + 384}Mi"
+  private val baseMemory = s"${1024 + 384}"
   protected val memOverheadConstant = 0.8
-  private val standardNonJVMMemory = s"${(1024 + 0.4*1024).toInt}Mi"
+  private val standardNonJVMMemory = s"${(1024 + 0.4*1024).toInt}"
   protected val additionalMemory = 200
   // 209715200 is 200Mi
   protected val additionalMemoryInBytes = 209715200
-  private val extraDriverTotalMemory = s"${(1024 + memOverheadConstant*1024).toInt}Mi"
+  private val extraDriverTotalMemory = s"${(1024 + memOverheadConstant*1024).toInt}"
   private val extraExecTotalMemory =
-    s"${(1024 + memOverheadConstant*1024 + additionalMemory).toInt}Mi"
+    s"${(1024 + memOverheadConstant*1024 + additionalMemory).toInt}"
 
   /**
    * Build the image ref for the given image name, taking the repo and tag from the
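
The dropped `Mi` suffixes in the hunk above track a fabric8 4.7 behavior change (see commit 01/03 below): `Quantity.getAmount` now returns only the numeric part of a quantity, with the unit exposed separately via `getFormat`. A worksheet-style sketch of the new contract (illustrative, not taken from the patch; `MemoryStringCheck` is a hypothetical name):

```scala
import io.fabric8.kubernetes.api.model.Quantity

object MemoryStringCheck extends App {
  val baseMemory = s"${1024 + 384}"       // "1408" -- no "Mi" suffix anymore
  val parsed = new Quantity("1408Mi")     // 4.7.x splits the amount and the unit
  assert(parsed.getAmount == baseMemory)  // getAmount returns "1408"
  assert(parsed.getFormat == "Mi")        // the unit lives in getFormat
}
```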




[spark] 01/03: [SPARK-30715][K8S] Bump fabric8 to 4.7.1

Posted by do...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git

commit 5f966f91571b4d332dc56914bee3751b7fb19a71
Author: Onur Satici <on...@gmail.com>
AuthorDate: Wed Feb 5 01:17:30 2020 -0800

    [SPARK-30715][K8S] Bump fabric8 to 4.7.1
    
    ### What changes were proposed in this pull request?
    Bump fabric8 kubernetes-client to 4.7.1
    
    ### Why are the changes needed?
    The new fabric8 version brings support for Kubernetes 1.17 clusters.
    Full release notes:
    - https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0
    - https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.1
    
    ### Does this PR introduce any user-facing change?
    No
    
    ### How was this patch tested?
    Existing unit and integration tests cover the creation of K8S objects. They were adjusted to work with the new fabric8 version (see the `Quantity` migration sketch after this commit message).
    
    Closes #27443 from onursatici/os/bump-fabric8.
    
    Authored-by: Onur Satici <on...@gmail.com>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
    (cherry picked from commit 86fdb818bf5dfde7744bf2b358876af361ec9a68)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
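    
    Most of the source changes in the diff below follow a single mechanical pattern: the `QuantityBuilder(false).withAmount(...).build()` idiom is replaced by the `Quantity` constructor, which in fabric8 4.7.x parses the amount string itself. A before/after sketch (illustrative only; `QuantityMigration` is a hypothetical name, and the real hunks appear in the diff below):
    
    ```scala
    import io.fabric8.kubernetes.api.model.Quantity
    
    object QuantityMigration extends App {
      // Before (kubernetes-client 4.6.x):
      //   val memory = new QuantityBuilder(false).withAmount("456Mi").build()
      // After (4.7.x): the constructor itself splits "456Mi" into amount and format.
      val memory = new Quantity("456Mi")
    
      // The updated test suites reassemble the original string when asserting,
      // via an amountAndFormat helper.
      def amountAndFormat(q: Quantity): String = q.getAmount + q.getFormat
      assert(amountAndFormat(memory) == "456Mi")
    }
    ```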
---
 dev/deps/spark-deps-hadoop-2.7-hive-1.2            |  6 +++---
 dev/deps/spark-deps-hadoop-2.7-hive-2.3            |  6 +++---
 dev/deps/spark-deps-hadoop-3.2-hive-2.3            |  6 +++---
 resource-managers/kubernetes/core/pom.xml          |  2 +-
 .../apache/spark/deploy/k8s/KubernetesUtils.scala  |  6 ++----
 .../k8s/features/BasicDriverFeatureStep.scala      | 10 +++-------
 .../k8s/features/BasicExecutorFeatureStep.scala    | 12 +++---------
 .../k8s/features/MountVolumesFeatureStep.scala     |  2 +-
 .../apache/spark/deploy/k8s/PodBuilderSuite.scala  |  5 ++---
 .../k8s/features/BasicDriverFeatureStepSuite.scala | 22 ++++++++++++----------
 .../features/BasicExecutorFeatureStepSuite.scala   | 15 +++++++++------
 .../features/MountVolumesFeatureStepSuite.scala    |  5 +++--
 .../k8s/integrationtest/DepsTestsSuite.scala       |  8 ++------
 .../deploy/k8s/integrationtest/PVTestsSuite.scala  |  5 ++---
 14 files changed, 49 insertions(+), 61 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2.7-hive-1.2 b/dev/deps/spark-deps-hadoop-2.7-hive-1.2
index b307e57..b375629 100644
--- a/dev/deps/spark-deps-hadoop-2.7-hive-1.2
+++ b/dev/deps/spark-deps-hadoop-2.7-hive-1.2
@@ -137,9 +137,9 @@ jsr305/3.0.0//jsr305-3.0.0.jar
 jta/1.1//jta-1.1.jar
 jul-to-slf4j/1.7.30//jul-to-slf4j-1.7.30.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.6.4//kubernetes-client-4.6.4.jar
-kubernetes-model-common/4.6.4//kubernetes-model-common-4.6.4.jar
-kubernetes-model/4.6.4//kubernetes-model-4.6.4.jar
+kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
+kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
+kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/dev/deps/spark-deps-hadoop-2.7-hive-2.3 b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
index b6d6e34..093924f 100644
--- a/dev/deps/spark-deps-hadoop-2.7-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
@@ -152,9 +152,9 @@ jsr305/3.0.0//jsr305-3.0.0.jar
 jta/1.1//jta-1.1.jar
 jul-to-slf4j/1.7.30//jul-to-slf4j-1.7.30.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.6.4//kubernetes-client-4.6.4.jar
-kubernetes-model-common/4.6.4//kubernetes-model-common-4.6.4.jar
-kubernetes-model/4.6.4//kubernetes-model-4.6.4.jar
+kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
+kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
+kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/dev/deps/spark-deps-hadoop-3.2-hive-2.3 b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
index c8d2c82..2db8d3e 100644
--- a/dev/deps/spark-deps-hadoop-3.2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
@@ -164,9 +164,9 @@ kerby-pkix/1.0.1//kerby-pkix-1.0.1.jar
 kerby-util/1.0.1//kerby-util-1.0.1.jar
 kerby-xdr/1.0.1//kerby-xdr-1.0.1.jar
 kryo-shaded/4.0.2//kryo-shaded-4.0.2.jar
-kubernetes-client/4.6.4//kubernetes-client-4.6.4.jar
-kubernetes-model-common/4.6.4//kubernetes-model-common-4.6.4.jar
-kubernetes-model/4.6.4//kubernetes-model-4.6.4.jar
+kubernetes-client/4.7.1//kubernetes-client-4.7.1.jar
+kubernetes-model-common/4.7.1//kubernetes-model-common-4.7.1.jar
+kubernetes-model/4.7.1//kubernetes-model-4.7.1.jar
 leveldbjni-all/1.8//leveldbjni-all-1.8.jar
 libfb303/0.9.3//libfb303-0.9.3.jar
 libthrift/0.12.0//libthrift-0.12.0.jar
diff --git a/resource-managers/kubernetes/core/pom.xml b/resource-managers/kubernetes/core/pom.xml
index 9a48bf3..9d35c81 100644
--- a/resource-managers/kubernetes/core/pom.xml
+++ b/resource-managers/kubernetes/core/pom.xml
@@ -29,7 +29,7 @@
   <name>Spark Project Kubernetes</name>
   <properties>
     <sbt.project.name>kubernetes</sbt.project.name>
-    <kubernetes.client.version>4.6.4</kubernetes.client.version>
+    <kubernetes.client.version>4.7.1</kubernetes.client.version>
   </properties>
 
   <dependencies>
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
index e234b17..c49f4a1 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala
@@ -23,7 +23,7 @@ import java.util.UUID
 
 import scala.collection.JavaConverters._
 
-import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, ContainerStateRunning, ContainerStateTerminated, ContainerStateWaiting, ContainerStatus, Pod, PodBuilder, Quantity, QuantityBuilder}
+import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, ContainerStateRunning, ContainerStateTerminated, ContainerStateWaiting, ContainerStatus, Pod, PodBuilder, Quantity}
 import io.fabric8.kubernetes.client.KubernetesClient
 import org.apache.commons.codec.binary.Hex
 import org.apache.hadoop.fs.{FileSystem, Path}
@@ -234,9 +234,7 @@ private[spark] object KubernetesUtils extends Logging {
         throw new SparkException(s"Resource: ${request.id.resourceName} was requested, " +
           "but vendor was not specified.")
       }
-      val quantity = new QuantityBuilder(false)
-        .withAmount(request.amount.toString)
-        .build()
+      val quantity = new Quantity(request.amount.toString)
       (KubernetesConf.buildKubernetesResourceName(vendorDomain, request.id.resourceName), quantity)
     }.toMap
   }
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala
index 1edd3f7..63f1812 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStep.scala
@@ -80,14 +80,10 @@ private[spark] class BasicDriverFeatureStep(conf: KubernetesDriverConf)
           .build()
       }
 
-    val driverCpuQuantity = new QuantityBuilder(false)
-      .withAmount(driverCoresRequest)
-      .build()
-    val driverMemoryQuantity = new QuantityBuilder(false)
-      .withAmount(s"${driverMemoryWithOverheadMiB}Mi")
-      .build()
+    val driverCpuQuantity = new Quantity(driverCoresRequest)
+    val driverMemoryQuantity = new Quantity(s"${driverMemoryWithOverheadMiB}Mi")
     val maybeCpuLimitQuantity = driverLimitCores.map { limitCores =>
-      ("cpu", new QuantityBuilder(false).withAmount(limitCores).build())
+      ("cpu", new Quantity(limitCores))
     }
 
     val driverResourceQuantities =
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala
index d88bd58..6a26df2 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStep.scala
@@ -88,12 +88,8 @@ private[spark] class BasicExecutorFeatureStep(
       // Replace dangerous characters in the remaining string with a safe alternative.
       .replaceAll("[^\\w-]+", "_")
 
-    val executorMemoryQuantity = new QuantityBuilder(false)
-      .withAmount(s"${executorMemoryTotal}Mi")
-      .build()
-    val executorCpuQuantity = new QuantityBuilder(false)
-      .withAmount(executorCoresRequest)
-      .build()
+    val executorMemoryQuantity = new Quantity(s"${executorMemoryTotal}Mi")
+    val executorCpuQuantity = new Quantity(executorCoresRequest)
 
     val executorResourceQuantities =
       KubernetesUtils.buildResourcesQuantities(SPARK_EXECUTOR_PREFIX,
@@ -183,9 +179,7 @@ private[spark] class BasicExecutorFeatureStep(
       .addToArgs("executor")
       .build()
     val containerWithLimitCores = executorLimitCores.map { limitCores =>
-      val executorCpuLimitQuantity = new QuantityBuilder(false)
-        .withAmount(limitCores)
-        .build()
+      val executorCpuLimitQuantity = new Quantity(limitCores)
       new ContainerBuilder(executorContainer)
         .editResources()
           .addToLimits("cpu", executorCpuLimitQuantity)
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala
index 8548e70..4599df9 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStep.scala
@@ -65,7 +65,7 @@ private[spark] class MountVolumesFeatureStep(conf: KubernetesConf)
           new VolumeBuilder()
             .withEmptyDir(
               new EmptyDirVolumeSource(medium.getOrElse(""),
-              new Quantity(sizeLimit.orNull)))
+                sizeLimit.map(new Quantity(_)).orNull))
       }
 
       val volume = volumeBuilder.withName(spec.volumeName).build()
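
One behavior-sensitive spot in the hunk above: the old code wrapped a possibly-missing size limit as `new Quantity(sizeLimit.orNull)`, while the new code maps the `Option` so that a missing limit becomes a null `Quantity` rather than a `Quantity` wrapping null. A short sketch of the resulting contract, which `MountVolumesFeatureStepSuite` further below asserts as `getSizeLimit === null` (`SizeLimitCheck` is a hypothetical name):

```scala
import io.fabric8.kubernetes.api.model.Quantity

object SizeLimitCheck extends App {
  val sizeLimit: Option[String] = None
  // A missing limit now maps to a null Quantity (old: a Quantity with a null amount).
  val limit: Quantity = sizeLimit.map(new Quantity(_)).orNull
  assert(limit == null)
}
```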
diff --git a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/PodBuilderSuite.scala b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/PodBuilderSuite.scala
index 707c823..26bd317 100644
--- a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/PodBuilderSuite.scala
+++ b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/PodBuilderSuite.scala
@@ -101,8 +101,7 @@ abstract class PodBuilderSuite extends SparkFunSuite {
     assert(container.getArgs.contains("arg"))
     assert(container.getCommand.equals(List("command").asJava))
     assert(container.getEnv.asScala.exists(_.getName == "env-key"))
-    assert(container.getResources.getLimits.get("gpu") ===
-      new QuantityBuilder().withAmount("1").build())
+    assert(container.getResources.getLimits.get("gpu") === new Quantity("1"))
     assert(container.getSecurityContext.getRunAsNonRoot)
     assert(container.getStdin)
     assert(container.getTerminationMessagePath === "termination-message-path")
@@ -156,7 +155,7 @@ abstract class PodBuilderSuite extends SparkFunSuite {
           .withImagePullPolicy("Always")
           .withName("executor-container")
           .withNewResources()
-            .withLimits(Map("gpu" -> new QuantityBuilder().withAmount("1").build()).asJava)
+            .withLimits(Map("gpu" -> new Quantity("1")).asJava)
             .endResources()
           .withNewSecurityContext()
             .withRunAsNonRoot(true)
diff --git a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala
index 89e7ff9..c8c934b 100644
--- a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala
+++ b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala
@@ -18,7 +18,7 @@ package org.apache.spark.deploy.k8s.features
 
 import scala.collection.JavaConverters._
 
-import io.fabric8.kubernetes.api.model.{ContainerPort, ContainerPortBuilder, LocalObjectReferenceBuilder}
+import io.fabric8.kubernetes.api.model.{ContainerPort, ContainerPortBuilder, LocalObjectReferenceBuilder, Quantity}
 
 import org.apache.spark.{SparkConf, SparkFunSuite}
 import org.apache.spark.deploy.k8s.{KubernetesTestConf, SparkPod}
@@ -105,13 +105,13 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
 
     val resourceRequirements = configuredPod.container.getResources
     val requests = resourceRequirements.getRequests.asScala
-    assert(requests("cpu").getAmount === "2")
-    assert(requests("memory").getAmount === "456Mi")
+    assert(amountAndFormat(requests("cpu")) === "2")
+    assert(amountAndFormat(requests("memory")) === "456Mi")
     val limits = resourceRequirements.getLimits.asScala
-    assert(limits("memory").getAmount === "456Mi")
-    assert(limits("cpu").getAmount === "4")
+    assert(amountAndFormat(limits("memory")) === "456Mi")
+    assert(amountAndFormat(limits("cpu")) === "4")
     resources.foreach { case (k8sName, testRInfo) =>
-      assert(limits(k8sName).getAmount === testRInfo.count)
+      assert(amountAndFormat(limits(k8sName)) === testRInfo.count)
     }
 
     val driverPodMetadata = configuredPod.pod.getMetadata
@@ -140,7 +140,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
       .configurePod(basePod)
       .container.getResources
       .getRequests.asScala
-    assert(requests1("cpu").getAmount === "1")
+    assert(amountAndFormat(requests1("cpu")) === "1")
 
     // if spark.driver.cores is set it should be used
     sparkConf.set(DRIVER_CORES, 10)
@@ -148,7 +148,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
       .configurePod(basePod)
       .container.getResources
       .getRequests.asScala
-    assert(requests2("cpu").getAmount === "10")
+    assert(amountAndFormat(requests2("cpu")) === "10")
 
     // spark.kubernetes.driver.request.cores should be preferred over spark.driver.cores
     Seq("0.1", "100m").foreach { value =>
@@ -157,7 +157,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
         .configurePod(basePod)
         .container.getResources
         .getRequests.asScala
-      assert(requests3("cpu").getAmount === value)
+      assert(amountAndFormat(requests3("cpu")) === value)
     }
   }
 
@@ -203,7 +203,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
         mainAppResource = resource)
       val step = new BasicDriverFeatureStep(conf)
       val pod = step.configurePod(SparkPod.initialPod())
-      val mem = pod.container.getResources.getRequests.get("memory").getAmount()
+      val mem = amountAndFormat(pod.container.getResources.getRequests.get("memory"))
       val expected = (driverMem + driverMem * expectedFactor).toInt
       assert(mem === s"${expected}Mi")
 
@@ -218,4 +218,6 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
       .withContainerPort(portNumber)
       .withProtocol("TCP")
       .build()
+
+  private def amountAndFormat(quantity: Quantity): String = quantity.getAmount + quantity.getFormat
 }
diff --git a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStepSuite.scala b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStepSuite.scala
index f375b1f..da50372 100644
--- a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStepSuite.scala
+++ b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStepSuite.scala
@@ -128,10 +128,11 @@ class BasicExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
     val executor = step.configurePod(SparkPod.initialPod())
 
     assert(executor.container.getResources.getLimits.size() === 3)
-    assert(executor.container.getResources
-      .getLimits.get("memory").getAmount === "1408Mi")
+    assert(amountAndFormat(executor.container.getResources
+      .getLimits.get("memory")) === "1408Mi")
     gpuResources.foreach { case (k8sName, testRInfo) =>
-      assert(executor.container.getResources.getLimits.get(k8sName).getAmount === testRInfo.count)
+      assert(amountAndFormat(
+        executor.container.getResources.getLimits.get(k8sName)) === testRInfo.count)
     }
   }
 
@@ -151,8 +152,8 @@ class BasicExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
     assert(executor.container.getImage === EXECUTOR_IMAGE)
     assert(executor.container.getVolumeMounts.isEmpty)
     assert(executor.container.getResources.getLimits.size() === 1)
-    assert(executor.container.getResources
-      .getLimits.get("memory").getAmount === "1408Mi")
+    assert(amountAndFormat(executor.container.getResources
+      .getLimits.get("memory")) === "1408Mi")
 
     // The pod has no node selector, volumes.
     assert(executor.pod.getSpec.getNodeSelector.isEmpty)
@@ -201,7 +202,7 @@ class BasicExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
     val step = new BasicExecutorFeatureStep(newExecutorConf(), new SecurityManager(baseConf))
     val executor = step.configurePod(SparkPod.initialPod())
     // This is checking that basic executor + executorMemory = 1408 + 42 = 1450
-    assert(executor.container.getResources.getRequests.get("memory").getAmount === "1450Mi")
+    assert(amountAndFormat(executor.container.getResources.getRequests.get("memory")) === "1450Mi")
   }
 
   test("auth secret propagation") {
@@ -273,4 +274,6 @@ class BasicExecutorFeatureStepSuite extends SparkFunSuite with BeforeAndAfter {
     val expectedEnvs = defaultEnvs ++ additionalEnvVars ++ extraJavaOptsEnvs
     assert(containerEnvs === expectedEnvs)
   }
+
+  private def amountAndFormat(quantity: Quantity): String = quantity.getAmount + quantity.getFormat
 }
diff --git a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStepSuite.scala b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStepSuite.scala
index 8c430ee..3888062 100644
--- a/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStepSuite.scala
+++ b/resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/MountVolumesFeatureStepSuite.scala
@@ -79,7 +79,8 @@ class MountVolumesFeatureStepSuite extends SparkFunSuite {
     assert(configuredPod.pod.getSpec.getVolumes.size() === 1)
     val emptyDir = configuredPod.pod.getSpec.getVolumes.get(0).getEmptyDir
     assert(emptyDir.getMedium === "Memory")
-    assert(emptyDir.getSizeLimit.getAmount === "6G")
+    assert(emptyDir.getSizeLimit.getAmount ===  "6")
+    assert(emptyDir.getSizeLimit.getFormat === "G")
     assert(configuredPod.container.getVolumeMounts.size() === 1)
     assert(configuredPod.container.getVolumeMounts.get(0).getMountPath === "/tmp")
     assert(configuredPod.container.getVolumeMounts.get(0).getName === "testVolume")
@@ -101,7 +102,7 @@ class MountVolumesFeatureStepSuite extends SparkFunSuite {
     assert(configuredPod.pod.getSpec.getVolumes.size() === 1)
     val emptyDir = configuredPod.pod.getSpec.getVolumes.get(0).getEmptyDir
     assert(emptyDir.getMedium === "")
-    assert(emptyDir.getSizeLimit.getAmount === null)
+    assert(emptyDir.getSizeLimit === null)
     assert(configuredPod.container.getVolumeMounts.size() === 1)
     assert(configuredPod.container.getVolumeMounts.get(0).getMountPath === "/tmp")
     assert(configuredPod.container.getVolumeMounts.get(0).getName === "testVolume")
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala
index c35aa5c..2d90c06 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala
@@ -53,12 +53,8 @@ private[spark] trait DepsTestsSuite { k8sSuite: KubernetesSuite =>
     ).toArray
 
     val resources = Map(
-      "cpu" -> new QuantityBuilder()
-        .withAmount("1")
-        .build(),
-      "memory" -> new QuantityBuilder()
-        .withAmount("512M")
-        .build()
+      "cpu" -> new Quantity("1"),
+      "memory" -> new Quantity("512M")
     ).asJava
 
     new ContainerBuilder()
diff --git a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PVTestsSuite.scala b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PVTestsSuite.scala
index a7cb84e..86f8cdd 100644
--- a/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PVTestsSuite.scala
+++ b/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/PVTestsSuite.scala
@@ -45,7 +45,7 @@ private[spark] trait PVTestsSuite { k8sSuite: KubernetesSuite =>
         .withName("test-local-pv")
       .endMetadata()
       .withNewSpec()
-        .withCapacity(Map("storage" -> new QuantityBuilder().withAmount("1Gi").build()).asJava)
+        .withCapacity(Map("storage" -> new Quantity("1Gi")).asJava)
         .withAccessModes("ReadWriteOnce")
         .withPersistentVolumeReclaimPolicy("Retain")
         .withStorageClassName("test-local-storage")
@@ -72,8 +72,7 @@ private[spark] trait PVTestsSuite { k8sSuite: KubernetesSuite =>
         .withAccessModes("ReadWriteOnce")
         .withStorageClassName("test-local-storage")
         .withResources(new ResourceRequirementsBuilder()
-        .withRequests(Map("storage" -> new QuantityBuilder()
-          .withAmount("1Gi").build()).asJava).build())
+          .withRequests(Map("storage" -> new Quantity("1Gi")).asJava).build())
       .endSpec()
 
     kubernetesTestComponents


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org