Posted to commits@spark.apache.org by do...@apache.org on 2020/06/03 09:20:10 UTC

[spark] branch branch-3.0 updated: [SPARK-31881][K8S][TESTS][FOLLOWUP] Activate hadoop-2.7 by default in K8S IT

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 4e9eafd  [SPARK-31881][K8S][TESTS][FOLLOWUP] Activate hadoop-2.7 by default in K8S IT
4e9eafd is described below

commit 4e9eafd75864e82ec9db2743bf8f7f06faeba528
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Wed Jun 3 02:17:25 2020 -0700

    [SPARK-31881][K8S][TESTS][FOLLOWUP] Activate hadoop-2.7 by default in K8S IT
    
    ### What changes were proposed in this pull request?
    
    This PR aims to activate the `hadoop-2.7` profile by default in the Kubernetes IT module.
    
    ### Why are the changes needed?
    
    While SPARK-31881 added Hadoop 3.2 support, one default test dependency was moved into the `hadoop-2.7` profile. The build works when we give either `hadoop-2.7` or `hadoop-3.2`, but it fails when we give no profile at all; the pom sketch after the **AFTER** output below shows why.
    
    **BEFORE**
    ```
    $ mvn test-compile -pl resource-managers/kubernetes/integration-tests -Pkubernetes-integration-tests
    ...
    [ERROR] [Error] /APACHE/spark-merge/resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/DepsTestsSuite.scala:23:
    object amazonaws is not a member of package com
    ```
    
    **AFTER**
    ```
    $ mvn test-compile -pl resource-managers/kubernetes/integration-tests -Pkubernetes-integration-tests
    ...
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    ```
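    
    The failure happens because the AWS SDK test dependency is declared inside the `hadoop-2.7` profile, so it lands on the test classpath only while that profile is active. A minimal sketch of the pattern (the artifactId here is illustrative, not copied from the actual pom.xml):
    ```
    <profile>
      <id>hadoop-2.7</id>
      <dependencies>
        <!-- Resolved only while the hadoop-2.7 profile is active; without it,
             the com.amazonaws import in DepsTestsSuite.scala cannot compile. -->
        <dependency>
          <groupId>com.amazonaws</groupId>
          <artifactId>aws-java-sdk</artifactId>
          <scope>test</scope>
        </dependency>
      </dependencies>
    </profile>
    ```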
    
    The default-activated profile is overridden when we give `hadoop-3.2`, as the two `help:active-profiles` outputs below show (a short sketch after them illustrates the Maven behavior).
    ```
    $ mvn help:active-profiles -Pkubernetes-integration-tests
    ...
    Active Profiles for Project 'org.apache.spark:spark-kubernetes-integration-tests_2.12:jar:3.1.0-SNAPSHOT':
    
    The following profiles are active:
    
     - hadoop-2.7 (source: org.apache.spark:spark-kubernetes-integration-tests_2.12:3.1.0-SNAPSHOT)
     - kubernetes-integration-tests (source: org.apache.spark:spark-parent_2.12:3.1.0-SNAPSHOT)
     - test-java-home (source: org.apache.spark:spark-parent_2.12:3.1.0-SNAPSHOT)
    ```
    ```
    $ mvn help:active-profiles -Pkubernetes-integration-tests -Phadoop-3.2
    ...
    Active Profiles for Project 'org.apache.spark:spark-kubernetes-integration-tests_2.12:jar:3.1.0-SNAPSHOT':
    
    The following profiles are active:
    
     - hadoop-3.2 (source: org.apache.spark:spark-kubernetes-integration-tests_2.12:3.1.0-SNAPSHOT)
     - hadoop-3.2 (source: org.apache.spark:spark-parent_2.12:3.1.0-SNAPSHOT)
     - kubernetes-integration-tests (source: org.apache.spark:spark-parent_2.12:3.1.0-SNAPSHOT)
     - test-java-home (source: org.apache.spark:spark-parent_2.12:3.1.0-SNAPSHOT)
    ```
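    
    This works because Maven deactivates an `activeByDefault` profile as soon as any other profile in the same POM is activated explicitly, for example via `-P` on the command line. A minimal sketch of the resulting profile layout (simplified from the actual pom.xml):
    ```
    <profiles>
      <!-- Active when no profile is given; automatically deactivated
           when another profile, e.g. hadoop-3.2, is selected with -P. -->
      <profile>
        <id>hadoop-2.7</id>
        <activation>
          <activeByDefault>true</activeByDefault>
        </activation>
      </profile>
      <profile>
        <id>hadoop-3.2</id>
      </profile>
    </profiles>
    ```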
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Pass the Jenkins UT and IT.
    
    Currently, all Jenkins builds and tests (UT & IT) pass even without this patch, so this should be verified manually with the command above.
    
    The `hadoop-3.2` K8s IT also passed, as shown below.
    ```
    KubernetesSuite:
    - Run SparkPi with no resources
    - Run SparkPi with a very long application name.
    - Use SparkLauncher.NO_RESOURCE
    - Run SparkPi with a master URL without a scheme.
    - Run SparkPi with an argument.
    - Run SparkPi with custom labels, annotations, and environment variables.
    - All pods have the same service account by default
    - Run extraJVMOptions check on driver
    - Run SparkRemoteFileTest using a remote data file
    - Run SparkPi with env and mount secrets.
    - Run PySpark on simple pi.py example
    - Run PySpark with Python2 to test a pyfiles example
    - Run PySpark with Python3 to test a pyfiles example
    - Run PySpark with memory customization
    - Run in client mode.
    - Start pod creation from template
    - PVs with local storage
    - Launcher client dependencies
    - Test basic decommissioning
    Run completed in 8 minutes, 33 seconds.
    Total number of tests run: 19
    Suites: completed 2, aborted 0
    Tests: succeeded 19, failed 0, canceled 0, ignored 0, pending 0
    All tests passed.
    ```
    
    Closes #28716 from dongjoon-hyun/SPARK-31881-2.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
    (cherry picked from commit e5b9b862e6011f37dfc0f646d6c3ae8e545e2cd6)
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 resource-managers/kubernetes/integration-tests/pom.xml | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/resource-managers/kubernetes/integration-tests/pom.xml b/resource-managers/kubernetes/integration-tests/pom.xml
index bda936c..6c258a7 100644
--- a/resource-managers/kubernetes/integration-tests/pom.xml
+++ b/resource-managers/kubernetes/integration-tests/pom.xml
@@ -186,6 +186,9 @@
   <profiles>
     <profile>
       <id>hadoop-2.7</id>
+      <activation>
+        <activeByDefault>true</activeByDefault>
+      </activation>
       <dependencies>
         <dependency>
           <groupId>com.amazonaws</groupId>


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org