Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/20 19:21:07 UTC

[GitHub] rvesse opened a new pull request #23846: [SPARK-26729][K8S] Make image names under test configurable

URL: https://github.com/apache/spark/pull/23846
 
 
   ## What changes were proposed in this pull request?
   
    Allow specifying system properties to customise the names of the images used in integration testing. This is useful if your CI/CD pipeline or policy requires a different naming format.
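
    As a rough illustration of the idea (a hedged sketch, not the code in this patch), the resolution logic can be as simple as reading each system property with the existing hardcoded name as the fallback. The property keys below match the ones used on the command line in the testing section; the defaults are an assumption based on Spark's current naming conventions:

    ```scala
    // Sketch only, not the actual patch: resolve an image name from a
    // system property, falling back to the existing hardcoded default.
    object TestImageNames {
      private def imageName(prop: String, default: String): String =
        sys.props.getOrElse(prop, default)

      // Defaults assume Spark's current convention: spark, spark-py, spark-r
      val jvmImage: String = imageName("spark.kubernetes.test.jvmImage", "spark")
      val pyImage: String  = imageName("spark.kubernetes.test.pyImage", "spark-py")
      val rImage: String   = imageName("spark.kubernetes.test.rImage", "spark-r")
    }
    ```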
   
    This is one part of addressing SPARK-26729; I plan to submit a follow-up patch that will also make the names configurable when using `docker-image-tool.sh`.
   
   ## How was this patch tested?
   
    Ran the integration tests against custom images generated by our CI/CD pipeline that do not follow Spark's existing hardcoded naming conventions, using the new system properties to override the image names appropriately:
   
   ```
   mvn clean integration-test -pl :spark-kubernetes-integration-tests_${SCALA_VERSION} \
               -Pkubernetes -Pkubernetes-integration-tests \
               -P${SPARK_HADOOP_PROFILE} -Dhadoop.version=${HADOOP_VERSION} \
               -Dspark.kubernetes.test.sparkTgz=${TARBALL} \
               -Dspark.kubernetes.test.imageTag=${TAG} \
               -Dspark.kubernetes.test.imageRepo=${REPO} \
               -Dspark.kubernetes.test.namespace=${K8S_NAMESPACE} \
               -Dspark.kubernetes.test.kubeConfigContext=${K8S_CONTEXT} \
               -Dspark.kubernetes.test.deployMode=${K8S_TEST_DEPLOY_MODE} \
               -Dspark.kubernetes.test.jvmImage=apache-spark \
               -Dspark.kubernetes.test.pyImage=apache-spark-py \
               -Dspark.kubernetes.test.rImage=apache-spark-r \
               -Dtest.include.tags=k8s
   ...
   [INFO] --- scalatest-maven-plugin:1.0:test (integration-test) @ spark-kubernetes-integration-tests_2.12 ---
   Discovery starting.
   Discovery completed in 230 milliseconds.
   Run starting. Expected test count is: 15
   KubernetesSuite:
   - Run SparkPi with no resources
   - Run SparkPi with a very long application name.
   - Use SparkLauncher.NO_RESOURCE
   - Run SparkPi with a master URL without a scheme.
   - Run SparkPi with an argument.
   - Run SparkPi with custom labels, annotations, and environment variables.
   - Run extraJVMOptions check on driver
   - Run SparkRemoteFileTest using a remote data file
   - Run SparkPi with env and mount secrets.
   - Run PySpark on simple pi.py example
   - Run PySpark with Python2 to test a pyfiles example
   - Run PySpark with Python3 to test a pyfiles example
   - Run PySpark with memory customization
   - Run in client mode.
   - Start pod creation from template
   Run completed in 8 minutes, 33 seconds.
   Total number of tests run: 15
   Suites: completed 2, aborted 0
   Tests: succeeded 15, failed 0, canceled 0, ignored 0, pending 0
   All tests passed.
   ```
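
    For clarity on how the name overrides interact with the existing `imageRepo` and `imageTag` properties, here is a hedged sketch of how a full image reference could be assembled; the helper name and example values are illustrative, not taken from the patch:

    ```scala
    // Illustrative only: how repo, image name, and tag could combine into
    // the full reference passed as spark.kubernetes.container.image.
    object ImageRef {
      def fullImageRef(repo: String, name: String, tag: String): String =
        s"$repo/$name:$tag"
    }

    // ImageRef.fullImageRef("registry.example/ci", "apache-spark", "nightly")
    //   => "registry.example/ci/apache-spark:nightly"
    ```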
   
