Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/03/30 15:26:22 UTC

[GitHub] [spark] dongjoon-hyun commented on pull request #36010: [SPARK-38652][K8S] `uploadFileUri` should preserve file scheme

dongjoon-hyun commented on pull request #36010:
URL: https://github.com/apache/spark/pull/36010#issuecomment-1083284322


   Thank you, @LuciferYang, @dcoliversun, @wangyum. Merged to master/3.3.
   
   All tests including K8s IT passed.
   ```
   KubernetesSuite:
   - Run SparkPi with no resources
   - Run SparkPi with no resources & statefulset allocation
   - Run SparkPi with a very long application name.
   - Use SparkLauncher.NO_RESOURCE
   - Run SparkPi with a master URL without a scheme.
   - Run SparkPi with an argument.
   - Run SparkPi with custom labels, annotations, and environment variables.
   - All pods have the same service account by default
   - Run extraJVMOptions check on driver
   - Run SparkRemoteFileTest using a remote data file
   - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties
   - Run SparkPi with env and mount secrets.
   - Run PySpark on simple pi.py example
   - Run PySpark to test a pyfiles example
   - Run PySpark with memory customization
   - Run in client mode.
   - Start pod creation from template
   - SPARK-38398: Schedule pod creation from template
   - PVs with local hostpath storage on statefulsets
   - PVs with local hostpath and storageClass on statefulsets
   - PVs with local storage
   - Launcher client dependencies
   - SPARK-33615: Launcher client archives
   - SPARK-33748: Launcher python client respecting PYSPARK_PYTHON
   - SPARK-33748: Launcher python client respecting spark.pyspark.python and spark.pyspark.driver.python
   - Launcher python client dependencies using a zip file
   - Test basic decommissioning
   - Test basic decommissioning with shuffle cleanup
   - Test decommissioning with dynamic allocation & shuffle cleanups
   - Test decommissioning timeouts
   - SPARK-37576: Rolling decommissioning
   Run completed in 22 minutes, 16 seconds.
   Total number of tests run: 31
   Suites: completed 2, aborted 0
   Tests: succeeded 31, failed 0, canceled 0, ignored 0, pending 0
   All tests passed.
   ```
   
   To @dcoliversun: No, it should not. This PR aims to fix `uploadFileUri` itself for all usages. As the test case in the PR description shows, Apache Spark's `uploadFileUri` uses `Utils.resolveURI(uri)` to add a file scheme before invoking `uploadFileToHadoopCompatibleFS`.
   
   https://github.com/apache/spark/blob/6b29b28deffd11edd65b69e0f5c79ed51d483b66/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesUtils.scala#L314
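   For illustration only, a rough sketch of the scheme-resolution behavior described above (not the actual `Utils.resolveURI` implementation; the helper name `resolveUri` here is hypothetical): if the input string already carries a scheme such as `hdfs` or `s3a`, it is preserved, otherwise the string is treated as a local path and the `file` scheme is added.
   ```scala
   import java.net.URI
   import java.nio.file.Paths

   // Hypothetical sketch: keep an existing scheme, otherwise resolve
   // the string as a local path, which yields a file:// URI.
   def resolveUri(path: String): URI = {
     val uri = new URI(path)
     if (uri.getScheme != null) uri                 // scheme preserved as-is
     else Paths.get(path).toAbsolutePath.toUri      // "file" scheme added
   }

   println(resolveUri("hdfs://nn/tmp/a.jar").getScheme) // hdfs
   println(resolveUri("/tmp/a.jar").getScheme)          // file
   ```
   The bug fixed by this PR was that the scheme of the resolved URI was not preserved downstream; the sketch only shows the resolution step, not the upload path.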


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org
