Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/02/27 20:03:02 UTC

[GitHub] vanzin commented on a change in pull request #23546: [SPARK-23153][K8s] Support client dependencies with a Hadoop Compatible File System

URL: https://github.com/apache/spark/pull/23546#discussion_r260910739
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
 ##########
 @@ -318,9 +319,25 @@ private[spark] class SparkSubmit extends Logging {
         args.ivySettingsPath)
 
       if (!StringUtils.isBlank(resolvedMavenCoordinates)) {
-        args.jars = mergeFileLists(args.jars, resolvedMavenCoordinates)
-        if (args.isPython || isInternal(args.primaryResource)) {
-          args.pyFiles = mergeFileLists(args.pyFiles, resolvedMavenCoordinates)
 +        // In K8s client mode, when running in the driver, add resolved jars early since we
 +        // may need them at submit time, e.g. to download files from a Hadoop-compatible
 +        // filesystem such as S3. In that case the user might pass:
 +        // --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.6
 +        if (isKubernetesClient &&
 +          args.sparkProperties.getOrElse("spark.kubernetes.submitInDriver", "false").toBoolean) {
 
 Review comment:
   I'm a little confused, especially after reading the comment.
   
   In client mode, by definition, this code is always running in the driver.
   
   These are the scenarios where you get here:
   - cluster mode launcher
   - cluster mode driver (client mode + spark.kubernetes.submitInDriver=true)
   - client mode
   
   So the comment needs clarification along those lines. For the same reason, the `isKubernetesClient` variable you added is ambiguously named; I'd avoid that name and follow the list above when naming these things.
   
   You could also use `sparkConf.getBoolean` instead of `args.sparkProperties`.
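   
   To make the suggestion concrete, here is a minimal, self-contained sketch of what an unambiguously named check could look like. The object and method names (`SubmitScenario`, `isKubernetesClusterModeDriver`) are hypothetical, not the actual `SparkSubmit` code; only the `spark.kubernetes.submitInDriver` property key comes from the diff above:
   
   ```scala
   // Hypothetical sketch: distinguish the three scenarios listed above by name,
   // rather than with an ambiguous flag like `isKubernetesClient`.
   object SubmitScenario {
     def isKubernetesClusterModeDriver(
         master: String,
         deployMode: String,
         props: Map[String, String]): Boolean = {
       // In client deploy mode this code always runs in the driver; the
       // submitInDriver property marks the cluster-mode driver re-invocation.
       val isK8sClient = master.startsWith("k8s://") && deployMode == "client"
       isK8sClient &&
         props.getOrElse("spark.kubernetes.submitInDriver", "false").toBoolean
     }
   }
   ```
   
   With this, the cluster-mode launcher (`deployMode == "cluster"`) and plain client mode both return `false`, while the cluster-mode driver (client mode plus `spark.kubernetes.submitInDriver=true`) returns `true`. In the real code, reading the flag via `sparkConf.getBoolean` as suggested above would avoid the string parsing of `args.sparkProperties`.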

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org