Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/09 21:40:32 UTC

[GitHub] [spark] xkrogen commented on pull request #37949: [SPARK-40504][YARN] Make yarn appmaster load config from client

xkrogen commented on PR #37949:
URL: https://github.com/apache/spark/pull/37949#issuecomment-1309412598

   I'm a bit confused about why this change is necessary. `yarn.Client` already gathers all Hadoop config files under `HADOOP_CONF_DIR` (and `SPARK_CONF_DIR`) and uploads them, placing them on the classpath of all YARN containers:
   https://github.com/apache/spark/blob/5600bef0ee6149ebc1abcf4c9c9b2991553ca3de/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala#L823-L842
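   (For context, roughly what that staging step amounts to — a hypothetical, simplified sketch, not the actual `Client.scala` code:)
   ```scala
   // Hypothetical sketch of the staging described above: collect the *.xml files under
   // HADOOP_CONF_DIR on the client (SPARK_CONF_DIR contents are gathered similarly) so
   // they can go into the conf archive that lands on every container's classpath.
   import java.io.File

   val hadoopConfFiles: Seq[File] =
     sys.env.get("HADOOP_CONF_DIR").toSeq
       .map(new File(_))
       .filter(_.isDirectory)
       .flatMap(_.listFiles())
       .filter(f => f.isFile && f.getName.endsWith(".xml"))
   // Client then adds these to the archive it uploads for the AM and executors.
   ```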
   
   So when the `new Configuration` object is created within `SparkHadoopUtil.newConfiguration`, it will already have access to the `yarn-site.xml` from your client side. That should override whatever configs come from the cluster side, since Spark puts the cluster-side Hadoop entries at the end of the container classpath.
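   (As a sanity check, something like the following run inside the AM — a hypothetical debugging snippet, not existing Spark code — would show which physical `yarn-site.xml` the classpath actually resolves:)
   ```scala
   import org.apache.hadoop.yarn.conf.YarnConfiguration

   // YarnConfiguration loads yarn-site.xml by name from the classpath, and the first
   // copy found wins, so this shows whether the client-side copy shadowed the cluster one.
   val conf = new YarnConfiguration()
   println("yarn-site.xml resolved from: " + conf.getResource("yarn-site.xml"))
   println("yarn.resourcemanager.hostname = " + conf.get("yarn.resourcemanager.hostname"))
   ```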
   
   Unless your `yarn-site.xml` is in `YARN_CONF_DIR` instead of `HADOOP_CONF_DIR`, I guess?
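   (If that's the case, a quick client-side check like this hypothetical snippet would confirm which directory actually holds it:)
   ```scala
   import java.io.File

   // Hypothetical client-side check: which conf dir env var, if any, points at a
   // directory that contains yarn-site.xml?
   for (envVar <- Seq("HADOOP_CONF_DIR", "YARN_CONF_DIR"); dir <- sys.env.get(envVar)) {
     val f = new File(dir, "yarn-site.xml")
     println(s"$envVar=$dir -> yarn-site.xml present: ${f.isFile}")
   }
   ```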


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

