Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/17 10:56:38 UTC

[GitHub] [spark] yaooqinn opened a new pull request #24120: [SPARK-27183][YARN]YarnConfiguration in ApplicationMaster should be cluster-specific

URL: https://github.com/apache/spark/pull/24120
 
 
   ## What changes were proposed in this pull request?
   
   The YarnConfiguration in ApplicationMaster should be specific to the cluster the application runs on. In a Kerberized YARN cluster with multiple HDFS federations, Spark applications write their credential files to the HDFS cluster determined by the client-side configuration, not to the default HDFS of the YARN cluster they run on.
   
   You may see an error like the following while the ApplicationMaster is updating tokens:
   ```
   java.lang.IllegalArgumentException: Wrong FS: hdfs://hz-cluster10/user/kyuubi/.sparkStaging/application_1552217598813_752210, expected: hdfs://hz-cluster7
   	at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:649)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:194)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:788)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
   	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
   	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
   	at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
   	at org.apache.spark.deploy.SparkHadoopUtil.listFilesSorted(SparkHadoopUtil.scala:267)
   	at org.apache.spark.deploy.yarn.security.AMCredentialRenewer.org$apache$spark$deploy$yarn$security$AMCredentialRenewer$$writeNewCredentialsToHDFS(AMCredentialRenewer.scala:210)
   	at org.apache.spark.deploy.yarn.security.AMCredentialRenewer$$anon$1.run(AMCredentialRenewer.scala:107)
   	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
   	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   	at java.lang.Thread.run(Thread.java:745)
   ```
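   The "Wrong FS" error above is raised by Hadoop's `FileSystem.checkPath`, which rejects any path whose URI scheme or authority differs from the filesystem's own URI. The standalone Scala sketch below (plain JDK, no Hadoop dependency; the `checkPath` helper is a simplified illustration, not the real Hadoop implementation) shows why an ApplicationMaster whose default filesystem comes from the client's `fs.defaultFS` (`hdfs://hz-cluster7`) rejects the staging path that lives on the YARN cluster's HDFS (`hdfs://hz-cluster10`):

   ```scala
   import java.net.URI

   // Simplified model of org.apache.hadoop.fs.FileSystem.checkPath: a
   // filesystem rejects paths whose scheme/authority differ from its own URI.
   def checkPath(fsUri: URI, path: URI): Unit = {
     val sameScheme = path.getScheme == null || path.getScheme == fsUri.getScheme
     val sameAuthority = path.getAuthority == null || path.getAuthority == fsUri.getAuthority
     if (!(sameScheme && sameAuthority)) {
       throw new IllegalArgumentException(s"Wrong FS: $path, expected: $fsUri")
     }
   }

   // The AM's default FileSystem was built from the client-side configuration
   // (fs.defaultFS = hdfs://hz-cluster7), but the staging directory lives on
   // the YARN cluster's HDFS (hdfs://hz-cluster10), so the check fails.
   val amFs = new URI("hdfs://hz-cluster7")
   val stagingPath = new URI("hdfs://hz-cluster10/user/kyuubi/.sparkStaging/app_1")

   try {
     checkPath(amFs, stagingPath)
   } catch {
     case e: IllegalArgumentException => println(e.getMessage)
     // prints: Wrong FS: hdfs://hz-cluster10/user/kyuubi/.sparkStaging/app_1, expected: hdfs://hz-cluster7
   }
   ```

   Building the YarnConfiguration from the cluster-side Hadoop configuration, as this PR proposes, makes both URIs refer to the same HDFS, so the check passes.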
   ## How was this patch tested?
   Manually.
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
