Posted to issues@spark.apache.org by "Shixiong Zhu (JIRA)" <ji...@apache.org> on 2017/06/07 23:41:18 UTC

[jira] [Updated] (SPARK-21008) Streaming applications read stale credentials file when recovering from checkpoint.

     [ https://issues.apache.org/jira/browse/SPARK-21008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shixiong Zhu updated SPARK-21008:
---------------------------------
    Component/s:     (was: Structured Streaming)
                 DStreams

> Streaming applications read stale credentials file when recovering from checkpoint.
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-21008
>                 URL: https://issues.apache.org/jira/browse/SPARK-21008
>             Project: Spark
>          Issue Type: Bug
>          Components: DStreams
>    Affects Versions: 1.6.3, 2.0.2, 2.1.1, 2.2.0
>            Reporter: Xing Shi
>
> On a security (Kerberos) enabled cluster, streaming applications periodically renew HDFS delegation tokens and save them in the {{<userDir>/.sparkStaging/<applicationId>/}} directory on HDFS.
> The path of the credentials file is written into the checkpoint and reloaded with the *old applicationId* when the application restarts, even though the application has been assigned a new id.
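> The sketch below is not the actual Spark internals and the helper name is hypothetical; it only illustrates the idea that the recovered path would need the old applicationId substituted with the new one before it is used:
> {code:title=RemapCredentialsPath.scala}
> import org.apache.hadoop.fs.Path
>
> // Hypothetical helper: rewrite a credentials path recovered from the checkpoint
> // so that it points at the current application's staging directory, e.g.
> //   .../.sparkStaging/application_1496384469444_0035/credentials-<uuid>
> //    -> .../.sparkStaging/application_1496384469444_0036/credentials-<uuid>
> def remapCredentialsPath(stalePath: String, oldAppId: String, newAppId: String): Path = {
>   new Path(stalePath.replace(oldAppId, newAppId))
> }
> {code}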
> This issue can be reproduced by restarting a checkpoint-enabled streaming application on a kerberized cluster.
> The application runs well at first - but with thousands of {{java.io.FileNotFoundException}} logged - and finally fails due to token expiration.
> The log file is something like this:
> {code:title=the_first_run.log}
> 17/06/07 14:52:06 INFO executor.CoarseGrainedExecutorBackend: Will periodically update credentials from: hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035/credentials-19a7c11e-8c93-478c-ab0a-cdbfae5b2ae5
> 17/06/07 14:52:06 INFO security.CredentialUpdater: Scheduling credentials refresh from HDFS in 92263 ms.
> {code}
> {code:title=after_restart.log}
> 17/06/07 15:11:24 INFO executor.CoarseGrainedExecutorBackend: Will periodically update credentials from: hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035/credentials-19a7c11e-8c93-478c-ab0a-cdbfae5b2ae5
> ...
> 17/06/07 15:12:24 WARN yarn.YarnSparkHadoopUtil: Error while attempting to list files from application staging dir
> java.io.FileNotFoundException: File hdfs://nameservice1/user/xxx/.sparkStaging/application_1496384469444_0035 does not exist.
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:697)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:105)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:755)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:751)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:751)
>         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
>         at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
>         at org.apache.spark.deploy.SparkHadoopUtil.listFilesSorted(SparkHadoopUtil.scala:257)
>         at org.apache.spark.deploy.yarn.security.CredentialUpdater.org$apache$spark$deploy$yarn$security$CredentialUpdater$$updateCredentialsIfRequired(CredentialUpdater.scala:72)
>         at org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply$mcV$sp(CredentialUpdater.scala:53)
>         at org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply(CredentialUpdater.scala:53)
>         at org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1$$anonfun$run$1.apply(CredentialUpdater.scala:53)
>         at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1962)
>         at org.apache.spark.deploy.yarn.security.CredentialUpdater$$anon$1.run(CredentialUpdater.scala:53)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
> Notice that the applicationId after restart is application_1496384469444_{color:red}0036{color}, but the application still attempts to read credentials from 0035's directory.
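> For illustration only, a simple check along these lines (a sketch, not existing Spark code) would expose the mismatch by comparing the applicationId embedded in the recovered credentials path with the id of the running application:
> {code:title=StaleCredentialsCheck.scala}
> // Sketch of a staleness check: extract the applicationId embedded in the
> // recovered credentials path and compare it with the current application's id.
> def isStaleCredentialsPath(credentialsPath: String, currentAppId: String): Boolean = {
>   val appIdPattern = "application_\\d+_\\d+".r
>   appIdPattern.findFirstIn(credentialsPath).exists(_ != currentAppId)
> }
> {code}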
> I have recently been using Spark 1.6 on my cluster and reproduced this issue with Spark 1.6.3 and 2.1.1, but it should affect all versions from 1.5.x to the current master (2.3.x).


