Posted to issues@spark.apache.org by "SaintBacchus (JIRA)" <ji...@apache.org> on 2015/06/28 09:44:04 UTC

[jira] [Created] (SPARK-8688) Hadoop Configuration has to disable client cache when writing or reading delegation tokens.

SaintBacchus created SPARK-8688:
-----------------------------------

             Summary: Hadoop Configuration has to disable client cache when writing or reading delegation tokens.
                 Key: SPARK-8688
                 URL: https://issues.apache.org/jira/browse/SPARK-8688
             Project: Spark
          Issue Type: Bug
          Components: YARN
    Affects Versions: 1.5.0
            Reporter: SaintBacchus


In the classes *AMDelegationTokenRenewer* and *ExecutorDelegationTokenUpdater*, Spark writes and reads the delegation token credentials.
But if the client cache is not disabled via *fs.hdfs.impl.disable.cache*, Spark will reuse a cached FileSystem instance (which still holds the old token) to upload or download the credentials file.
Then, once the old token expires, Spark can no longer authenticate to HDFS to get or put the file.
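For illustration, here is a minimal sketch (not the actual Spark patch) of how the client cache can be bypassed when touching the token file; *freshFileSystem* is a hypothetical helper name, and the only assumption is the documented scheme-specific key fs.<scheme>.impl.disable.cache:

{code}
import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

// Hypothetical helper: clone the Hadoop Configuration and disable the
// client cache for the target scheme, so FileSystem.get returns a fresh
// instance that picks up the renewed delegation tokens instead of a
// cached instance still carrying the expired token.
def freshFileSystem(conf: Configuration, uri: URI): FileSystem = {
  val noCacheConf = new Configuration(conf)
  // Scheme-specific key, e.g. fs.hdfs.impl.disable.cache for hdfs:// URIs.
  noCacheConf.setBoolean(s"fs.${uri.getScheme}.impl.disable.cache", value = true)
  FileSystem.get(uri, noCacheConf)
}
{code}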

(I only tested over a short period with the configuration:
dfs.namenode.delegation.token.renew-interval=3min
dfs.namenode.delegation.token.max-lifetime=10min
I'm not sure whether that matters.
 )
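As a sketch only, the test values above could be applied programmatically like this; both NameNode keys take milliseconds, so 3 min = 180000 ms and 10 min = 600000 ms:

{code}
import org.apache.hadoop.conf.Configuration

// Hypothetical test configuration mirroring the values described above.
val testConf = new Configuration()
testConf.setLong("dfs.namenode.delegation.token.renew-interval", 3 * 60 * 1000L)
testConf.setLong("dfs.namenode.delegation.token.max-lifetime", 10 * 60 * 1000L)
{code}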



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org