Posted to issues@spark.apache.org by "Manu Zhang (Jira)" <ji...@apache.org> on 2021/05/06 12:15:00 UTC

[jira] [Updated] (SPARK-35160) Spark application submitted despite failing to get Hive delegation token

     [ https://issues.apache.org/jira/browse/SPARK-35160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manu Zhang updated SPARK-35160:
-------------------------------
    Description: 
Currently, when running on YARN and failing to get a Hive delegation token, a Spark SQL application will still be submitted. Eventually, the application will fail when connecting to the Hive metastore without a valid delegation token.

Is there any reason for this design?

cc [~jerryshao] who originally implemented this in https://issues.apache.org/jira/browse/SPARK-14743

I'd propose failing immediately, as HadoopFSDelegationTokenProvider does.

 

Update:

After [https://github.com/apache/spark/pull/23418], HadoopFSDelegationTokenProvider no longer fails on non-fatal exceptions. However, the author changed the behavior only to keep it consistent with the other providers.
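The distinction under discussion (logging a warning and continuing vs. propagating the failure so submission aborts) can be sketched roughly as follows. This is a minimal illustration, not Spark's actual provider API; the class and method names here are hypothetical.

```java
// Hypothetical sketch of the two token-fetch policies being debated.
// Names are illustrative only, not Spark's DelegationTokenProvider API.
public class TokenFetchSketch {

    // Stand-in for a call that fails when the metastore is unreachable.
    static String fetchToken() {
        throw new RuntimeException("cannot connect to Hive metastore");
    }

    // Current behavior: swallow the non-fatal error and continue, so the
    // application is still submitted and only fails later, on its first
    // metastore access.
    static String obtainLenient() {
        try {
            return fetchToken();
        } catch (RuntimeException e) { // non-fatal: logged and ignored
            System.err.println("warn: failed to get token: " + e.getMessage());
            return null;
        }
    }

    // Proposed behavior: let the exception propagate so spark-submit
    // fails immediately instead of launching a doomed application.
    static String obtainStrict() {
        return fetchToken(); // exception reaches the caller
    }
}
```

With the lenient policy the submission proceeds with a `null` token; with the strict policy the same failure surfaces at submit time, which is the fail-fast behavior proposed above.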

  was:
Currently, when running on YARN and failing to get a Hive delegation token, a Spark SQL application will still be submitted. Eventually, the application will fail when connecting to the Hive metastore without a valid delegation token.

Is there any reason for this design?

cc [~jerryshao] who originally implemented this in https://issues.apache.org/jira/browse/SPARK-14743

I'd propose failing immediately, as HadoopFSDelegationTokenProvider does.


> Spark application submitted despite failing to get Hive delegation token
> ------------------------------------------------------------------------
>
>                 Key: SPARK-35160
>                 URL: https://issues.apache.org/jira/browse/SPARK-35160
>             Project: Spark
>          Issue Type: Improvement
>          Components: Security
>    Affects Versions: 3.1.1
>            Reporter: Manu Zhang
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org