Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/10/08 05:44:16 UTC

[jira] [Resolved] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore

     [ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-23790.
----------------------------------
    Resolution: Incomplete

> proxy-user failed connecting to a kerberos configured metastore
> ---------------------------------------------------------------
>
>                 Key: SPARK-23790
>                 URL: https://issues.apache.org/jira/browse/SPARK-23790
>             Project: Spark
>          Issue Type: Bug
>          Components: Mesos
>    Affects Versions: 2.3.0
>            Reporter: Stavros Kontopoulos
>            Priority: Major
>              Labels: bulk-closed
>
> This appeared at a customer site while trying to integrate with a kerberized HDFS cluster.
> This can easily be fixed with the proposed fix [here|https://github.com/apache/spark/pull/17333]; the problem was first reported [here|https://issues.apache.org/jira/browse/SPARK-19995] for YARN.
> The other option is to add the delegation tokens to the current user's UGI, as in [here|https://github.com/apache/spark/pull/17335]. The latter fixes the problem but leads to a failure when someone uses a HadoopRDD, because HadoopRDD uses FileInputFormat to compute the splits, which in turn consults the local ticket cache via TokenCache.obtainTokensForNamenodes (a minimal sketch of this approach appears after the quoted description below). Eventually this fails with:
> {quote}Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:5896)
> {quote}
> This implies that the security mode is SIMPLE and the Hadoop libs there are not aware of Kerberos.
> Related to this, the workaround that was decided on was to [trick|https://github.com/apache/spark/blob/a33655348c4066d9c1d8ad2055aadfbc892ba7fd/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L795-L804] Hadoop (a hedged sketch of that kind of workaround also follows below).
>  
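For reference, a minimal sketch of the "add the delegation tokens to the current user's UGI" approach mentioned in the description, using only standard Hadoop APIs (Credentials, TokenCache, UserGroupInformation). The helper name and its arguments are made up; this is not the code from the linked PRs:

{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.security.TokenCache
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

// Hypothetical helper: ask the NameNode(s) for HDFS delegation tokens for the
// given paths and attach them to the current user's UGI, so that code running
// as a proxy user can authenticate without a local Kerberos ticket cache.
def addHdfsTokensToCurrentUser(conf: Configuration, paths: Array[Path]): Unit = {
  val creds = new Credentials()
  // Same call FileInputFormat ends up making while computing splits; per the
  // description above, if the client-side Hadoop config is in SIMPLE mode this
  // fails with "Delegation Token can be issued only with kerberos or web
  // authentication".
  TokenCache.obtainTokensForNamenodes(creds, paths, conf)
  UserGroupInformation.getCurrentUser.addCredentials(creds)
}
{code}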

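And a hedged sketch of what "tricking" Hadoop into Kerberos mode can look like in general: forcing hadoop.security.authentication to kerberos before running the proxy-user code. This is an illustration with standard Hadoop APIs and a made-up proxy-user name, not a reproduction of the linked SparkSubmit lines:

{code:scala}
import java.security.PrivilegedExceptionAction

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

// Tell the Hadoop client libraries that authentication is Kerberos, even if
// the local configuration would otherwise default to SIMPLE.
val conf = new Configuration()
conf.set("hadoop.security.authentication", "kerberos")
UserGroupInformation.setConfiguration(conf)

// Impersonate the target user on top of the real (logged-in) Kerberos user.
val proxyUgi = UserGroupInformation.createProxyUser(
  "analyst",                          // hypothetical proxy-user name
  UserGroupInformation.getLoginUser)  // the real kerberos-authenticated user

// Run the metastore / token-obtaining code as the proxy user.
proxyUgi.doAs(new PrivilegedExceptionAction[Unit] {
  override def run(): Unit = {
    // e.g. connect to the Hive metastore or obtain HDFS delegation tokens here
  }
})
{code}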


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
