Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/03/20 17:42:41 UTC

[jira] [Commented] (SPARK-19970) Table owner should be USER instead of PRINCIPAL in kerberized clusters

    [ https://issues.apache.org/jira/browse/SPARK-19970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933152#comment-15933152 ] 

Apache Spark commented on SPARK-19970:
--------------------------------------

User 'dongjoon-hyun' has created a pull request for this issue:
https://github.com/apache/spark/pull/17363

> Table owner should be USER instead of PRINCIPAL in kerberized clusters
> ----------------------------------------------------------------------
>
>                 Key: SPARK-19970
>                 URL: https://issues.apache.org/jira/browse/SPARK-19970
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0, 2.0.1, 2.0.2, 2.1.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>             Fix For: 2.2.0
>
>
> In a kerberized Hadoop cluster, when Spark creates tables, the table owner is filled in with the PRINCIPAL string instead of the USER name. This is inconsistent with Hive and causes problems when using ROLEs in Hive, so it should be fixed. (A short sketch of the principal-versus-user distinction follows the examples below.)
> *BEFORE*
> {code}
> scala> sql("create table t(a int)").show
> scala> sql("desc formatted t").show(false)
> ...
> |Owner:                      |spark@EXAMPLE.COM                                         |       |
> {code}
> *AFTER*
> {code}
> scala> sql("create table t(a int)").show
> scala> sql("desc formatted t").show(false)
> ...
> |Owner:                      |spark                                         |       |
> {code}
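The difference between the BEFORE and AFTER outputs comes down to which name is taken from the Kerberos login. Below is a minimal sketch, assuming the standard Hadoop UserGroupInformation API; it illustrates the distinction only and is not the actual Spark patch (see the pull request above for that):

{code}
import org.apache.hadoop.security.UserGroupInformation

// On a kerberized cluster the current login carries the full principal,
// e.g. "spark@EXAMPLE.COM"; the short name strips the realm suffix.
val ugi = UserGroupInformation.getCurrentUser

val principal = ugi.getUserName       // e.g. "spark@EXAMPLE.COM" (the BEFORE owner value)
val shortName = ugi.getShortUserName  // e.g. "spark"             (the AFTER owner value)

println(s"principal = $principal, short user name = $shortName")
{code}

Recording the short user name as the table owner keeps the metastore entry consistent with Hive, so ROLE grants keyed on user names continue to apply.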



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org