Posted to issues@hive.apache.org by "Daniel Dai (JIRA)" <ji...@apache.org> on 2018/08/08 23:35:00 UTC

[jira] [Commented] (HIVE-20344) PrivilegeSynchronizer for SBA might hit AccessControlException

    [ https://issues.apache.org/jira/browse/HIVE-20344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16574032#comment-16574032 ] 

Daniel Dai commented on HIVE-20344:
-----------------------------------

In the patch, I also include:
1. A new flag, hive.privilege.synchronizer; set it to false to disable PrivilegeSynchronizer
2. A correct way to check for the existence of SBA: the original code only checks the metastore.x parameter, not the compatible hive.metastore.x (a sketch follows this list)
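
For illustration, a minimal sketch of both points; the key suffix, the listener class name, and the default value of the flag are assumptions for the example, not the exact patch code:
{code}
import org.apache.hadoop.conf.Configuration;

// Hypothetical sketch of the two points above.
public class SbaDetectionSketch {

  // Point 2: look a metastore parameter up under both the "metastore.x"
  // key and its compatible "hive.metastore.x" alias.
  static String getMetastoreParam(Configuration conf, String suffix) {
    String value = conf.get("metastore." + suffix);
    return value != null ? value : conf.get("hive.metastore." + suffix);
  }

  // SBA is wired in as a metastore pre-event listener (see
  // AuthorizationPreEventListener in the stack trace below).
  static boolean isSbaConfigured(Configuration conf) {
    String listeners = getMetastoreParam(conf, "pre.event.listeners");
    return listeners != null && listeners.contains("AuthorizationPreEventListener");
  }

  // Point 1: the new flag; defaulting to true is an assumption here.
  static boolean synchronizerEnabled(Configuration conf) {
    return conf.getBoolean("hive.privilege.synchronizer", true);
  }
}
{code}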

> PrivilegeSynchronizer for SBA might hit AccessControlException
> --------------------------------------------------------------
>
>                 Key: HIVE-20344
>                 URL: https://issues.apache.org/jira/browse/HIVE-20344
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Daniel Dai
>            Assignee: Daniel Dai
>            Priority: Major
>         Attachments: HIVE-20344.1.patch
>
>
> If the "hive" user does not have privileges on the corresponding HDFS folders, PrivilegeSynchronizer won't be able to get the table's metadata because SBA prevents it. Here is a sample stack trace:
> {code}
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.security.AccessControlException: Permission denied: user=hive, access=EXECUTE, inode="/tmp/sba_is/sba_db":hrt_7:hrt_qa:dr--------
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:399)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:315)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:242)
>         at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkDefaultEnforcer(RangerHdfsAuthorizer.java:512)
>         at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:305)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:193)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1850)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1834)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1784)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:7767)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:2217)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1659)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
>         at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:424)
>         at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.checkPermissions(StorageBasedAuthorizationProvider.java:382)
>         at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:355)
>         at org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider.authorize(StorageBasedAuthorizationProvider.java:203)
>         at org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener.authorizeReadTable(AuthorizationPreEventListener.java:192)
>         ... 23 more
> {code}
> I simply skip the table if that happens. In practice, managed tables are owned by the "hive" user, so only external tables will be impacted. Users need to grant execute permission on the database folder and read permission on the table folders to the "hive" user if they want to query the information schema for tables whose permissions are granted only via SBA. A sketch of the skip logic follows.
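>
> For illustration only, a minimal sketch of the skip-on-denial pattern; syncTablePrivileges() and the surrounding class are hypothetical stand-ins, not the actual patch code:
> {code}
> import java.security.AccessControlException;
> import java.util.List;
> import org.apache.hadoop.hive.ql.metadata.HiveException;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> // Hypothetical sketch of the skip-on-denial behavior described above.
> public class PrivilegeSyncSketch {
>   private static final Logger LOG = LoggerFactory.getLogger(PrivilegeSyncSketch.class);
>
>   void syncAll(String dbName, List<String> tableNames) throws HiveException {
>     for (String tableName : tableNames) {
>       try {
>         syncTablePrivileges(dbName, tableName);  // may hit the SBA check
>       } catch (HiveException e) {
>         if (e.getCause() instanceof AccessControlException) {
>           // "hive" lacks permission on the table's HDFS folders under SBA;
>           // skip the table instead of failing the whole synchronizer run
>           LOG.warn("Skipping {}.{}: {}", dbName, tableName, e.getMessage());
>           continue;
>         }
>         throw e;
>       }
>     }
>   }
>
>   void syncTablePrivileges(String dbName, String tableName) throws HiveException {
>     // stand-in for fetching table metadata and syncing its privileges
>   }
> }
> {code}
> Skipping rather than failing keeps the synchronizer run going for the tables the "hive" user can read.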


