Posted to issues@sentry.apache.org by "Na Li (JIRA)" <ji...@apache.org> on 2018/09/13 16:41:00 UTC

[jira] [Updated] (SENTRY-2357) Revert SENTRY-2309 as it causes an HMS issue for pig

     [ https://issues.apache.org/jira/browse/SENTRY-2357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Na Li updated SENTRY-2357:
--------------------------
    Fix Version/s:     (was: 2.1.0)

> Revert SENTRY-2309 as it causes an HMS issue for pig
> -----------------------------------------------------
>
>                 Key: SENTRY-2357
>                 URL: https://issues.apache.org/jira/browse/SENTRY-2357
>             Project: Sentry
>          Issue Type: Bug
>          Components: Sentry
>    Affects Versions: 2.1.0
>            Reporter: Steve Moist
>            Assignee: Arjun Mishra
>            Priority: Major
>
> When running CREATE TABLE from Pig, the HMS operation fails with an HDFS permission error:
>  
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=systest, access=WRITE, inode="/user/hive/warehouse":hive:hive:drwxrwx--x
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
>         at org.apache.sentry.hdfs.SentryINodeAttributesProvider$SentryPermissionEnforcer.checkPermission(SentryINodeAttributesProvider.java:86)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1846)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1830)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1789)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3142)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1726)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
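
For reference, the denied operation above is an HDFS WRITE check on /user/hive/warehouse performed on behalf of the submitting user (systest). A minimal sketch of how that single check could be reproduced outside of Pig/HMS is shown below; it assumes a standard Hadoop client classpath and that it is run as the affected user, and the class name WarehouseAccessCheck is only illustrative, not part of Sentry or this patch.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.security.AccessControlException;

    public class WarehouseAccessCheck {
        public static void main(String[] args) throws Exception {
            // Path taken from the AccessControlException in the issue description.
            Path warehouse = new Path("/user/hive/warehouse");
            FileSystem fs = FileSystem.get(new Configuration());
            try {
                // Asks the NameNode to evaluate WRITE access for the current user,
                // which should go through the same permission-check path
                // (FSPermissionChecker / SentryINodeAttributesProvider) as mkdirs().
                fs.access(warehouse, FsAction.WRITE);
                System.out.println("WRITE access granted on " + warehouse);
            } catch (AccessControlException e) {
                System.out.println("WRITE access denied on " + warehouse + ": " + e.getMessage());
            }
        }
    }

Because FileSystem#access delegates the decision to the NameNode, a denial here reproduces the same AccessControlException seen in the stack trace without involving Pig or HMS.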



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)