Posted to issues@ambari.apache.org by "Andrew Onischuk (JIRA)" <ji...@apache.org> on 2016/05/26 16:48:13 UTC

[jira] [Assigned] (AMBARI-16844) Create directory for hbase user when hbase is deployed

     [ https://issues.apache.org/jira/browse/AMBARI-16844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Onischuk reassigned AMBARI-16844:
----------------------------------------

    Assignee: Andrew Onischuk

> Create directory for hbase user when hbase is deployed
> ------------------------------------------------------
>
>                 Key: AMBARI-16844
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16844
>             Project: Ambari
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Andrew Onischuk
>         Attachments: AMBARI-16844.patch
>
>
> HBase backups fail if there is no /user/{hbase_user} directory in HDFS.
> {code}
> 2016-05-18 00:05:41,051 ERROR [ProcedureExecutorThread-1] snapshot.ExportSnapshot: Snapshot export failed
> org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/user/hbase/.staging":hdfs:hdfs:drwxr-xr-x
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1813)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1797)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1780)
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4002)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1098)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:644)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2268)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2264)
> 	at java.security.AccessController.doPrivileged(Native Method)
> {code}
> The /user/{hbase_user} directory should be created automatically when HBase is deployed, where hbase_user is the configured name of the HBase service user.
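> The sketch below is only an illustration of the HDFS operations such a fix has to perform (it is not the attached patch): create the user directory as the HDFS superuser and hand ownership to the hbase user. In Ambari stack scripts this would normally be expressed with the HdfsResource resource from resource_management; the subprocess-based stand-in, and the user/group/mode values, are assumptions for illustration.
> {code}
> # Hypothetical sketch: create /user/{hbase_user} in HDFS and chown it to the
> # hbase user so jobs can write paths like /user/hbase/.staging.
> # Run as the HDFS superuser (e.g. via "su - hdfs").
> import subprocess
>
> def create_hbase_user_dir(hbase_user="hbase", hbase_group="hadoop", mode="755"):
>     user_dir = "/user/%s" % hbase_user
>     # Create the directory if it does not exist yet.
>     subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", user_dir])
>     # Give ownership to the hbase user and set sane permissions.
>     subprocess.check_call(["hdfs", "dfs", "-chown",
>                            "%s:%s" % (hbase_user, hbase_group), user_dir])
>     subprocess.check_call(["hdfs", "dfs", "-chmod", mode, user_dir])
>
> if __name__ == "__main__":
>     create_hbase_user_dir()
> {code}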



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)