Posted to issues@hbase.apache.org by "Andrew Kyle Purtell (Jira)" <ji...@apache.org> on 2022/07/01 20:08:00 UTC

[jira] [Closed] (HBASE-16521) Restore operation would fail if the hbase.tmp.dir directory is absent or doesn't have proper permission

     [ https://issues.apache.org/jira/browse/HBASE-16521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Kyle Purtell closed HBASE-16521.
---------------------------------------

> Restore operation would fail if the hbase.tmp.dir directory is absent or doesn't have proper permission
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-16521
>                 URL: https://issues.apache.org/jira/browse/HBASE-16521
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>            Priority: Major
>              Labels: backup
>             Fix For: HBASE-7912
>
>         Attachments: 16521.v1.txt, 16521.v2.txt
>
>
> I ran the backup IT test and bumped into the following:
> {code}
> 2016-08-29 20:38:31,390 INFO  [main] mapreduce.Job: Job job_1472498400634_0004 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied:   user=hbase, access=WRITE, inode="/tmp/hbase-hbase/bulk_output-default-IntegrationTestBackupRestore.table1-1472503079471/_temporary/1":hdfs:hdfs:drwxr-xr-x
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
>   at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:307)
>   at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1827)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1811)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1794)
>   at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
>   at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4011)
>   at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1102)
>   at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:630)
> {code}
> Here is the related code in MapReduceRestoreService:
> {code}
>   private Path getBulkOutputDir(String tableName) throws IOException
>   {
>     Configuration conf = getConf();
>     FileSystem fs = FileSystem.get(conf);
>     // hbase.tmp.dir resolves to a local temp path such as /tmp/hbase-hbase,
>     // which may not exist on HDFS or be writable by the hbase user.
>     String tmp = conf.get("hbase.tmp.dir");
>     Path path = new Path(tmp + Path.SEPARATOR + "bulk_output-" + tableName + "-"
>         + EnvironmentEdgeManager.currentTime());
> {code}
> conf.get("hbase.tmp.dir") returned /tmp/hbase-hbase, which had not been created on HDFS, so the job failed the ancestor permission check when it tried to create the bulk output directory there.
> We should use hbase.fs.tmp.dir as the base directory instead, to avoid the permission error above.
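> A minimal sketch of that change (an illustration only, not the committed patch; the fallback to hbase.tmp.dir and the makeQualified() call are assumptions):
> {code}
>   private Path getBulkOutputDir(String tableName) throws IOException
>   {
>     Configuration conf = getConf();
>     FileSystem fs = FileSystem.get(conf);
>     // Prefer hbase.fs.tmp.dir, a staging location on the cluster filesystem
>     // that the hbase user is expected to be able to write to; fall back to
>     // hbase.tmp.dir only if it is unset.
>     String tmp = conf.get("hbase.fs.tmp.dir", conf.get("hbase.tmp.dir"));
>     Path path = new Path(tmp + Path.SEPARATOR + "bulk_output-" + tableName + "-"
>         + EnvironmentEdgeManager.currentTime());
>     // Qualify the path against the default filesystem so it resolves on HDFS
>     // rather than on the local filesystem.
>     return path.makeQualified(fs.getUri(), fs.getWorkingDirectory());
>   }
> {code}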



--
This message was sent by Atlassian Jira
(v8.20.10#820010)