Posted to user@hbase.apache.org by jhaobull <jh...@gmail.com> on 2014/06/11 08:38:14 UTC

secure bulk load problem?

Hi, everyone!


In SecureBulkLoadEndpoint#cleanupBulkLoad:

I would like to know why we create the staging dir when deleting it. If the dir already exists, HDFS can't make a new one.


I checked the HDFS NameNode log:


ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:xxx/xxxx@SANKUAI.COM (auth:KERBEROS) cause:org.apache.hadoop.security.AccessControlException: Permission denied


INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from xxx Call#159529 Retry#0: error: org.apache.hadoop.security.AccessControlException: Permission denied


----------------------------------------- 0.96.1.1 version -----------------------------------------
public void cleanupBulkLoad(RpcController controller,
                            CleanupBulkLoadRequest request,
                            RpcCallback<CleanupBulkLoadResponse> done) {
  try {
    getAccessController().preCleanupBulkLoad(env);
    fs.delete(createStagingDir(baseStagingDir,
        getActiveUser(),
        env.getRegion().getTableDesc().getTableName(),
        new Path(request.getBulkToken()).getName()),
        true);
    done.run(CleanupBulkLoadResponse.newBuilder().build());
  } catch (IOException e) {
    ResponseConverter.setControllerException(controller, e);
  }
  done.run(null);
}
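
For context: judging by the setPermission error in the NameNode log, createStagingDir does more than compute a path. In the 0.96-era source it also mkdirs the directory and sets permissions on it, which is why the cleanup path recreates the dir only to delete it, and why setPermission can be denied during cleanup. A rough sketch of what that helper presumably looks like (paraphrased; the fs field and the PERM_ALL_ACCESS constant are assumptions about the surrounding class):

private Path createStagingDir(Path baseDir, User user, TableName tableName,
                              String randomDir) throws IOException {
  // Paraphrased sketch of the 0.96-era helper; exact names may differ.
  Path p = new Path(baseDir, randomDir);
  fs.mkdirs(p, PERM_ALL_ACCESS);        // recreates the dir even on the cleanup path
  fs.setPermission(p, PERM_ALL_ACCESS); // the setPermission call that is denied
                                        // in the NameNode log above
  return p;
}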


Original message
From: jhaobull <jhaobull@gmail.com>
To: user@hbase.apache.org
Sent: Tuesday, June 10, 2014, 11:56
Subject: Re: permission




The exception was thrown by SecureBulkLoadEndpoint#cleanupBulkLoad.


However, I checked the /tmp/hbase-staging dir, and all the dirs were empty!


I also wonder why we create the staging dir when deleting it:


public void cleanupBulkLoad(RpcController controller,
                            CleanupBulkLoadRequest request,
                            RpcCallback<CleanupBulkLoadResponse> done) {
  try {
    getAccessController().preCleanupBulkLoad(env);
    fs.delete(createStagingDir(baseStagingDir,
        getActiveUser(),
        env.getRegion().getTableDesc().getTableName(),
        new Path(request.getBulkToken()).getName()),
        true);
    done.run(CleanupBulkLoadResponse.newBuilder().build());
  } catch (IOException e) {
    ResponseConverter.setControllerException(controller, e);
  }
  done.run(null);
}


Original message
From: Ted Yu <yuzhihong@gmail.com>
To: user@hbase.apache.org
Sent: Tuesday, June 10, 2014, 11:42
Subject: Re: permission


Without cleaning the staging directory, temp files would pile up on your HDFS, right?

Cheers

On Mon, Jun 9, 2014 at 8:25 PM, jhaobull <jhaobull@gmail.com> wrote:

  Thanks!

  This method only does some cleanup work, and I found that the data had been imported to HBase successfully, so for now I just ignore the error. Is that OK?

  Original message
  From: Ted Yu <yuzhihong@gmail.com>
  To: user@hbase.apache.org
  Cc: user@hbase.apache.org
  Sent: Tuesday, June 10, 2014, 10:03
  Subject: Re: permission

  There have been some fixes for secure bulk load, the latest of which is HBASE-11311. Are you able to try out, say, 0.98.3, which was released today?

  Cheers
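
To make the lifecycle concrete, here is a hedged sketch of the client side that drives prepare, load, and cleanup; cleanupBulkLoad runs even after a successful load, because each load gets its own staging dir under /tmp/hbase-staging that would otherwise accumulate. Class and method names follow the 0.96-era SecureBulkLoadClient and should be treated as assumptions for other versions:

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.SecureBulkLoadClient;

// Hedged sketch; API names are assumptions outside 0.96.x.
public class BulkLoadLifecycle {
  static void loadAndCleanUp(HTable table) throws Exception {
    SecureBulkLoadClient client = new SecureBulkLoadClient(table);
    // prepareBulkLoad creates the per-load staging dir and returns its
    // path as the bulk token (the same token cleanupBulkLoad parses).
    String bulkToken = client.prepareBulkLoad(table.getName());
    try {
      // ... move the HFiles into place here (e.g. via LoadIncrementalHFiles) ...
    } finally {
      // Deletes the per-load staging dir; skipping this leaves one
      // leftover dir in /tmp/hbase-staging per bulk load.
      client.cleanupBulkLoad(bulkToken);
    }
  }
}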