LoadIncrementalHFiles always run with "hbase" user
Posted to user@hbase.apache.org by anil gupta <an...@gmail.com> on 2013/01/24 02:09:39 UTC
Hi All,
I am generating HFiles by running the bulk loader with a custom mapper.
Once the MR job that generates the HFiles finishes, I trigger the loading
of the HFiles into HBase with the following Java code:

ToolRunner.run(new LoadIncrementalHFiles(HBaseConfiguration.create()),
    new String[]{conf.get("importtsv.bulk.output"), otherArgs[0]});
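
For completeness, here is a minimal self-contained sketch of that driver.
The class and variable names are just for illustration; the HFile output
directory comes from the importtsv.bulk.output property set on the job
configuration, and the table name is the first program argument:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.util.ToolRunner;

// Illustrative driver name, not the actual class from my job.
public class BulkLoadDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Directory of HFiles produced by the MR job, e.g. /tmp/hfile_txn_subset
    String hfileDir = conf.get("importtsv.bulk.output");
    String tableName = args[0]; // target HBase table
    // LoadIncrementalHFiles implements Tool, so ToolRunner can drive it.
    int rc = ToolRunner.run(new LoadIncrementalHFiles(conf),
        new String[]{hfileDir, tableName});
    System.exit(rc);
  }
}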
However, while loading I am getting permission errors: the loading is
being attempted by the "hbase" user even though the process (Java program)
was started by "root". This seems like a bug, since the loading of data
into HBase should also be done as "root". Is there any reason for only
using the "hbase" user while loading?
The HBase cluster is not secured. I am using 0.92.1 on a fully distributed
cluster. Please help me resolve this error.
Here is the error message:
13/01/23 17:02:16 WARN mapreduce.LoadIncrementalHFiles: Skipping non-directory hdfs://ihubcluster/tmp/hfile_txn_subset/_SUCCESS
13/01/23 17:02:16 INFO hfile.CacheConfig: Allocating LruBlockCache with maximum size 241.7m
13/01/23 17:02:16 INFO mapreduce.LoadIncrementalHFiles: Trying to load hfile=hdfs://ihubcluster/tmp/hfile_txn_subset/t/344d58edc7d74e7b9a35ef5e1bf906cc first=\x00\x0F(\xC7F\xAD2\xB4\x00\x00\x02\x87\xE1\xB9\x9F\x18\x00\x0C\x1E\x1A\x00\x00\x01<j\x14\x95d last=\x00\x12\xA4\xC6$IP\x9D\x00\x00\x02\x88+\x11\xD2 \x00\x0C\x1E\x1A\x00\x00\x01<j\x14\x04A
13/01/23 17:02:55 ERROR mapreduce.LoadIncrementalHFiles: Encountered unrecoverable error from region server
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Wed Jan 23 17:02:16 PST 2013, org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles$3@7b4189d0, org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/tmp/hfile_txn_subset/t":root:hadoop:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:138)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkParentAccess(FSNamesystem.java:4231)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameToInternal(FSNamesystem.java:2347)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(FSNamesystem.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.rename(NameNodeRpcServer.java:579)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.rename(ClientNamenodeProtocolServerSideTranslatorPB.java:374)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42612)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

    at sun.reflect.GeneratedConstructorAccessor21.newInstance(Unknown Source)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
    at org.apache.hadoop.hdfs.DFSClient.rename(DFSClient.java:1237)
    at org.apache.hadoop.hdfs.DistributedFileSystem.rename(DistributedFileSystem.java:294)
    at org.apache.hadoop.hbase.regionserver.StoreFile.rename(StoreFile.java:640)
    at org.apache.hadoop.hbase.regionserver.Store.bulkLoadHFile(Store.java:420)
    at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:2803)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFiles(HRegionServer.java:2417)
    at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1336)
--
Thanks & Regards,
Anil Gupta
Re: LoadIncrementalHFiles always run with "hbase" user
Posted by anil gupta <an...@gmail.com>.
Hi Harsh,
Thanks for your response. If I understand you correctly, then it is the
RS, not the process (Java program) that I started, that is trying to
write to the directory, hence the error. Right?
If that's the case, then it makes sense.
~Anil
On Thu, Jan 24, 2013 at 6:30 AM, Harsh J <ha...@cloudera.com> wrote:
> The exception is remote and seems to indicate that your RS is running
> as the 'hbase' user. The RS will attempt a mv/rename operation when you
> provide it a bulkloadable file, and that rename is performed as the user
> the RS itself runs as; hence this error.
>
> --
> Harsh J
>
--
Thanks & Regards,
Anil Gupta
Re: LoadIncrementalHFiles always run with "hbase" user
Posted by Harsh J <ha...@cloudera.com>.
The exception is remote and seems to indicate that your RS is running
as the 'hbase' user. The RS will attempt a mv/rename operation when you
provide it a bulkloadable file, and that rename is performed as the user
the RS itself runs as; hence this error.
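
Since your cluster is not secured, the usual workaround is to make the
staging directory writable by the 'hbase' user before kicking off the
load, e.g. with "hadoop fs -chmod -R 777 /tmp/hfile_txn_subset", or to do
the same programmatically. Below is a rough sketch of the latter; the
class name is just for illustration, the path is hardcoded from your log,
and world-writable permissions are only for demonstration:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Opens up the bulk-load staging dir so the RS (running as 'hbase')
// can rename the HFiles into the table directory.
public class OpenStagingDirPermissions {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path staging = new Path("/tmp/hfile_txn_subset"); // path from your log
    chmodRecursive(fs, staging, new FsPermission((short) 0777));
  }

  // Equivalent of 'hadoop fs -chmod -R' for a single permission value.
  private static void chmodRecursive(FileSystem fs, Path p, FsPermission perm)
      throws java.io.IOException {
    fs.setPermission(p, perm);
    if (fs.getFileStatus(p).isDir()) {
      for (FileStatus child : fs.listStatus(p)) {
        chmodRecursive(fs, child.getPath(), perm);
      }
    }
  }
}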
--
Harsh J