Posted to issues@hbase.apache.org by "Himanshu Vashishtha (JIRA)" <ji...@apache.org> on 2013/09/11 22:44:51 UTC

[jira] [Updated] (HBASE-9509) Fix HFile V1 Detector to handle AccessControlException for non-existent files

     [ https://issues.apache.org/jira/browse/HBASE-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Himanshu Vashishtha updated HBASE-9509:
---------------------------------------

    Attachment: HBase-9509.patch

Simple fix; TestUpgradeTo96 passes.
I also ran the patched tool against the HDFS deployment that was raising the exception, and it completed cleanly.
                
> Fix HFile V1 Detector to handle AccessControlException for non-existent files
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-9509
>                 URL: https://issues.apache.org/jira/browse/HBASE-9509
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.96.0
>            Reporter: Himanshu Vashishtha
>             Fix For: 0.98.0, 0.96.0
>
>         Attachments: HBase-9509.patch
>
>
> On some Hadoop versions, fs.exists() throws an AccessControlException if there is a non-searchable inode in the file path, while versions such as 2.1.0-beta simply return false.
> This jira is to fix the HFile V1 detector tool so that it avoids making such calls (a sketch of the guarded lookup follows the stack trace below).
> See the exception below, raised when running the tool on one such Hadoop version:
> {code}
> ERROR util.HFileV1Detector: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=EXECUTE, inode="/hbase/.META./.tableinfo.0000000001":hbase:supergroup:-rw-r--r--
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:187)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:150)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5141)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5123)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkTraverse(FSNamesystem.java:5102)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3265)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:719)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:692)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59628)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2036)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2034)
> {code}
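> A minimal sketch of the guarded-lookup idea, assuming the fix catches the exception rather than pre-checking permissions. The helper name existsQuietly and the wrapping class are hypothetical illustrations, not taken from the attached patch:
> {code}
> import java.io.IOException;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.security.AccessControlException;
> 
> public class GuardedExists {
>   // Hypothetical helper: treat an AccessControlException from fs.exists()
>   // the same way Hadoop 2.1.0-beta behaves, i.e. as "path not visible".
>   static boolean existsQuietly(FileSystem fs, Path path) throws IOException {
>     try {
>       return fs.exists(path);
>     } catch (AccessControlException ace) {
>       // A non-searchable inode denies EXECUTE during path traversal on some
>       // Hadoop versions; for detection purposes the path is as good as absent.
>       return false;
>     }
>   }
> }
> {code}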

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira