Posted to hdfs-dev@hadoop.apache.org by "Fengdong Yu (JIRA)" <ji...@apache.org> on 2013/12/15 13:27:07 UTC
[jira] [Resolved] (HDFS-5670) FSPermission check is incorrect
[ https://issues.apache.org/jira/browse/HDFS-5670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Fengdong Yu resolved HDFS-5670.
-------------------------------
Resolution: Not A Problem
Sorry for my mistake.
> FSPermission check is incorrect
> -------------------------------
>
> Key: HDFS-5670
> URL: https://issues.apache.org/jira/browse/HDFS-5670
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client, namenode
> Affects Versions: 3.0.0, 2.2.0
> Reporter: Fengdong Yu
> Fix For: 3.0.0, 2.3.0
>
>
> FSPermission check is incorrect after a recent update in trunk.
> I submitted an MR job as root, but the whole output directory must be owned by root; otherwise it throws an exception:
> {code}
> [root@10 ~]# hadoop fs -ls /
> Found 1 items
> drwxr-xr-x - hadoop supergroup 0 2013-12-15 10:04 /user
> [root@10 ~]#
> [root@10 ~]# hadoop fs -ls /user
> Found 1 items
> drwxr-xr-x - root root 0 2013-12-15 10:04 /user/root
> {code}
> {code}
> [root@10 ~]# hadoop jar airui.jar /input /user/root/
> Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hadoop:supergroup:drwxr-xr-x
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)
> at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:161)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5410)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3236)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3190)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3174)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:708)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:514)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:605)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:932)
> {code}
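For context on why this was resolved as Not A Problem: HDFS applies POSIX-style permission semantics, so deleting /user/root requires WRITE access on its parent directory /user. Here /user is owned by hadoop:supergroup with mode drwxr-xr-x, and root is neither the owner nor a member of supergroup, so the check correctly falls through to the "other" class and denies WRITE. A minimal sketch of that owner/group/other mode check is below; the class and method names are illustrative only, not the actual FSPermissionChecker API.

```java
import java.util.Set;

// Simplified sketch of a POSIX-style permission check, modeled on the
// behavior shown in the stack trace above. Names are hypothetical.
public class ModeCheck {
    static final int READ = 4, WRITE = 2, EXEC = 1;

    // mode is the 9-bit rwxrwxrwx value, e.g. 0755 for drwxr-xr-x
    static boolean hasAccess(String user, Set<String> groups,
                             String owner, String group,
                             int mode, int requested) {
        int bits;
        if (user.equals(owner)) {
            bits = (mode >> 6) & 7;   // owner class
        } else if (groups.contains(group)) {
            bits = (mode >> 3) & 7;   // group class
        } else {
            bits = mode & 7;          // other class
        }
        return (bits & requested) == requested;
    }

    public static void main(String[] args) {
        // /user is hadoop:supergroup, drwxr-xr-x (0755); root is "other"
        boolean ok = hasAccess("root", Set.of("root"),
                               "hadoop", "supergroup", 0755, WRITE);
        System.out.println(ok ? "WRITE allowed" : "Permission denied");
    }
}
```

With the directory listing from the report, this evaluates root against the "other" bits (r-x), so WRITE is denied, matching the AccessControlException in the trace.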
--
This message was sent by Atlassian JIRA
(v6.1.4#6159)