Posted to hdfs-dev@hadoop.apache.org by "Zsombor Gegesy (JIRA)" <ji...@apache.org> on 2017/08/11 13:33:00 UTC

[jira] [Resolved] (HDFS-11924) FSPermissionChecker.checkTraverse doesn't pass FsAction access properly

     [ https://issues.apache.org/jira/browse/HDFS-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zsombor Gegesy resolved HDFS-11924.
-----------------------------------
    Resolution: Invalid

You are right, the problem is that Ranger doesn't implement the traversal checks properly; its current behavior happens to be enough to work with Hadoop 2.7.3. Hopefully, after [RANGER-1707|https://issues.apache.org/jira/browse/RANGER-1707] is merged, it will become a compliant service.

> FSPermissionChecker.checkTraverse doesn't pass FsAction access properly
> -----------------------------------------------------------------------
>
>                 Key: HDFS-11924
>                 URL: https://issues.apache.org/jira/browse/HDFS-11924
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: security
>    Affects Versions: 2.8.0
>            Reporter: Zsombor Gegesy
>              Labels: hdfs, hdfspermission
>         Attachments: 0001-HDFS-11924-Pass-FsAction-to-the-external-AccessContr.patch
>
>
> In 2.7.1, during file access check, the AccessControlEnforcer is called with the access parameter filled with FsAction values.
> A thread dump in this case:
> {code}
> 	FSPermissionChecker.checkPermission(INodesInPath, boolean, FsAction, FsAction, FsAction, FsAction, boolean) line: 189	
> 	FSDirectory.checkPermission(FSPermissionChecker, INodesInPath, boolean, FsAction, FsAction, FsAction, FsAction, boolean) line: 1698	
> 	FSDirectory.checkPermission(FSPermissionChecker, INodesInPath, boolean, FsAction, FsAction, FsAction, FsAction) line: 1682	
> 	FSDirectory.checkPathAccess(FSPermissionChecker, INodesInPath, FsAction) line: 1656	
> 	FSNamesystem.appendFileInternal(FSPermissionChecker, INodesInPath, String, String, boolean, boolean) line: 2668	
> 	FSNamesystem.appendFileInt(String, String, String, boolean, boolean) line: 2985	
> 	FSNamesystem.appendFile(String, String, String, EnumSet<CreateFlag>, boolean) line: 2952	
> 	NameNodeRpcServer.append(String, String, EnumSetWritable<CreateFlag>) line: 653	
> 	ClientNamenodeProtocolServerSideTranslatorPB.append(RpcController, ClientNamenodeProtocolProtos$AppendRequestProto) line: 421	
> 	ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(Descriptors$MethodDescriptor, RpcController, Message) line: not available	
> 	ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(RPC$Server, String, Writable, long) line: 616	
> 	ProtobufRpcEngine$Server(RPC$Server).call(RPC$RpcKind, String, Writable, long) line: 969	
> 	Server$Handler$1.run() line: 2049	
> 	Server$Handler$1.run() line: 2045	
> 	AccessController.doPrivileged(PrivilegedExceptionAction<T>, AccessControlContext) line: not available [native method]	
> 	Subject.doAs(Subject, PrivilegedExceptionAction<T>) line: 422	
> 	UserGroupInformation.doAs(PrivilegedExceptionAction<T>) line: 1657	
> {code}
> However, in 2.8.0 this value is changed to null, because FSPermissionChecker.checkTraverse(FSPermissionChecker pc, INodesInPath iip, boolean resolveLink) couldn't pass the required information, so it simply uses 'null'.
> This is a regression between 2.7.1 and 2.8.0, because an external AccessControlEnforcer can't work properly.
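The behavior described above can be sketched as follows. This is a simplified illustration with hypothetical stand-in types, not Hadoop's real FsAction/AccessControlEnforcer API: the point is only that on 2.8.0 a plugin may receive access == null for a pure traversal check, and a compliant enforcer should treat that as an EXECUTE check on the directory rather than dereferencing it.

```java
public class TraverseCheckSketch {

    // Stand-in for org.apache.hadoop.fs.permission.FsAction (rwx bitmask).
    enum FsAction {
        NONE(0), EXECUTE(1), WRITE(2), READ(4), READ_EXECUTE(5), ALL(7);
        final int bits;
        FsAction(int bits) { this.bits = bits; }
        boolean implies(FsAction that) { return (bits & that.bits) == that.bits; }
    }

    // A fragile enforcer: assumes access is never null. That assumption held on
    // 2.7.1 (access was always filled in), but breaks on 2.8.0 traverse checks.
    static boolean fragileCheck(FsAction granted, FsAction access) {
        return granted.implies(access); // NullPointerException when access == null
    }

    // A compliant enforcer: a null access means a pure traversal check,
    // i.e. EXECUTE on the directory component.
    static boolean compliantCheck(FsAction granted, FsAction access) {
        FsAction needed = (access == null) ? FsAction.EXECUTE : access;
        return granted.implies(needed);
    }

    public static void main(String[] args) {
        // 2.7.1-style call: access is filled in, both enforcers agree.
        System.out.println(fragileCheck(FsAction.READ_EXECUTE, FsAction.READ));
        // 2.8.0-style traverse check: access == null, only the compliant one works.
        System.out.println(compliantCheck(FsAction.READ_EXECUTE, null));
        try {
            fragileCheck(FsAction.READ_EXECUTE, null);
        } catch (NullPointerException e) {
            System.out.println("fragile enforcer failed on traverse check");
        }
    }
}
```

This mirrors the fix direction mentioned for RANGER-1707: the external enforcer, not the NameNode, has to tolerate the null access parameter.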



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org