Posted to dev@oozie.apache.org by "Peter Cseh (JIRA)" <ji...@apache.org> on 2019/05/03 10:09:00 UTC

[jira] [Commented] (OOZIE-3478) Oozie needs execute permission on the submitting user's home directory

    [ https://issues.apache.org/jira/browse/OOZIE-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16832407#comment-16832407 ] 

Peter Cseh commented on OOZIE-3478:
-----------------------------------

Good catch!
I think we should use {{UserGroupInformationService}} here as well, just like we do in [HadoopAccessorService|https://github.com/apache/oozie/blob/master/core/src/main/java/org/apache/oozie/service/HadoopAccessorService.java].
{{createRemoteUser}} is meant for the applications we submit to YARN, where we only have the user's delegation tokens. This code runs in the Oozie server, so we should use the 'oozie' user's proxyuser privileges instead.
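
Roughly the distinction I mean, as an untested sketch (it assumes the server is kerberos-logged-in as 'oozie' with proxyuser privileges configured; class and method names are placeholders):
{noformat}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

class UgiSketch {
    // In the launched YARN application we only hold the user's delegation
    // tokens, so createRemoteUser() is the right choice there:
    UserGroupInformation inLauncher(String user) {
        return UserGroupInformation.createRemoteUser(user);
    }

    // In the Oozie server we should impersonate the submitting user via
    // proxy-user rights instead of acting as 'oozie' directly:
    FileStatus inServer(String user, Configuration conf, Path p) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.createProxyUser(
                user, UserGroupInformation.getLoginUser());
        return ugi.doAs((PrivilegedExceptionAction<FileStatus>) () ->
                // Create the FileSystem inside doAs() so the RPC connection
                // authenticates as the proxied user, not as 'oozie'.
                FileSystem.get(p.toUri(), conf).getFileStatus(p));
    }
}
{noformat}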

It might be worth moving the whole call into HadoopAccessorService (if you can find a good name for the function). I'll leave that call to you, though.
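
Something along these lines, as a rough sketch only; the helper name {{resolvePathAsUser}} and its placement are placeholders I made up, not a committed API:
{noformat}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.oozie.service.Services;
import org.apache.oozie.service.UserGroupInformationService;

// Hypothetical helper that could live in HadoopAccessorService.
class HadoopAccessorServiceSketch {
    public Path resolvePathAsUser(String user, Configuration conf, Path path)
            throws IOException, InterruptedException {
        UserGroupInformation ugi = Services.get()
                .get(UserGroupInformationService.class).getProxyUser(user);
        // Resolve the path while impersonating the submitting user, so the
        // NameNode checks that user's permissions instead of 'oozie''s.
        return ugi.doAs((PrivilegedExceptionAction<Path>) () ->
                FileSystem.get(path.toUri(), conf).resolvePath(path));
    }
}
{noformat}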

> Oozie needs execute permission on the submitting user's home directory
> ----------------------------------------------------------------------
>
>                 Key: OOZIE-3478
>                 URL: https://issues.apache.org/jira/browse/OOZIE-3478
>             Project: Oozie
>          Issue Type: Bug
>          Components: action, security
>    Affects Versions: 5.1.0
>            Reporter: Andras Salamon
>            Assignee: Andras Salamon
>            Priority: Major
>         Attachments: OOZIE-3478-01-wip.patch
>
>
> On a secure cluster, the oozie user needs execute permission on the submitting user's home directory. The bug affects multiple actions (probably all that are based on JavaActionExecutor). The easiest way to reproduce it is with a shell action whose {{workflow.xml}} contains the following action:
> {noformat}<action name="shell-node">
>         <shell xmlns="uri:oozie:shell-action:1.0">
>             <resource-manager>${resourceManager}</resource-manager>
>             <name-node>${nameNode}</name-node>
>             <configuration>
>                 <property>
>                     <name>mapred.job.queue.name</name>
>                     <value>${queueName}</value>
>                 </property>
>             </configuration>
>             <exec>test.sh</exec>
>             <file>/user/systest/test.sh#test.sh</file>
>             <capture-output/>
>         </shell>
>         <ok to="check-output"/>
>         <error to="fail"/>
>     </action>
> {noformat}
> If the directory has the following permissions:
> {noformat}drwx------   - systest supergroup          0 2019-04-16 08:19 /user/systest
> {noformat}
> then running the workflow gives JA009 error code with the following exception:
> {noformat}ozie-oozi-W@shell-node] Error starting action [shell-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: Permission denied: user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> ]
> org.apache.oozie.action.ActionExecutorException: JA009: Permission denied: user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>         at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:469)
>         at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:443)
>         at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1103)
>         at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1589)
>         at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:243)
>         at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
>         at org.apache.oozie.command.XCommand.call(XCommand.java:291)
>         at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:363)
>         at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:292)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=oozie, access=EXECUTE, inode="/user/systest":systest:supergroup:drwx------
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:316)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:243)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:605)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1804)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkTraverse(FSDirectory.java:1822)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.resolvePath(FSDirectory.java:674)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:112)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3060)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1151)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>         at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
>         at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1445)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1355)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>         at com.sun.proxy.$Proxy34.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875)
>         at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>         at com.sun.proxy.$Proxy35.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1624)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1495)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1492)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1507)
>         at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:931)
>         at org.apache.oozie.util.ClasspathUtils.addToClasspathIfNotJar(ClasspathUtils.java:183)
>         at org.apache.oozie.util.ClasspathUtils.setupClasspath(ClasspathUtils.java:73)
>         at org.apache.oozie.action.hadoop.JavaActionExecutor.setEnvironmentVariables(JavaActionExecutor.java:1355)
>         at org.apache.oozie.action.hadoop.JavaActionExecutor.createAppSubmissionContext(JavaActionExecutor.java:1175)
>         at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1090)
>         ... 11 more
> {noformat}


