Posted to mapreduce-user@hadoop.apache.org by Kevin <ke...@gmail.com> on 2014/11/18 00:38:08 UTC

Permissions issue with launching MR job from Oozie shell

I hope this question is relevant to the Hadoop mailing list. If it belongs
on the Oozie list instead, I apologize in advance.

I'm trying to schedule an HBase MapReduce bulkload preparation job with
Oozie. I need to use either the Oozie shell action or Java action since the
driver class needs to create the partition file, determine the number of
reducers, etc. I chose the shell action as my solution.

As user 'kevin', I submit and run my Oozie workflow (using the oozie client
command). I understand that Oozie executes the shell as the 'yarn' user, but
it appears that the user 'kevin' is being used to access the YARN staging
directory. The application master's container logs are below.

My main question is:
Why is the user 'kevin' being used to run the application master? (Or
perhaps it isn't, and I'm misunderstanding something.)

I am using CDH5.1.3 with YARN.

The script is simple:

#!/usr/bin/env bash
hadoop jar myjar.jar package.Driver -i "$INPUT" -o "$OUTPUT_DIR"
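
For what it's worth, one way I could confirm which Unix user the shell
action's container actually runs as would be to prepend a couple of
diagnostic lines to the script (just standard shell, nothing
Hadoop-specific; whether HADOOP_USER_NAME is set in my environment is an
open question, hence the check):

```shell
#!/usr/bin/env bash
# Diagnostic sketch: print the OS user the container runs as, and whether
# HADOOP_USER_NAME is set (it would override the Hadoop client identity).
echo "OS user: $(id -un)"
echo "HADOOP_USER_NAME=${HADOOP_USER_NAME:-<unset>}"
```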

In mapred-site.xml:

<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>/user</value>
</property>
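
My understanding (an assumption on my part, based on the setting above) is
that the AM staging path is built by appending the submitting user's name
and ".staging" to that root, so a job submitted as 'kevin' should stage
under /user/kevin/.staging rather than /user/yarn:

```shell
# Sketch of how I assume the AM staging path is derived from the config:
STAGING_ROOT=/user   # value of yarn.app.mapreduce.am.staging-dir
SUBMIT_USER=kevin    # user the job is submitted as
echo "${STAGING_ROOT}/${SUBMIT_USER}/.staging"
# -> /user/kevin/.staging
```

Yet the error below shows 'kevin' being denied EXECUTE on /user/yarn, as if
the yarn user's directory were being used instead.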

The error I am getting is:

2014-11-17 12:43:31,428 INFO [main]
org.apache.hadoop.mapreduce.JobSubmitter: Kind: RM_DELEGATION_TOKEN,
Service: 10.10.23.111:8032,10.10.23.112:8032, Ident: (owner=kevin,
renewer=oozie mr token, realUser=oozie, issueDate=1416246198904,
maxDate=1416850998904, sequenceNumber=162, masterKeyId=87)
...
...
...
2014-11-17 12:43:34,905 INFO [main] org.apache.hadoop.mapreduce.Job: Job
job_1415194788406_0050 failed with state FAILED due to: Application
application_1415194788406_0050 failed 2 times due to AM Container for
appattempt_1415194788406_0050_000002 exited with  exitCode: -1000 due to:
Permission denied: user=kevin, access=EXECUTE,
inode="/user/yarn":yarn:hadoop:drwx------
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5607)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3583)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:766)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:764)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)