Posted to user@hadoop.apache.org by Prashant Kommireddi <pr...@gmail.com> on 2013/06/18 19:54:28 UTC

DFS Permissions on Hadoop 2.x

Hello,

We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
question around disabling dfs permissions on the latter version. For some
reason, setting the following config does not seem to work

<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>

Any other configs that might be needed for this?
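For reference, a quick sanity check (assuming the stock Hadoop 2.x CLI) is to
ask for the effective value on the NameNode host, since getconf reads the
local config files:

    hdfs getconf -confKey dfs.permissions.enabled

This should print "false" if the override really is in the NameNode's
hdfs-site.xml; the NameNode also needs a restart to pick up the change.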

Here is the stacktrace.

2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020,
call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
10.0.53.131:24059: error:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
        at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
        at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
        at
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
        at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
        at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
        at
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
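Note the checkOwner and checkTraverse frames above: chmod/chown/chgrp always
go through a permission check in HDFS regardless of dfs.permissions.enabled
(this is called out in the HDFS permissions guide). As a purely illustrative
way to reproduce the same failure from the shell, running something like the
following as the non-owner (smehta, with the path taken from the trace) should
fail with a similar AccessControlException:

    hdfs dfs -chmod 770 /mapred/history/done_intermediate/smehta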

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks Chuan for the details. We are planning to go route 2 (pre-creating
and opening up perms on the dirs) as a workaround, though I feel it's not a
good place to be if one has to do that. There are a few different
directories that need to be manually created with relaxed perms to get
around this (history, staging, tmp). Doing it outside of the regular
deployment process is not the best idea.
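A rough sketch of that pre-creation step (the paths come from this thread plus
the default staging dir; the owner/group and the 1777 modes are illustrative
assumptions, not canonical values), run as the HDFS superuser or service
account during deployment:

    # history "intermediate done" dir used by the AM and the JobHistoryServer
    hdfs dfs -mkdir -p /mapred/history/done_intermediate
    hdfs dfs -chown -R mapred:hadoop /mapred/history
    hdfs dfs -chmod -R 1777 /mapred/history/done_intermediate
    # staging dir (assuming the default yarn.app.mapreduce.am.staging-dir)
    hdfs dfs -mkdir -p /tmp/hadoop-yarn/staging
    hdfs dfs -chmod -R 1777 /tmp/hadoop-yarn/staging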



On Thu, Jun 20, 2013 at 12:54 AM, Chuan Liu <ch...@microsoft.com> wrote:

>  Hi Prashant,
>
> We also hit this issue before.
>
> 1) We want to run a Hadoop cluster with permissions disabled.
>
> 2) The job history server, yarn, and hdfs daemons run under a special
> service user account, e.g. 'hadoop'.
>
> 3) Users submit jobs to the cluster under their own accounts.
>
> For the above scenario, submitting jobs fails in Hadoop 2.0 while it
> succeeds in Hadoop 1.0.
>
> In our investigation, the regression happened in the job client and job
> history server, not on the HDFS side.
>
> The root cause is that the job client copies jar files to the staging area
> configured by "yarn.app.mapreduce.am.staging-dir".
>
> The client will also set the permissions on the directory and jar files to
> some pre-configured values, i.e. JobSubmissionFiles.JOB_DIR_PERMISSION and
> JobSubmissionFiles.JOB_FILE_PERMISSION.
>
> On the HDFS side, even if 'dfs.permissions.enabled' is set to false,
> changing permissions is still checked (setPermission requires the caller
> to be the owner or a superuser).
>
> (This is the same in both Hadoop v1 and v2.)
>
> JobHistoryServer also plays a part in this, as its staging directory
> happens to be at the same location as "yarn.app.mapreduce.am.staging-dir".
>
> It will create directories recursively with permissions set to
> HISTORY_STAGING_DIR_PERMISSIONS.
>
> JobHistoryServer runs under the special service user account, while the
> JobClient runs as the user who submits jobs.
>
> This leads to a failure in setPermission() during job submission.
>
> There are multiple possible mitigations. Here are two examples:
>
> 1) Configure all users submitting jobs into the supergroup.
>
> 2) During setup, pre-create the staging directory and chown it to the
> correct user.
>
> In our case, we took approach 1) because the security check on HDFS was
> not very important for our scenarios (part of the reason why we can
> disable HDFS permissions in the first place).
>
> Hope this can help you solve your problem!
>
> -Chuan
>
> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> Sent: Wednesday, June 19, 2013 1:32 PM
> To: user@hadoop.apache.org
> Subject: Re: DFS Permissions on Hadoop 2.x
>
> How can we resolve the issue in the case I have mentioned? File an MR JIRA
> so that MR does not try to set permissions when dfs.permissions.enabled is
> set to false?
>
> The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense
> w.r.t. HDFS behavior (thanks for that). But I am still unsure how we can
> get around the fact that certain permissions are set on shared directories
> by a certain user that disallow any other users from using them. Or am I
> missing something entirely?
>
> On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>
> wrote:
>
> Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation.  However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>
> Thanks guys, I will follow the discussion there.
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>
> Yes, and I think this was led by the Snapshot feature.
>
> I've filed a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
> This is an HDFS bug. Like all other methods that check for permissions
> being enabled, the client call of setPermission should check it as
> well. It does not do that currently, and I believe it should be a NOP
> in such a case. Please do file a JIRA (and reference the ID here to
> close the loop)!
>
>
> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
> <pr...@gmail.com> wrote:
> > Looks like the jobs fail only on the first attempt and pass thereafter.
> > Failure occurs while setting perms on "intermediate done directory".
> Here is
> > what I think is happening:
> >
> > 1. Intermediate done dir is (ideally) created as part of deployment (for
> eg,
> > /mapred/history/done_intermediate)
> >
> > 2. When a MR job is run, it creates a user dir within intermediate done
> dir
> > (/mapred/history/done_intermediate/username)
> >
> > 3. After this dir is created, the code tries to set permissions on this
> user
> > dir. In doing so, it checks for EXECUTE permissions on not just its
> parent
> > (/mapred/history/done_intermediate) but across all dirs to the top-most
> > level (/mapred). This fails as "/mapred" does not have execute
> permissions
> > for the "Other" users.
> >
> > 4. On successive job runs, since the user dir already exists
> > (/mapred/history/done_intermediate/username) it no longer tries to create
> > and set permissions again. And the job completes without any perm errors.
> >
> > This is the code within JobHistoryEventHandler that's doing it.
> >
> >     //Check for the existence of intermediate done dir.
> >     Path doneDirPath = null;
> >     try {
> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
> >       // This directory will be in a common location, or this may be a cluster
> >       // meant for a single user. Creating based on the conf. Should ideally
> >       // be created by the JobHistoryServer or as part of deployment.
> >       if (!doneDirFS.exists(doneDirPath)) {
> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
> >           LOG.info("Creating intermediate history logDir: ["
> >               + doneDirPath
> >               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
> >           mkdir(
> >               doneDirFS,
> >               doneDirPath,
> >               new FsPermission(
> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
> >                       .toShort()));
> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
> >           // respects sticky
> >         } else {
> >           String message = "Not creating intermediate history logDir: ["
> >               + doneDirPath
> >               + "] based on conf: "
> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
> >               + ". Either set to true or pre-create this directory with"
> >               + " appropriate permissions";
> >           LOG.error(message);
> >           throw new YarnException(message);
> >         }
> >       }
> >     } catch (IOException e) {
> >       LOG.error("Failed checking for the existance of history intermediate "
> >           + "done directory: [" + doneDirPath + "]");
> >       throw new YarnException(e);
> >     }
> >
> >
> > In any case, this does not appear to be the right behavior as it does not
> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
> like a
> > bug?
> >
> >
> > Thanks, Prashant
> >
> >
> >
> >
> >
> >
> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
> prash1784@gmail.com>
> > wrote:
> >>
> >> Hi Chris,
> >>
> >> This is while running a MR job. Please note the job is able to write
> files
> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
> some
> >> more, it looks like the failure occurs after writing to
> >> "/mapred/history/done_intermediate".
> >>
> >> Here is a more detailed stacktrace.
> >>
> >> INFO: Job end notification started for jobID : job_1371593763906_0001
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> >> closeEventWriter
> >> INFO: Unable to write out JobSummaryInfo to
> >>
> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe the property name should be "dfs.permissions"
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> Harsh J
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks Chuan for the details. We are planning to go route 2 (pre-creating
and opening up perms on the dir) as a workaround, though I feel it's not a
good place to be if one is having to do that. There are a few different
directories that are needed to be manually created with the relaxed perms
to get around this (history, staging, tmp). Doing it outside of the regular
deployment process is not the best idea.



On Thu, Jun 20, 2013 at 12:54 AM, Chuan Liu <ch...@microsoft.com> wrote:

>  Hi Prashant,****
>
> ** **
>
> We also hit this issue before.****
>
> 1) We want run a Hadoop cluster with permission disabled.****
>
> 2) With Job history server, yarn, hdfs daemons run under a special service
> user account, e.g. ‘hadoop’****
>
> 3) Users submit jobs to the cluster under their own account.****
>
> ** **
>
> For the above scenario, submitting jobs fails in Hadoop 2.0 while succeeds
> in Hadoop 1.0.****
>
> ** **
>
> In our investigation, the regression happened in jobclient and job history
> server, not on hdfs side.****
>
> The root cause is that jobclient will copy jar files to the staging area
> configed by “yarn.app.mapreduce.am.staging-dir”.****
>
> The client will also set the permission on the directory and jar files to
> some pre-configured value, i.e. JobSubmissionFilesJOB_DIR_PERMISSION and
> JobSubmissionFilesJOB_FILE_PERMISSION.****
>
> On HDFS side, even if ‘permissoin.enabled’ is set to false, changing
> permissions are not allowed.****
>
> (This is the same in both Hadoop v1 and v2.)****
>
> JobHistoryServer also plays a part in this as its staging directory
> happens to be at the same locations as “yarn.app.mapreduce am.staging-dir”.
> ****
>
> It will create directories recursively with permissions set to
> HISTORY_STAGING_DIR_PERMISSIONS.****
>
> JobHistoryServer runs under the special service user account while
> JobClient is under the user who submitting jobs.****
>
> This lead to a failure in setPermission() during job submission.****
>
> ** **
>
> There are multiple possible mitigations possible. Here are two examples.**
> **
>
> 1) config all users submitting jobs to supergroup.****
>
> 2) during setup, pre-create the staging directory and chown to correct
> user.****
>
> ** **
>
> In our case, we took approach 1) because the security check on HDFS was
> not very important for our scenarios (part of the reason why we can disable
> HDFS permission in the first place).****
>
> ** **
>
> Hope this can help you solve your problem!****
>
> ** **
>
> ** **
>
> -Chuan****
>
> ** **
>
> ** **
>
> *From:* Prashant Kommireddi [mailto:prash1784@gmail.com]
> *Sent:* Wednesday, June 19, 2013 1:32 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: DFS Permissions on Hadoop 2.x****
>
> ** **
>
> How can we resolve the issue in the case I have mentioned? File a MR Jira
> that does not try to check permissions when dfs.permissions.enabled is set
> to false? ****
>
> ** **
>
> The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense
> w.r.t HDFS behavior (thanks for that). But I am still unsure how we can get
> around the fact that certain permissions are set on shared directories by a
> certain user that disallow any other users from using them. Or am I missing
> something entirely?****
>
> ** **
>
> On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>
> wrote:****
>
>  Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation.  However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.****
>
> ** **
>
> https://issues.apache.org/jira/browse/HDFS-4919****
>
> ** **
>
> Chris Nauroth****
>
> Hortonworks****
>
> http://hortonworks.com/****
>
> ** **
>
> ** **
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:****
>
>  Thanks guys, I will follow the discussion there.****
>
> ** **
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:***
> *
>
>  Yes, and I think this was lead by Snapshot.****
>
> I've file a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918****
>
> ** **
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:****
>
> This is a HDFS bug. Like all other methods that check for permissions
> being enabled, the client call of setPermission should check it as
> well. It does not do that currently and I believe it should be a NOP
> in such a case. Please do file a JIRA (and reference the ID here to
> close the loop)!****
>
>
> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
> <pr...@gmail.com> wrote:
> > Looks like the jobs fail only on the first attempt and pass thereafter.
> > Failure occurs while setting perms on "intermediate done directory".
> Here is
> > what I think is happening:
> >
> > 1. Intermediate done dir is (ideally) created as part of deployment (for
> eg,
> > /mapred/history/done_intermediate)
> >
> > 2. When a MR job is run, it creates a user dir within intermediate done
> dir
> > (/mapred/history/done_intermediate/username)
> >
> > 3. After this dir is created, the code tries to set permissions on this
> user
> > dir. In doing so, it checks for EXECUTE permissions on not just its
> parent
> > (/mapred/history/done_intermediate) but across all dirs to the top-most
> > level (/mapred). This fails as "/mapred" does not have execute
> permissions
> > for the "Other" users.
> >
> > 4. On successive job runs, since the user dir already exists
> > (/mapred/history/done_intermediate/username) it no longer tries to create
> > and set permissions again. And the job completes without any perm errors.
> >
> > This is the code within JobHistoryEventHandler that's doing it.
> >
> >  //Check for the existence of intermediate done dir.
> >     Path doneDirPath = null;
> >     try {
> >       doneDirPath = FileSystem.get(conf).makeQualified(new
> > Path(doneDirStr));
> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
> >       // This directory will be in a common location, or this may be a
> > cluster
> >       // meant for a single user. Creating based on the conf. Should
> ideally
> > be
> >       // created by the JobHistoryServer or as part of deployment.
> >       if (!doneDirFS.exists(doneDirPath)) {
> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
> >         LOG.info("Creating intermediate history logDir: ["
> >             + doneDirPath
> >             + "] + based on conf. Should ideally be created by the
> > JobHistoryServer: "
> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
> >           mkdir(
> >               doneDirFS,
> >               doneDirPath,
> >               new FsPermission(
> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
> >                 .toShort()));
> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
> >           // respects
> >         // sticky
> >       } else {
> >           String message = "Not creating intermediate history logDir: ["
> >                 + doneDirPath
> >                 + "] based on conf: "
> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
> >                 + ". Either set to true or pre-create this directory
> with" +
> >                 " appropriate permissions";
> >         LOG.error(message);
> >         throw new YarnException(message);
> >       }
> >       }
> >     } catch (IOException e) {
> >       LOG.error("Failed checking for the existance of history
> intermediate "
> > +
> >                       "done directory: [" + doneDirPath + "]");
> >       throw new YarnException(e);
> >     }
> >
> >
> > In any case, this does not appear to be the right behavior as it does not
> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
> like a
> > bug?
> >
> >
> > Thanks, Prashant
> >
> >
> >
> >
> >
> >
> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
> prash1784@gmail.com>
> > wrote:
> >>
> >> Hi Chris,
> >>
> >> This is while running a MR job. Please note the job is able to write
> files
> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
> some
> >> more, it looks like the failure occurs after writing to
> >> "/mapred/history/done_intermediate".
> >>
> >> Here is a more detailed stacktrace.
> >>
> >> INFO: Job end notification started for jobID : job_1371593763906_0001
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> >> closeEventWriter
> >> INFO: Unable to write out JobSummaryInfo to
> >>
> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe the property name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
> --
> Harsh J
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks Chuan for the details. We are planning to go with route 2 (pre-creating
the dirs and opening up their perms) as a workaround, though I feel it's not a
good place to be if one has to do that. There are a few different directories
that need to be created manually with relaxed perms to get around this
(history, staging, tmp). Doing it outside of the regular deployment process is
not the best idea.
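
For anyone else going this route, below is a minimal sketch of what the pre-creation
step could look like with the Hadoop FileSystem API, run once as the HDFS superuser
during deployment. The directory paths, the wide-open 0777 permissions, and the
"mapred"/"hadoop" owner and group are assumptions for illustration only; they need to
match the cluster's actual history, staging and tmp settings (for example the value of
yarn.app.mapreduce.am.staging-dir).

// Sketch only: a one-time setup step run as the HDFS superuser, e.g. from a deployment script.
// Paths, permissions and the "mapred"/"hadoop" owner/group are assumptions, not the
// framework's own defaults; adjust them to your mapred-site.xml / yarn-site.xml values.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateSharedDirs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Shared directories the MR framework traverses and writes into (illustrative paths).
    String[] dirs = {
        "/mapred/history/done_intermediate", // job history intermediate done dir
        "/tmp/hadoop-yarn/staging",          // assumed yarn.app.mapreduce.am.staging-dir
        "/tmp"                               // general scratch space
    };

    FsPermission open = new FsPermission((short) 0777); // relax only as far as acceptable

    for (String dir : dirs) {
      Path path = new Path(dir);
      if (!fs.exists(path)) {
        fs.mkdirs(path);                     // create the full path if it is missing
      }
      fs.setPermission(path, open);          // let other users create/traverse subdirs
      fs.setOwner(path, "mapred", "hadoop"); // hand ownership to the service account
    }
    fs.close();
  }
}

The same effect can be had from the shell (hdfs dfs -mkdir -p / -chmod / -chown) inside
whatever tooling already provisions the cluster; the point is simply that these
directories exist with workable ownership before the first job runs.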



On Thu, Jun 20, 2013 at 12:54 AM, Chuan Liu <ch...@microsoft.com> wrote:

>  Hi Prashant,
>
> We also hit this issue before.
>
> 1) We want to run a Hadoop cluster with permissions disabled.
>
> 2) The job history server, YARN, and HDFS daemons run under a special
> service user account, e.g. ‘hadoop’.
>
> 3) Users submit jobs to the cluster under their own accounts.
>
> For the above scenario, submitting jobs fails in Hadoop 2.0 while it
> succeeds in Hadoop 1.0.
>
> In our investigation, the regression happened in the job client and job
> history server, not on the HDFS side.
>
> The root cause is that the job client copies jar files to the staging area
> configured by “yarn.app.mapreduce.am.staging-dir”.
>
> The client will also set the permissions on the directory and jar files to
> some pre-configured value, i.e. JobSubmissionFiles.JOB_DIR_PERMISSION and
> JobSubmissionFiles.JOB_FILE_PERMISSION.
>
> On the HDFS side, even if “dfs.permissions.enabled” is set to false,
> changing permissions is not allowed.
>
> (This is the same in both Hadoop v1 and v2.)
>
> JobHistoryServer also plays a part in this, as its staging directory
> happens to be at the same location as “yarn.app.mapreduce.am.staging-dir”.
>
> It will create directories recursively with permissions set to
> HISTORY_STAGING_DIR_PERMISSIONS.
>
> JobHistoryServer runs under the special service user account, while
> JobClient runs as the user submitting the job.
>
> This leads to a failure in setPermission() during job submission.
>
> There are multiple possible mitigations. Here are two examples.
>
> 1) Configure all users submitting jobs to be in the supergroup.
>
> 2) During setup, pre-create the staging directory and chown it to the
> correct user.
>
> In our case, we took approach 1) because the security check on HDFS was
> not very important for our scenarios (part of the reason why we could
> disable HDFS permissions in the first place).
>
> Hope this can help you solve your problem!
>
> -Chuan
>
>
> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> Sent: Wednesday, June 19, 2013 1:32 PM
> To: user@hadoop.apache.org
> Subject: Re: DFS Permissions on Hadoop 2.x
>
> How can we resolve the issue in the case I have mentioned? File an MR JIRA
> so that MR does not try to check permissions when dfs.permissions.enabled
> is set to false?
>
> The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense
> w.r.t. HDFS behavior (thanks for that). But I am still unsure how we can get
> around the fact that certain permissions are set on shared directories by a
> certain user that disallow any other users from using them. Or am I missing
> something entirely?
>
> On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>
> wrote:
>
> Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation. However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>
> Thanks guys, I will follow the discussion there.
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>
> Yes, and I think this was caused by Snapshot.
>
> I've filed a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
> This is an HDFS bug. Like all other methods that check for permissions
> being enabled, the client call of setPermission should check it as
> well. It does not do that currently, and I believe it should be a NOP
> in such a case. Please do file a JIRA (and reference the ID here to
> close the loop)!
>
>
> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
> <pr...@gmail.com> wrote:
> > Looks like the jobs fail only on the first attempt and pass thereafter.
> > Failure occurs while setting perms on "intermediate done directory".
> Here is
> > what I think is happening:
> >
> > 1. Intermediate done dir is (ideally) created as part of deployment (for
> eg,
> > /mapred/history/done_intermediate)
> >
> > 2. When a MR job is run, it creates a user dir within intermediate done
> dir
> > (/mapred/history/done_intermediate/username)
> >
> > 3. After this dir is created, the code tries to set permissions on this
> user
> > dir. In doing so, it checks for EXECUTE permissions on not just its
> parent
> > (/mapred/history/done_intermediate) but across all dirs to the top-most
> > level (/mapred). This fails as "/mapred" does not have execute
> permissions
> > for the "Other" users.
> >
> > 4. On successive job runs, since the user dir already exists
> > (/mapred/history/done_intermediate/username) it no longer tries to create
> > and set permissions again. And the job completes without any perm errors.
> >
> > This is the code within JobHistoryEventHandler that's doing it.
> >
> >     //Check for the existence of intermediate done dir.
> >     Path doneDirPath = null;
> >     try {
> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
> >       // This directory will be in a common location, or this may be a cluster
> >       // meant for a single user. Creating based on the conf. Should ideally be
> >       // created by the JobHistoryServer or as part of deployment.
> >       if (!doneDirFS.exists(doneDirPath)) {
> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
> >           LOG.info("Creating intermediate history logDir: ["
> >               + doneDirPath
> >               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
> >           mkdir(
> >               doneDirFS,
> >               doneDirPath,
> >               new FsPermission(
> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
> >                       .toShort()));
> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
> >           // respects sticky
> >         } else {
> >           String message = "Not creating intermediate history logDir: ["
> >               + doneDirPath
> >               + "] based on conf: "
> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
> >               + ". Either set to true or pre-create this directory with"
> >               + " appropriate permissions";
> >           LOG.error(message);
> >           throw new YarnException(message);
> >         }
> >       }
> >     } catch (IOException e) {
> >       LOG.error("Failed checking for the existance of history intermediate "
> >           + "done directory: [" + doneDirPath + "]");
> >       throw new YarnException(e);
> >     }
> >
> >
> > In any case, this does not appear to be the right behavior as it does not
> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
> like a
> > bug?
> >
> >
> > Thanks, Prashant
> >
> >
> >
> >
> >
> >
> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
> prash1784@gmail.com>
> > wrote:
> >>
> >> Hi Chris,
> >>
> >> This is while running a MR job. Please note the job is able to write
> files
> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
> some
> >> more, it looks like the failure occurs after writing to
> >> "/mapred/history/done_intermediate".
> >>
> >> Here is a more detailed stacktrace.
> >>
> >> INFO: Job end notification started for jobID : job_1371593763906_0001
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> >> closeEventWriter
> >> INFO: Unable to write out JobSummaryInfo to
> >>
> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe the property name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
> --
> Harsh J
>

> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe the property name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> Harsh J
>

RE: DFS Permissions on Hadoop 2.x

Posted by Chuan Liu <ch...@microsoft.com>.
Hi Prashant,

We also hit this issue before.
1) We want to run a Hadoop cluster with permissions disabled.
2) The job history server, yarn, and hdfs daemons run under a special service user account, e.g. 'hadoop'.
3) Users submit jobs to the cluster under their own accounts.

In this scenario, submitting jobs fails in Hadoop 2.0 while it succeeds in Hadoop 1.0.

In our investigation, the regression happened in the job client and the job history server, not on the hdfs side.
The root cause is that the job client copies jar files to the staging area configured by "yarn.app.mapreduce.am.staging-dir".
The client will also set the permissions on the directory and jar files to some pre-configured values, i.e. JobSubmissionFiles.JOB_DIR_PERMISSION and JobSubmissionFiles.JOB_FILE_PERMISSION.
On the HDFS side, even if 'dfs.permissions.enabled' is set to false, changing permissions on a path owned by another user is not allowed.
(This is the same in both Hadoop v1 and v2.)
JobHistoryServer also plays a part in this, as its staging directory happens to be at the same location as "yarn.app.mapreduce.am.staging-dir".
It will create directories recursively with permissions set to HISTORY_STAGING_DIR_PERMISSIONS.
JobHistoryServer runs under the special service user account, while the JobClient runs as the user submitting the job.
This leads to a failure in setPermission() during job submission.
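To make the ownership conflict concrete, here is a minimal, self-contained sketch of the failing pattern (not the actual MR/JobHistoryServer code; the namenode URI, paths, and the 'hadoop'/'smehta' user names are placeholders):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetPermissionConflictSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    URI nn = URI.create("hdfs://namenode:8020");                    // placeholder namenode URI
    Path shared = new Path("/mapred/history/done_intermediate");    // placeholder shared dir

    // The service account ('hadoop') creates and therefore owns the shared directory.
    FileSystem asService = FileSystem.get(nn, conf, "hadoop");
    asService.mkdirs(shared);

    // A job submitted by a different user ('smehta') later calls setPermission()
    // on that tree. The NameNode still enforces the owner/traverse checks for
    // setPermission even with dfs.permissions.enabled=false, so this throws the
    // AccessControlException seen in the stack traces below.
    FileSystem asUser = FileSystem.get(nn, conf, "smehta");
    asUser.setPermission(shared, new FsPermission((short) 0777));
  }
}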

There are multiple possible mitigations. Here are two examples.
1) add all users submitting jobs to the supergroup.
2) during setup, pre-create the staging directory and chown it to the correct user (a sketch of this follows below).

In our case, we took approach 1) because the security check on HDFS was not very important for our scenarios (part of the reason why we could disable HDFS permissions in the first place).
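In case it helps, here is a rough sketch of mitigation 2) using the FileSystem API, run once during deployment as the HDFS superuser. The paths and the 'hadoop' owner/group are placeholders; adjust them to your own "yarn.app.mapreduce.am.staging-dir" and history settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateHistoryDirsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);   // run this once as the HDFS superuser

    // Pre-create the shared intermediate done dir so no single job ever has
    // to create or chmod it.
    Path intermediateDone = new Path("/mapred/history/done_intermediate");  // placeholder path
    fs.mkdirs(intermediateDone);
    // 1777 (world-writable + sticky) is roughly what the JobHistoryServer
    // would set up for this directory; adjust to your own policy.
    fs.setPermission(intermediateDone, new FsPermission((short) 01777));
    fs.setOwner(intermediateDone, "hadoop", "hadoop");                      // placeholder owner/group

    // Parent directories need the execute bit for other users so the
    // per-user subdirectories underneath can be traversed.
    fs.setPermission(new Path("/mapred"), new FsPermission((short) 0771));
    fs.setPermission(new Path("/mapred/history"), new FsPermission((short) 0771));
  }
}

The same can of course be done with hdfs shell commands; the point is simply that these directories should exist, with traversable parents and the right owner, before the first job runs.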

Hope this can help you solve your problem!


-Chuan


From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Wednesday, June 19, 2013 1:32 PM
To: user@hadoop.apache.org
Subject: Re: DFS Permissions on Hadoop 2.x

How can we resolve the issue in the case I have mentioned? File an MR JIRA so that MR does not try to check permissions when dfs.permissions.enabled is set to false?

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense w.r.t HDFS behavior (thanks for that). But I am still unsure how we can get around the fact that certain permissions are set on shared directories by a certain user that disallow any other users from using them. Or am I missing something entirely?

On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>> wrote:
Just in case anyone who didn't look at HDFS-4918 is curious: we established that this is actually expected behavior, and it's mentioned in the documentation.  However, I filed HDFS-4919 to make the information clearer in the documentation, since this caused some confusion.

https://issues.apache.org/jira/browse/HDFS-4919

Chris Nauroth
Hortonworks
http://hortonworks.com/


On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>> wrote:
Thanks guys, I will follow the discussion there.

On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com>> wrote:
Yes, and I think this was caused by Snapshot.
I've filed a JIRA here:
https://issues.apache.org/jira/browse/HDFS-4918

On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com>> wrote:
This is an HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!
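
To illustrate, the no-op I have in mind would look roughly like the sketch below (illustrative only -- not the current Hadoop source):

// Illustrative sketch only -- not the real FSNamesystem/DFSClient code.
void setPermission(String src, FsPermission permission) throws IOException {
  if (!isPermissionEnabled) {
    // Suggestion: when dfs.permissions.enabled is false, skip the
    // permission/owner checks and make the call a no-op instead of throwing.
    return;
  }
  // ... existing owner check and permission update would go here ...
}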

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com>> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
>
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,
> /mapred/history/done_intermediate)
>
> 2. When a MR job is run, it creates a user dir within intermediate done dir
> (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permissions not just on its parent
> (/mapred/history/done_intermediate) but on all dirs up to the top-most
> level (/mapred). This fails as "/mapred" does not have execute permissions
> for "Other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username) it no longer tries to create
> and set permissions again. And the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>  //Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a cluster
>       // meant for a single user. Creating based on the conf. Should ideally be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>           LOG.info("Creating intermediate history logDir: ["
>               + doneDirPath
>               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                       .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects sticky
>         } else {
>           String message = "Not creating intermediate history logDir: ["
>               + doneDirPath
>               + "] based on conf: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>               + ". Either set to true or pre-create this directory with" +
>               " appropriate permissions";
>           LOG.error(message);
>           throw new YarnException(message);
>         }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate " +
>           "done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      ... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com>> wrote:
>>>>
>>>> I believe the property name should be "dfs.permissions"
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>


--
Harsh J





>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com>> wrote:
>>>>
>>>> I believe, the properties name should be "dfs.permissions"
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com<ma...@gmail.com>]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org<ma...@hadoop.apache.org>
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059<http://10.0.53.131:24059>: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>


--
Harsh J





RE: DFS Permissions on Hadoop 2.x

Posted by Chuan Liu <ch...@microsoft.com>.
Hi Prashant,

We also hit this issue before.
1) We want to run a Hadoop cluster with permissions disabled.
2) The job history server, YARN, and HDFS daemons run under a special service user account, e.g. 'hadoop'.
3) Users submit jobs to the cluster under their own accounts.

For the above scenario, submitting jobs fails in Hadoop 2.0 but succeeds in Hadoop 1.0.

In our investigation, the regression happened in the job client and the job history server, not on the HDFS side.
The root cause is that the job client copies jar files to the staging area configured by "yarn.app.mapreduce.am.staging-dir".
The client also sets the permissions on the directory and jar files to pre-configured values, i.e. JobSubmissionFiles.JOB_DIR_PERMISSION and JobSubmissionFiles.JOB_FILE_PERMISSION.
On the HDFS side, even if 'dfs.permissions.enabled' is set to false, changing permissions is not allowed.
(This is the same in both Hadoop v1 and v2.)
JobHistoryServer also plays a part in this, as its staging directory happens to be at the same location as "yarn.app.mapreduce.am.staging-dir".
It creates directories recursively with permissions set to HISTORY_STAGING_DIR_PERMISSIONS.
JobHistoryServer runs under the special service user account, while the job client runs as the user submitting the job.
This leads to a failure in setPermission() during job submission.

There are multiple mitigations possible. Here are two examples.
1) Configure all users submitting jobs to be members of the supergroup.
2) During setup, pre-create the staging directory and chown it to the correct user.

In our case, we took approach 1) because the security check on HDFS was not very important for our scenarios (part of the reason why we could disable HDFS permissions in the first place).
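
If you need approach 2) instead, a minimal sketch of pre-creating and chowning the intermediate done directory with the Hadoop FileSystem API could look like the snippet below. The path, user, and group names are only illustrative assumptions for a typical layout, not values taken from this thread, and the program must be run as the HDFS superuser for setOwner() to succeed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Illustrative location; match it to yarn.app.mapreduce.am.staging-dir
    // and the history server settings in your deployment.
    Path doneIntermediate = new Path("/mapred/history/done_intermediate");

    // Create the directory tree up front so the MR app master never has to.
    fs.mkdirs(doneIntermediate, new FsPermission((short) 01777));

    // mkdirs() applies the umask, so set the final mode (sticky bit plus
    // rwx for all) explicitly afterwards.
    fs.setPermission(doneIntermediate, new FsPermission((short) 01777));

    // Hand ownership to the service account that runs the JobHistoryServer.
    fs.setOwner(doneIntermediate, "hadoop", "supergroup");

    fs.close();
  }
}

The same pre-creation can of course be done from the command line during deployment; the point is simply that the directory exists with relaxed permissions and the right owner before the first job runs.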

Hope this can help you solve your problem!


-Chuan


From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Wednesday, June 19, 2013 1:32 PM
To: user@hadoop.apache.org
Subject: Re: DFS Permissions on Hadoop 2.x

How can we resolve the issue in the case I have mentioned? File an MR JIRA so that it does not try to check permissions when dfs.permissions.enabled is set to false?

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense w.r.t. HDFS behavior (thanks for that). But I am still unsure how we can get around the fact that permissions set on shared directories by one user disallow any other users from using them. Or am I missing something entirely?

On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>> wrote:
Just in case anyone who didn't look at HDFS-4918 is curious, we established that this is actually expected behavior, and it's mentioned in the documentation.  However, I filed HDFS-4919 to make the information clearer in the documentation, since this caused some confusion.

https://issues.apache.org/jira/browse/HDFS-4919

Chris Nauroth
Hortonworks
http://hortonworks.com/


On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>> wrote:
Thanks guys, I will follow the discussion there.

On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com>> wrote:
Yes, and I think this was caused by the Snapshot feature.
I've filed a JIRA here:
https://issues.apache.org/jira/browse/HDFS-4918

On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com>> wrote:
This is an HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com>> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
>
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,
> /mapred/history/done_intermediate)
>
> 2. When a MR job is run, it creates a user dir within intermediate done dir
> (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permissions on not just its parent
> (/mapred/history/done_intermediate) but across all dirs to the top-most
> level (/mapred). This fails as "/mapred" does not have execute permissions
> for the "Other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username) it no longer tries to create
> and set permissions again. And the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>  //Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new
> Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a
> cluster
>       // meant for a single user. Creating based on the conf. Should ideally
> be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>         LOG.info("Creating intermediate history logDir: ["
>             + doneDirPath
>             + "] + based on conf. Should ideally be created by the
> JobHistoryServer: "
>             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                 .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects
>         // sticky
>       } else {
>           String message = "Not creating intermediate history logDir: ["
>                 + doneDirPath
>                 + "] based on conf: "
>                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>                 + ". Either set to true or pre-create this directory with" +
>                 " appropriate permissions";
>         LOG.error(message);
>         throw new YarnException(message);
>       }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
> +
>                       "done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      ... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com>> wrote:
>>>>
>>>> I believe, the properties name should be "dfs.permissions"
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com<ma...@gmail.com>]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org<ma...@hadoop.apache.org>
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059<http://10.0.53.131:24059>: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>


--
Harsh J





RE: DFS Permissions on Hadoop 2.x

Posted by Chuan Liu <ch...@microsoft.com>.
Hi Prashant,

We also hit this issue before.
1) We want run a Hadoop cluster with permission disabled.
2) With Job history server, yarn, hdfs daemons run under a special service user account, e.g. 'hadoop'
3) Users submit jobs to the cluster under their own account.

For the above scenario, submitting jobs fails in Hadoop 2.0 while succeeds in Hadoop 1.0.

In our investigation, the regression happened in jobclient and job history server, not on hdfs side.
The root cause is that jobclient will copy jar files to the staging area configed by "yarn.app.mapreduce.am.staging-dir".
The client will also set the permission on the directory and jar files to some pre-configured value, i.e. JobSubmissionFilesJOB_DIR_PERMISSION and JobSubmissionFilesJOB_FILE_PERMISSION.
On HDFS side, even if 'permissoin.enabled' is set to false, changing permissions are not allowed.
(This is the same in both Hadoop v1 and v2.)
JobHistoryServer also plays a part in this as its staging directory happens to be at the same locations as "yarn.app.mapreduce am.staging-dir".
It will create directories recursively with permissions set to HISTORY_STAGING_DIR_PERMISSIONS.
JobHistoryServer runs under the special service user account while JobClient is under the user who submitting jobs.
This lead to a failure in setPermission() during job submission.

There are multiple possible mitigations possible. Here are two examples.
1) config all users submitting jobs to supergroup.
2) during setup, pre-create the staging directory and chown to correct user.

In our case, we took approach 1) because the security check on HDFS was not very important for our scenarios (part of the reason why we can disable HDFS permission in the first place).

Hope this can help you solve your problem!


-Chuan


From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Wednesday, June 19, 2013 1:32 PM
To: user@hadoop.apache.org
Subject: Re: DFS Permissions on Hadoop 2.x

How can we resolve the issue in the case I have mentioned? File a MR Jira that does not try to check permissions when dfs.permissions.enabled is set to false?

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense w.r.t HDFS behavior (thanks for that). But I am still unsure how we can get around the fact that certain permissions are set on shared directories by a certain user that disallow any other users from using them. Or am I missing something entirely?

On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>> wrote:
Just in case anyone is curious who didn't look at HDFS-4918, we established that this is actually expected behavior, and it's mentioned in the documentation.  However, I filed HDFS-4919 to make the information clearer in the documentation, since this caused some confusion.

https://issues.apache.org/jira/browse/HDFS-4919

Chris Nauroth
Hortonworks
http://hortonworks.com/


On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <pr...@gmail.com>> wrote:
Thanks guys, I will follow the discussion there.

On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com>> wrote:
Yes, and I think this was lead by Snapshot.
I've file a JIRA here:
https://issues.apache.org/jira/browse/HDFS-4918

On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com>> wrote:
This is a HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com>> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
>
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,
> /mapred/history/done_intermediate)
>
> 2. When a MR job is run, it creates a user dir within intermediate done dir
> (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permissions on not just its parent
> (/mapred/history/done_intermediate) but across all dirs to the top-most
> level (/mapred). This fails as "/mapred" does not have execute permissions
> for the "Other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username) it no longer tries to create
> and set permissions again. And the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>  //Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new
> Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a
> cluster
>       // meant for a single user. Creating based on the conf. Should ideally
> be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>         LOG.info("Creating intermediate history logDir: ["
>             + doneDirPath
>             + "] + based on conf. Should ideally be created by the
> JobHistoryServer: "
>             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                 .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects
>         // sticky
>       } else {
>           String message = "Not creating intermediate history logDir: ["
>                 + doneDirPath
>                 + "] based on conf: "
>                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>                 + ". Either set to true or pre-create this directory with" +
>                 " appropriate permissions";
>         LOG.error(message);
>         throw new YarnException(message);
>       }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
> +
>                       "done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>      at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>      at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>      at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>      at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>      at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>      at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>      ... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>      at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>      at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>      at java.security.AccessController.doPrivileged(Native Method)
>>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>      at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>      at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>      at $Proxy9.setPermission(Unknown Source)
>>      at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>      at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>      at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>      at java.lang.reflect.Method.invoke(Method.java:597)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>      at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>      at $Proxy10.setPermission(Unknown Source)
>>      at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>      ... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com>> wrote:
>>>>
>>>> I believe, the properties name should be "dfs.permissions"
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>


--
Harsh J





Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
How can we resolve the issue in the case I have mentioned? Should we file
an MR JIRA so that MapReduce does not try to set or check permissions when
dfs.permissions.enabled is set to false?
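
Purely as an illustration of what such a change might look like (this is
not actual Hadoop code, and it assumes the AM's Configuration carries the
HDFS setting dfs.permissions.enabled), the guard could be sketched as:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class PermissionAwareMkdir {
      // Sketch only: create a directory and chmod it only when the cluster
      // actually enforces permissions (dfs.permissions.enabled=true).
      static void mkdirWithOptionalPerms(FileSystem fs, Path dir,
          FsPermission perms, Configuration conf) throws IOException {
        fs.mkdirs(dir);
        if (conf.getBoolean("dfs.permissions.enabled", true)) {
          fs.setPermission(dir, perms);
        }
      }
    }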

The explanation that Tsz Wo (Nicholas) gave in the JIRA makes sense w.r.t.
HDFS behavior (thanks for that). But I am still unsure how we can get around
the fact that permissions set on shared directories by one user can prevent
all other users from using them. Or am I missing something entirely?
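
For reference, a minimal sketch of the pre-creation workaround that the
error message itself suggests, assuming the default
/mapred/history/done_intermediate layout (paths and modes will vary by
deployment):

    # Sketch only: pre-create the shared job-history directories and make
    # the ancestors traversable so other users do not fail the EXECUTE check.
    hdfs dfs -mkdir -p /mapred/history/done_intermediate
    hdfs dfs -chmod 755 /mapred /mapred/history
    hdfs dfs -chmod 1777 /mapred/history/done_intermediate

With the ancestors world-executable, the per-user subdirectories created on
a job's first run no longer trip the traverse check described further down
in this thread.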


On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>wrote:

> Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation.  However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <prash1784@gmail.com
> > wrote:
>
>> Thanks guys, I will follow the discussion there.
>>
>>
>> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> Yes, and I think this was caused by Snapshot.
>>>
>>> I've filed a JIRA here:
>>> https://issues.apache.org/jira/browse/HDFS-4918
>>>
>>>
>>>
>>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>>>
>>>> This is an HDFS bug. Like all other methods that check for permissions
>>>> being enabled, the client call of setPermission should check it as
>>>> well. It does not do that currently and I believe it should be a NOP
>>>> in such a case. Please do file a JIRA (and reference the ID here to
>>>> close the loop)!
>>>>
>>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>>> <pr...@gmail.com> wrote:
>>>> > Looks like the jobs fail only on the first attempt and pass
>>>> thereafter.
>>>> > Failure occurs while setting perms on "intermediate done directory".
>>>> Here is
>>>> > what I think is happening:
>>>> >
>>>> > 1. Intermediate done dir is (ideally) created as part of deployment
>>>> (for eg,
>>>> > /mapred/history/done_intermediate)
>>>> >
>>>> > 2. When a MR job is run, it creates a user dir within intermediate
>>>> done dir
>>>> > (/mapred/history/done_intermediate/username)
>>>> >
>>>> > 3. After this dir is created, the code tries to set permissions on
>>>> this user
>>>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>>>> parent
>>>> > (/mapred/history/done_intermediate) but across all dirs to the
>>>> top-most
>>>> > level (/mapred). This fails as "/mapred" does not have execute
>>>> permissions
>>>> > for the "Other" users.
>>>> >
>>>> > 4. On successive job runs, since the user dir already exists
>>>> > (/mapred/history/done_intermediate/username) it no longer tries to
>>>> create
>>>> > and set permissions again. And the job completes without any perm
>>>> errors.
>>>> >
>>>> > This is the code within JobHistoryEventHandler that's doing it.
>>>> >
>>>> >  //Check for the existence of intermediate done dir.
>>>> >     Path doneDirPath = null;
>>>> >     try {
>>>> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>>>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>>> >       // This directory will be in a common location, or this may be a cluster
>>>> >       // meant for a single user. Creating based on the conf. Should ideally be
>>>> >       // created by the JobHistoryServer or as part of deployment.
>>>> >       if (!doneDirFS.exists(doneDirPath)) {
>>>> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>>> >           LOG.info("Creating intermediate history logDir: ["
>>>> >               + doneDirPath
>>>> >               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>>> >           mkdir(
>>>> >               doneDirFS,
>>>> >               doneDirPath,
>>>> >               new FsPermission(
>>>> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>>> >                       .toShort()));
>>>> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
>>>> >           // respects
>>>> >           // sticky
>>>> >         } else {
>>>> >           String message = "Not creating intermediate history logDir: ["
>>>> >               + doneDirPath
>>>> >               + "] based on conf: "
>>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>>> >               + ". Either set to true or pre-create this directory with"
>>>> >               + " appropriate permissions";
>>>> >           LOG.error(message);
>>>> >           throw new YarnException(message);
>>>> >         }
>>>> >       }
>>>> >     } catch (IOException e) {
>>>> >       LOG.error("Failed checking for the existance of history intermediate "
>>>> >           + "done directory: [" + doneDirPath + "]");
>>>> >       throw new YarnException(e);
>>>> >     }
>>>> >
>>>> >
>>>> > In any case, this does not appear to be the right behavior as it does
>>>> not
>>>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>>>> like a
>>>> > bug?
>>>> >
>>>> >
>>>> > Thanks, Prashant
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>>>> prash1784@gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Hi Chris,
>>>> >>
>>>> >> This is while running a MR job. Please note the job is able to write
>>>> files
>>>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging
>>>> in some
>>>> >> more, it looks like the failure occurs after writing to
>>>> >> "/mapred/history/done_intermediate".
>>>> >>
>>>> >> Here is a more detailed stacktrace.
>>>> >>
>>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>>> >> closeEventWriter
>>>> >> INFO: Unable to write out JobSummaryInfo to
>>>> >>
>>>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>>> >> org.apache.hadoop.yarn.YarnException:
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>>>> Permission
>>>> >> denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      ... 2 more
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0
>>>> ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>>> getResources
>>>> >> INFO: Received completed container
>>>> container_1371593763906_0001_01_000003
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>>> >> transition
>>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>>> >> Container killed by the ApplicationMaster.
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>>> cnauroth@hortonworks.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Prashant, can you provide more details about what you're doing when
>>>> you
>>>> >>> see this error?  Are you submitting a MapReduce job, running an
>>>> HDFS shell
>>>> >>> command, or doing some other action?  It's possible that we're also
>>>> seeing
>>>> >>> an interaction with some other change in 2.x that triggers a
>>>> setPermission
>>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the
>>>> code in
>>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>>> >>> wouldn't have seen the problem.
>>>> >>>
>>>> >>> I'd like to gather these details for submitting a new bug report to
>>>> HDFS.
>>>> >>> Thanks!
>>>> >>>
>>>> >>> Chris Nauroth
>>>> >>> Hortonworks
>>>> >>> http://hortonworks.com/
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>> >>>>
>>>> >>>> I believe, the properties name should be “dfs.permissions”
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> >>>> To: user@hadoop.apache.org
>>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Hello,
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> >>>> question around disabling dfs permissions on the latter version.
>>>> For some
>>>> >>>> reason, setting the following config does not seem to work
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> <property>
>>>> >>>>
>>>> >>>>         <name>dfs.permissions.enabled</name>
>>>> >>>>
>>>> >>>>         <value>false</value>
>>>> >>>>
>>>> >>>> </property>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Any other configs that might be needed for this?
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Here is the stacktrace.
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> >>>> 8020, call
>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> >>>> 10.0.53.131:24059: error:
>>>> org.apache.hadoop.security.AccessControlException:
>>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>>> denied:
>>>> >>>> user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>>>
>>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>>>
>>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>>> Method)
>>>> >>>>
>>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
How can we resolve the issue in the case I have mentioned? File a MR Jira
that does not try to check permissions when dfs.permissions.enabled is set
to false?

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense
w.r.t HDFS behavior (thanks for that). But I am still unsure how we can get
around the fact that certain permissions are set on shared directories by a
certain user that disallow any other users from using them. Or am I missing
something entirely?


On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com>wrote:

> Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation.  However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <prash1784@gmail.com
> > wrote:
>
>> Thanks guys, I will follow the discussion there.
>>
>>
>> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> Yes, and I think this was lead by Snapshot.
>>>
>>> I've file a JIRA here:
>>> https://issues.apache.org/jira/browse/HDFS-4918
>>>
>>>
>>>
>>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>>>
>>>> This is a HDFS bug. Like all other methods that check for permissions
>>>> being enabled, the client call of setPermission should check it as
>>>> well. It does not do that currently and I believe it should be a NOP
>>>> in such a case. Please do file a JIRA (and reference the ID here to
>>>> close the loop)!
>>>>
>>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>>> <pr...@gmail.com> wrote:
>>>> > Looks like the jobs fail only on the first attempt and pass
>>>> thereafter.
>>>> > Failure occurs while setting perms on "intermediate done directory".
>>>> Here is
>>>> > what I think is happening:
>>>> >
>>>> > 1. Intermediate done dir is (ideally) created as part of deployment
>>>> (for eg,
>>>> > /mapred/history/done_intermediate)
>>>> >
>>>> > 2. When a MR job is run, it creates a user dir within intermediate
>>>> done dir
>>>> > (/mapred/history/done_intermediate/username)
>>>> >
>>>> > 3. After this dir is created, the code tries to set permissions on
>>>> this user
>>>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>>>> parent
>>>> > (/mapred/history/done_intermediate) but across all dirs to the
>>>> top-most
>>>> > level (/mapred). This fails as "/mapred" does not have execute
>>>> permissions
>>>> > for the "Other" users.
>>>> >
>>>> > 4. On successive job runs, since the user dir already exists
>>>> > (/mapred/history/done_intermediate/username) it no longer tries to
>>>> create
>>>> > and set permissions again. And the job completes without any perm
>>>> errors.
>>>> >
>>>> > This is the code within JobHistoryEventHandler that's doing it.
>>>> >
>>>> >  //Check for the existence of intermediate done dir.
>>>> >     Path doneDirPath = null;
>>>> >     try {
>>>> >       doneDirPath = FileSystem.get(conf).makeQualified(new
>>>> > Path(doneDirStr));
>>>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>>> >       // This directory will be in a common location, or this may be a
>>>> > cluster
>>>> >       // meant for a single user. Creating based on the conf. Should
>>>> ideally
>>>> > be
>>>> >       // created by the JobHistoryServer or as part of deployment.
>>>> >       if (!doneDirFS.exists(doneDirPath)) {
>>>> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>>> >         LOG.info("Creating intermediate history logDir: ["
>>>> >             + doneDirPath
>>>> >             + "] + based on conf. Should ideally be created by the
>>>> > JobHistoryServer: "
>>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>>> >           mkdir(
>>>> >               doneDirFS,
>>>> >               doneDirPath,
>>>> >               new FsPermission(
>>>> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>>> >                 .toShort()));
>>>> >           // TODO Temporary toShort till new
>>>> FsPermission(FsPermissions)
>>>> >           // respects
>>>> >         // sticky
>>>> >       } else {
>>>> >           String message = "Not creating intermediate history logDir:
>>>> ["
>>>> >                 + doneDirPath
>>>> >                 + "] based on conf: "
>>>> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>>> >                 + ". Either set to true or pre-create this directory
>>>> with" +
>>>> >                 " appropriate permissions";
>>>> >         LOG.error(message);
>>>> >         throw new YarnException(message);
>>>> >       }
>>>> >       }
>>>> >     } catch (IOException e) {
>>>> >       LOG.error("Failed checking for the existance of history
>>>> intermediate "
>>>> > +
>>>> >                       "done directory: [" + doneDirPath + "]");
>>>> >       throw new YarnException(e);
>>>> >     }
>>>> >
>>>> >
>>>> > In any case, this does not appear to be the right behavior as it does
>>>> not
>>>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>>>> like a
>>>> > bug?
>>>> >
>>>> >
>>>> > Thanks, Prashant
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>>>> prash1784@gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Hi Chris,
>>>> >>
>>>> >> This is while running a MR job. Please note the job is able to write
>>>> files
>>>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging
>>>> in some
>>>> >> more, it looks like the failure occurs after writing to
>>>> >> "/mapred/history/done_intermediate".
>>>> >>
>>>> >> Here is a more detailed stacktrace.
>>>> >>
>>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>>> >> closeEventWriter
>>>> >> INFO: Unable to write out JobSummaryInfo to
>>>> >>
>>>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>>> >> org.apache.hadoop.yarn.YarnException:
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>>>> Permission
>>>> >> denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      ... 2 more
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0
>>>> ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>>> getResources
>>>> >> INFO: Received completed container
>>>> container_1371593763906_0001_01_000003
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>>> >> transition
>>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>>> >> Container killed by the ApplicationMaster.
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>>> cnauroth@hortonworks.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Prashant, can you provide more details about what you're doing when
>>>> you
>>>> >>> see this error?  Are you submitting a MapReduce job, running an
>>>> HDFS shell
>>>> >>> command, or doing some other action?  It's possible that we're also
>>>> seeing
>>>> >>> an interaction with some other change in 2.x that triggers a
>>>> setPermission
>>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the
>>>> code in
>>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>>> >>> wouldn't have seen the problem.
>>>> >>>
>>>> >>> I'd like to gather these details for submitting a new bug report to
>>>> HDFS.
>>>> >>> Thanks!
>>>> >>>
>>>> >>> Chris Nauroth
>>>> >>> Hortonworks
>>>> >>> http://hortonworks.com/
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>> >>>>
>>>> >>>> I believe the property name should be “dfs.permissions”
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> >>>> To: user@hadoop.apache.org
>>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Hello,
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> >>>> question around disabling dfs permissions on the latter version.
>>>> For some
>>>> >>>> reason, setting the following config does not seem to work
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> <property>
>>>> >>>>
>>>> >>>>         <name>dfs.permissions.enabled</name>
>>>> >>>>
>>>> >>>>         <value>false</value>
>>>> >>>>
>>>> >>>> </property>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Any other configs that might be needed for this?
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Here is the stacktrace.
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> >>>> 8020, call
>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> >>>> 10.0.53.131:24059: error:
>>>> org.apache.hadoop.security.AccessControlException:
>>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>>> denied:
>>>> >>>> user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>>>
>>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>>>
>>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>>> Method)
>>>> >>>>
>>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
How can we resolve the issue in the case I have mentioned? Should we file an
MR JIRA so that MR does not try to set permissions when dfs.permissions.enabled
is set to false?
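
(For illustration only: a minimal sketch of the kind of guard such a JIRA might
propose, assuming the MR side can simply read "dfs.permissions.enabled" from
its own Configuration. The class and method names below are made up; this is
not the actual JobHistoryEventHandler fix.)

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class MaybeSetPermission {
  // Only touch permissions when the cluster actually enforces them;
  // otherwise skip the setPermission call that triggers the traverse check.
  static void setPermissionIfEnforced(FileSystem fs, Path dir, FsPermission perm,
                                      Configuration conf) throws IOException {
    if (conf.getBoolean("dfs.permissions.enabled", true)) {
      fs.setPermission(dir, perm);
    }
  }
}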

The explanation that Tsz Wo (Nicholas) pointed out in the JIRA makes sense
w.r.t. HDFS behavior (thanks for that). But I am still unsure how we can get
around the fact that permissions set on shared directories by one user end up
disallowing all other users from using them. Or am I missing something
entirely?
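
For what it's worth, here is a rough sketch of the pre-creation workaround,
assuming the default /mapred/history/done_intermediate location and that it is
run once by the directory owner (or an HDFS superuser) before any jobs are
submitted; adjust the paths and modes to your deployment:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path intermediateDone = new Path("/mapred/history/done_intermediate");
    fs.mkdirs(intermediateDone);

    // Ancestors only need to be traversable (execute) by everyone, so the
    // per-user EXECUTE check on /mapred and /mapred/history can pass.
    fs.setPermission(new Path("/mapred"), new FsPermission((short) 0755));
    fs.setPermission(new Path("/mapred/history"), new FsPermission((short) 0755));

    // The intermediate done dir itself must be writable by every submitting
    // user; add the sticky bit (1777) as well if your FsPermission version
    // honours it when constructed from a short.
    fs.setPermission(intermediateDone, new FsPermission((short) 0777));
  }
}

With the ancestors traversable and the user dir's parent already in place, the
first-attempt setPermission failure described below should not be hit.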


On Wed, Jun 19, 2013 at 1:01 PM, Chris Nauroth <cn...@hortonworks.com> wrote:

> Just in case anyone is curious who didn't look at HDFS-4918, we
> established that this is actually expected behavior, and it's mentioned in
> the documentation.  However, I filed HDFS-4919 to make the information
> clearer in the documentation, since this caused some confusion.
>
> https://issues.apache.org/jira/browse/HDFS-4919
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi <prash1784@gmail.com
> > wrote:
>
>> Thanks guys, I will follow the discussion there.
>>
>>
>> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>>
>>> Yes, and I think this was caused by the Snapshot feature.
>>>
>>> I've filed a JIRA here:
>>> https://issues.apache.org/jira/browse/HDFS-4918
>>>
>>>
>>>
>>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>>>
>>>> This is an HDFS bug. Like all other methods that check for permissions
>>>> being enabled, the client call of setPermission should check it as
>>>> well. It does not do that currently and I believe it should be a NOP
>>>> in such a case. Please do file a JIRA (and reference the ID here to
>>>> close the loop)!
>>>>
>>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>>> <pr...@gmail.com> wrote:
>>>> > Looks like the jobs fail only on the first attempt and pass
>>>> thereafter.
>>>> > Failure occurs while setting perms on "intermediate done directory".
>>>> Here is
>>>> > what I think is happening:
>>>> >
>>>> > 1. Intermediate done dir is (ideally) created as part of deployment
>>>> (for eg,
>>>> > /mapred/history/done_intermediate)
>>>> >
>>>> > 2. When a MR job is run, it creates a user dir within intermediate
>>>> done dir
>>>> > (/mapred/history/done_intermediate/username)
>>>> >
>>>> > 3. After this dir is created, the code tries to set permissions on
>>>> this user
>>>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>>>> parent
>>>> > (/mapred/history/done_intermediate) but across all dirs to the
>>>> top-most
>>>> > level (/mapred). This fails as "/mapred" does not have execute
>>>> permissions
>>>> > for the "Other" users.
>>>> >
>>>> > 4. On successive job runs, since the user dir already exists
>>>> > (/mapred/history/done_intermediate/username) it no longer tries to
>>>> create
>>>> > and set permissions again. And the job completes without any perm
>>>> errors.
>>>> >
>>>> > This is the code within JobHistoryEventHandler that's doing it.
>>>> >
>>>> >  //Check for the existence of intermediate done dir.
>>>> >     Path doneDirPath = null;
>>>> >     try {
>>>> >       doneDirPath = FileSystem.get(conf).makeQualified(new
>>>> > Path(doneDirStr));
>>>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>>> >       // This directory will be in a common location, or this may be a
>>>> > cluster
>>>> >       // meant for a single user. Creating based on the conf. Should
>>>> ideally
>>>> > be
>>>> >       // created by the JobHistoryServer or as part of deployment.
>>>> >       if (!doneDirFS.exists(doneDirPath)) {
>>>> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>>> >         LOG.info("Creating intermediate history logDir: ["
>>>> >             + doneDirPath
>>>> >             + "] + based on conf. Should ideally be created by the
>>>> > JobHistoryServer: "
>>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>>> >           mkdir(
>>>> >               doneDirFS,
>>>> >               doneDirPath,
>>>> >               new FsPermission(
>>>> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>>> >                 .toShort()));
>>>> >           // TODO Temporary toShort till new
>>>> FsPermission(FsPermissions)
>>>> >           // respects
>>>> >         // sticky
>>>> >       } else {
>>>> >           String message = "Not creating intermediate history logDir:
>>>> ["
>>>> >                 + doneDirPath
>>>> >                 + "] based on conf: "
>>>> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>>> >                 + ". Either set to true or pre-create this directory
>>>> with" +
>>>> >                 " appropriate permissions";
>>>> >         LOG.error(message);
>>>> >         throw new YarnException(message);
>>>> >       }
>>>> >       }
>>>> >     } catch (IOException e) {
>>>> >       LOG.error("Failed checking for the existance of history
>>>> intermediate "
>>>> > +
>>>> >                       "done directory: [" + doneDirPath + "]");
>>>> >       throw new YarnException(e);
>>>> >     }
>>>> >
>>>> >
>>>> > In any case, this does not appear to be the right behavior as it does
>>>> not
>>>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>>>> like a
>>>> > bug?
>>>> >
>>>> >
>>>> > Thanks, Prashant
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>>>> prash1784@gmail.com>
>>>> > wrote:
>>>> >>
>>>> >> Hi Chris,
>>>> >>
>>>> >> This is while running an MR job. Please note the job is able to write
>>>> >> files to the "/mapred" directory but fails on EXECUTE permissions. On
>>>> >> digging in some more, it looks like the failure occurs after writing to
>>>> >> "/mapred/history/done_intermediate".
>>>> >>
>>>> >> Here is a more detailed stacktrace.
>>>> >>
>>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>>> >> closeEventWriter
>>>> >> INFO: Unable to write out JobSummaryInfo to
>>>> >>
>>>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>>> >> org.apache.hadoop.yarn.YarnException:
>>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> >> user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>>> >>      at java.lang.Thread.run(Thread.java:662)
>>>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>>>> Permission
>>>> >> denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>>> >>      at
>>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>>> >>      ... 2 more
>>>> >> Caused by:
>>>> >>
>>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>>> >> Permission denied: user=smehta, access=EXECUTE,
>>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>
>>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>>> >>      at $Proxy9.setPermission(Unknown Source)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> >>      at
>>>> >>
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>> >>      at
>>>> >>
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>>> >>      at
>>>> >>
>>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>>> >>      at $Proxy10.setPermission(Unknown Source)
>>>> >>      at
>>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0
>>>> ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>>> getResources
>>>> >> INFO: Received completed container
>>>> container_1371593763906_0001_01_000003
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>>> >> transition
>>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>>> >> Container killed by the ApplicationMaster.
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>>> cnauroth@hortonworks.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Prashant, can you provide more details about what you're doing when
>>>> you
>>>> >>> see this error?  Are you submitting a MapReduce job, running an
>>>> HDFS shell
>>>> >>> command, or doing some other action?  It's possible that we're also
>>>> seeing
>>>> >>> an interaction with some other change in 2.x that triggers a
>>>> setPermission
>>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the
>>>> code in
>>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>>> >>> wouldn't have seen the problem.
>>>> >>>
>>>> >>> I'd like to gather these details for submitting a new bug report to
>>>> HDFS.
>>>> >>> Thanks!
>>>> >>>
>>>> >>> Chris Nauroth
>>>> >>> Hortonworks
>>>> >>> http://hortonworks.com/
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>> >>>>
>>>> >>>> I believe the property name should be “dfs.permissions”
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> >>>> To: user@hadoop.apache.org
>>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Hello,
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> >>>> question around disabling dfs permissions on the latter version.
>>>> For some
>>>> >>>> reason, setting the following config does not seem to work
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> <property>
>>>> >>>>
>>>> >>>>         <name>dfs.permissions.enabled</name>
>>>> >>>>
>>>> >>>>         <value>false</value>
>>>> >>>>
>>>> >>>> </property>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Any other configs that might be needed for this?
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Here is the stacktrace.
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> >>>> 8020, call
>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> >>>> 10.0.53.131:24059: error:
>>>> org.apache.hadoop.security.AccessControlException:
>>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>>> denied:
>>>> >>>> user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>>>
>>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>>>
>>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>>> Method)
>>>> >>>>
>>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>
>

>>>> >>      ... 5 more
>>>> >> Jun 18, 2013 3:20:20 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0
>>>> ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>>> getResources
>>>> >> INFO: Received completed container
>>>> container_1371593763906_0001_01_000003
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>>> ContAlloc:2
>>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>>> >> Jun 18, 2013 3:20:21 PM
>>>> >>
>>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>>> >> transition
>>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>>> >> Container killed by the ApplicationMaster.
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>>> cnauroth@hortonworks.com>
>>>> >> wrote:
>>>> >>>
>>>> >>> Prashant, can you provide more details about what you're doing when
>>>> you
>>>> >>> see this error?  Are you submitting a MapReduce job, running an
>>>> HDFS shell
>>>> >>> command, or doing some other action?  It's possible that we're also
>>>> seeing
>>>> >>> an interaction with some other change in 2.x that triggers a
>>>> setPermission
>>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the
>>>> code in
>>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>>> >>> wouldn't have seen the problem.
>>>> >>>
>>>> >>> I'd like to gather these details for submitting a new bug report to
>>>> HDFS.
>>>> >>> Thanks!
>>>> >>>
>>>> >>> Chris Nauroth
>>>> >>> Hortonworks
>>>> >>> http://hortonworks.com/
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>> >>>>
>>>> >>>> I believe the property name should be “dfs.permissions”
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> >>>> To: user@hadoop.apache.org
>>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Hello,
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> >>>> question around disabling dfs permissions on the latter version.
>>>> For some
>>>> >>>> reason, setting the following config does not seem to work
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> <property>
>>>> >>>>
>>>> >>>>         <name>dfs.permissions.enabled</name>
>>>> >>>>
>>>> >>>>         <value>false</value>
>>>> >>>>
>>>> >>>> </property>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Any other configs that might be needed for this?
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> Here is the stacktrace.
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> >>>> 8020, call
>>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> >>>> 10.0.53.131:24059: error:
>>>> org.apache.hadoop.security.AccessControlException:
>>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>>> denied:
>>>> >>>> user=smehta, access=EXECUTE,
>>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>> >>>>
>>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>> >>>>
>>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>>> Method)
>>>> >>>>
>>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>> >>>>
>>>> >>>>         at
>>>> >>>>
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>> >>>>
>>>> >>>>         at
>>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>>
>>>> >>>
>>>> >>>
>>>> >>
>>>> >
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Chris Nauroth <cn...@hortonworks.com>.
Just in case anyone is curious who didn't look at HDFS-4918, we established
that this is actually expected behavior, and it's mentioned in the
documentation.  However, I filed HDFS-4919 to make the information clearer
in the documentation, since this caused some confusion.

https://issues.apache.org/jira/browse/HDFS-4919

Chris Nauroth
Hortonworks
http://hortonworks.com/
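
For reference, the documented behavior being pointed to (summarized from memory, so
check the permissions guide for the exact wording) is that even with
dfs.permissions.enabled set to false, calls such as setPermission/setOwner still
verify that the caller owns the path or is a superuser, and in this version that
check also traverses the ancestor directories, as the stack traces below show. A
quick way to see it from the shell, assuming a user who does not own /mapred:

    # run as a non-owner, e.g. smehta, even with permission checking "disabled"
    hdfs dfs -chmod 770 /mapred/history/done_intermediate/smehta
    # -> AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred"...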



On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi
<pr...@gmail.com>wrote:

> Thanks guys, I will follow the discussion there.
>
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>
>> Yes, and I think this was led by Snapshot.
>>
>> I've filed a JIRA here:
>> https://issues.apache.org/jira/browse/HDFS-4918
>>
>>
>>
>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>>> This is an HDFS bug. Like all other methods that check for permissions
>>> being enabled, the client call of setPermission should check it as
>>> well. It does not do that currently and I believe it should be a NOP
>>> in such a case. Please do file a JIRA (and reference the ID here to
>>> close the loop)!
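
One way to read that suggestion is sketched below; this is only an illustration
with hypothetical names, not the actual FSNamesystem code or a committed patch:

    // Sketch of the proposed guard: only enforce the ownership/traverse check
    // when dfs.permissions.enabled is true; the chmod itself still happens.
    class PermissionGuardSketch {
      private final boolean isPermissionEnabled; // from dfs.permissions.enabled

      PermissionGuardSketch(boolean isPermissionEnabled) {
        this.isPermissionEnabled = isPermissionEnabled;
      }

      void setPermission(String src, short perm) {
        if (isPermissionEnabled) {
          checkOwner(src);            // owner-or-superuser check, as in the stack trace
        }
        applyPermission(src, perm);   // update the inode's permission bits either way
      }

      private void checkOwner(String src) { /* owner/superuser check */ }
      private void applyPermission(String src, short perm) { /* update inode */ }
    }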
>>>
>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>> <pr...@gmail.com> wrote:
>>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>>> > Failure occurs while setting perms on the "intermediate done directory".
>>> > Here is what I think is happening:
>>> >
>>> > 1. The intermediate done dir is (ideally) created as part of deployment
>>> > (e.g. /mapred/history/done_intermediate)
>>> >
>>> > 2. When an MR job is run, it creates a user dir within the intermediate
>>> > done dir (/mapred/history/done_intermediate/username)
>>> >
>>> > 3. After this dir is created, the code tries to set permissions on this
>>> > user dir. In doing so, it checks for EXECUTE permission not just on its
>>> > parent (/mapred/history/done_intermediate) but on every dir up to the
>>> > top-most level (/mapred). This fails because "/mapred" does not grant
>>> > execute permission to "other" users.
>>> >
>>> > 4. On successive job runs, since the user dir already exists
>>> > (/mapred/history/done_intermediate/username), it no longer tries to
>>> > create it and set permissions again, and the job completes without any
>>> > perm errors.
>>> >
>>> > This is the code within JobHistoryEventHandler that's doing it.
>>> >
>>> >   //Check for the existence of intermediate done dir.
>>> >   Path doneDirPath = null;
>>> >   try {
>>> >     doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>>> >     doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>> >     // This directory will be in a common location, or this may be a cluster
>>> >     // meant for a single user. Creating based on the conf. Should ideally be
>>> >     // created by the JobHistoryServer or as part of deployment.
>>> >     if (!doneDirFS.exists(doneDirPath)) {
>>> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>> >         LOG.info("Creating intermediate history logDir: ["
>>> >             + doneDirPath
>>> >             + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>> >         mkdir(
>>> >             doneDirFS,
>>> >             doneDirPath,
>>> >             new FsPermission(
>>> >                 JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>> >                     .toShort()));
>>> >         // TODO Temporary toShort till new FsPermission(FsPermissions)
>>> >         // respects
>>> >         // sticky
>>> >       } else {
>>> >         String message = "Not creating intermediate history logDir: ["
>>> >             + doneDirPath
>>> >             + "] based on conf: "
>>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>> >             + ". Either set to true or pre-create this directory with"
>>> >             + " appropriate permissions";
>>> >         LOG.error(message);
>>> >         throw new YarnException(message);
>>> >       }
>>> >     }
>>> >   } catch (IOException e) {
>>> >     LOG.error("Failed checking for the existance of history intermediate "
>>> >         + "done directory: [" + doneDirPath + "]");
>>> >     throw new YarnException(e);
>>> >   }
>>> >
>>> >
>>> > In any case, this does not appear to be the right behavior, as it does
>>> > not respect "dfs.permissions.enabled" (set to false) at any point.
>>> > Sounds like a bug?
>>> >
>>> >
>>> > Thanks, Prashant
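
A minimal workaround sketch for the first-run failure described above, assuming
the default /mapred/history layout from these logs and that opening these
directories up is acceptable in your environment (run as the HDFS superuser):

    hadoop fs -mkdir -p /mapred/history/done_intermediate
    # ancestors need at least execute for others so the traverse check passes
    hadoop fs -chmod o+x /mapred /mapred/history
    # world-writable plus sticky bit so each user can create and keep their own sub-dir
    hadoop fs -chmod 1777 /mapred/history/done_intermediate

With the per-user dir creatable (or pre-created) up front, later runs skip the
setPermission call entirely, which is consistent with the observation that only
the first attempt fails.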
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>>> prash1784@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hi Chris,
>>> >>
>>> >> This is while running a MR job. Please note the job is able to write
>>> files
>>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging
>>> in some
>>> >> more, it looks like the failure occurs after writing to
>>> >> "/mapred/history/done_intermediate".
>>> >>
>>> >> Here is a more detailed stacktrace.
>>> >>
>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>> >> Jun 18, 2013 3:20:20 PM
>>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> >> closeEventWriter
>>> >> INFO: Unable to write out JobSummaryInfo to
>>> >>
>>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> >> user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by:
>>> >>
>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>> >> Permission denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM
>>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>> >> org.apache.hadoop.yarn.YarnException:
>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> >> user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>>> Permission
>>> >> denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      ... 2 more
>>> >> Caused by:
>>> >>
>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>> >> Permission denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>> ContAlloc:2
>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM
>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>> getResources
>>> >> INFO: Received completed container
>>> container_1371593763906_0001_01_000003
>>> >> Jun 18, 2013 3:20:21 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>> ContAlloc:2
>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>> >> transition
>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>> >> Container killed by the ApplicationMaster.
>>> >>
>>> >>
>>> >>
>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>> cnauroth@hortonworks.com>
>>> >> wrote:
>>> >>>
>>> >>> Prashant, can you provide more details about what you're doing when
>>> you
>>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>>> shell
>>> >>> command, or doing some other action?  It's possible that we're also
>>> seeing
>>> >>> an interaction with some other change in 2.x that triggers a
>>> setPermission
>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>>> in
>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> >>> wouldn't have seen the problem.
>>> >>>
>>> >>> I'd like to gather these details for submitting a new bug report to
>>> HDFS.
>>> >>> Thanks!
>>> >>>
>>> >>> Chris Nauroth
>>> >>> Hortonworks
>>> >>> http://hortonworks.com/
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>> >>>>
>>> >>>> I believe the property name should be “dfs.permissions”
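
For what it's worth, in 2.x the old dfs.permissions key is, as far as I know,
kept only as a deprecated alias for dfs.permissions.enabled, so either spelling
should reach the same switch:

    <!-- 2.x reads dfs.permissions.enabled; dfs.permissions is the deprecated
         alias that maps to the same setting -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>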
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> >>>> To: user@hadoop.apache.org
>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Hello,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> >>>> question around disabling dfs permissions on the latter version.
>>> For some
>>> >>>> reason, setting the following config does not seem to work
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> <property>
>>> >>>>
>>> >>>>         <name>dfs.permissions.enabled</name>
>>> >>>>
>>> >>>>         <value>false</value>
>>> >>>>
>>> >>>> </property>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Any other configs that might be needed for this?
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Here is the stacktrace.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> >>>> 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>> >>>> 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException:
>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>> denied:
>>> >>>> user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>>>
>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>>>
>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>> Method)
>>> >>>>
>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>

>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>> >> transition
>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>> >> Container killed by the ApplicationMaster.
>>> >>
>>> >>
>>> >>
>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>> cnauroth@hortonworks.com>
>>> >> wrote:
>>> >>>
>>> >>> Prashant, can you provide more details about what you're doing when
>>> you
>>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>>> shell
>>> >>> command, or doing some other action?  It's possible that we're also
>>> seeing
>>> >>> an interaction with some other change in 2.x that triggers a
>>> setPermission
>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>>> in
>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> >>> wouldn't have seen the problem.
>>> >>>
>>> >>> I'd like to gather these details for submitting a new bug report to
>>> HDFS.
>>> >>> Thanks!
>>> >>>
>>> >>> Chris Nauroth
>>> >>> Hortonworks
>>> >>> http://hortonworks.com/
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>> >>>>
>>> >>>> I believe the property name should be “dfs.permissions”
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> >>>> To: user@hadoop.apache.org
>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Hello,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> >>>> question around disabling dfs permissions on the latter version.
>>> For some
>>> >>>> reason, setting the following config does not seem to work
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> <property>
>>> >>>>
>>> >>>>         <name>dfs.permissions.enabled</name>
>>> >>>>
>>> >>>>         <value>false</value>
>>> >>>>
>>> >>>> </property>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Any other configs that might be needed for this?
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Here is the stacktrace.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> >>>> 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>> >>>> 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException:
>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>> denied:
>>> >>>> user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>>>
>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>>>
>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>> Method)
>>> >>>>
>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Chris Nauroth <cn...@hortonworks.com>.
Just in case anyone who didn't look at HDFS-4918 is curious: we established
that this is actually expected behavior, and it is mentioned in the
documentation.  However, I filed HDFS-4919 to make the information clearer
there, since this caused some confusion.

https://issues.apache.org/jira/browse/HDFS-4919
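
For reference, the documented behavior is easy to reproduce from a client. The
sketch below is illustrative only: it assumes a 2.x cluster whose NameNode has
dfs.permissions.enabled set to false, a path such as /mapred owned by a
different user, and a made-up NameNode address. Even with permission checking
disabled, chmod/chgrp/chown style calls still verify ownership, which is the
checkOwner step visible in the stack traces in this thread.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.security.AccessControlException;

public class SetPermissionBehaviorCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Illustrative address; point this at your own NameNode.
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

    FileSystem fs = FileSystem.get(conf);
    // A directory owned by somebody else, e.g. the /mapred dir from this thread.
    Path target = new Path("/mapred");

    try {
      // Ownership is enforced for setPermission regardless of
      // dfs.permissions.enabled, so this fails for a non-owner, non-superuser.
      fs.setPermission(target, new FsPermission((short) 0777));
      System.out.println("setPermission succeeded (caller is owner or superuser)");
    } catch (AccessControlException e) {
      System.out.println("Rejected as documented: " + e.getMessage());
    }
  }
}

Run as the directory's owner or as the HDFS superuser, the same call should go
through.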

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 18, 2013 at 10:42 PM, Prashant Kommireddi
<pr...@gmail.com>wrote:

> Thanks guys, I will follow the discussion there.
>
>
> On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:
>
>> Yes, and I think this was introduced by the Snapshot work.
>>
>> I've filed a JIRA here:
>> https://issues.apache.org/jira/browse/HDFS-4918
>>
>>
>>
>> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>>
>>> This is an HDFS bug. Like all other methods that check for permissions
>>> being enabled, the client call of setPermission should check it as
>>> well. It does not do that currently and I believe it should be a NOP
>>> in such a case. Please do file a JIRA (and reference the ID here to
>>> close the loop)!
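
For what it's worth, the kind of no-op guard described above would look roughly
like the sketch below. This is hypothetical code, not the actual DFSClient or
FSNamesystem implementation (and, as noted at the top of this message,
HDFS-4918 ultimately concluded the existing behavior is intentional); it only
shows where a dfs.permissions.enabled check could short-circuit the call.

import java.io.IOException;

// Hypothetical sketch only; names and structure are simplified and do not
// correspond to the real Hadoop classes.
class SetPermissionGuardSketch {
  private final boolean permissionsEnabled; // mirrors dfs.permissions.enabled

  SetPermissionGuardSketch(boolean permissionsEnabled) {
    this.permissionsEnabled = permissionsEnabled;
  }

  void setPermission(String src, short mode) throws IOException {
    if (!permissionsEnabled) {
      // Permission checking is disabled: treat the call as a no-op instead of
      // running the owner check that currently rejects non-owners.
      return;
    }
    checkOwner(src);            // would throw AccessControlException otherwise
    applyPermission(src, mode); // actually update the inode's mode
  }

  private void checkOwner(String src) throws IOException { /* elided */ }

  private void applyPermission(String src, short mode) { /* elided */ }
}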
>>>
>>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>>> <pr...@gmail.com> wrote:
>>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>>> > Failure occurs while setting perms on "intermediate done directory".
>>> Here is
>>> > what I think is happening:
>>> >
>>> > 1. Intermediate done dir is (ideally) created as part of deployment
>>> (e.g.,
>>> > /mapred/history/done_intermediate)
>>> >
>>> > 2. When a MR job is run, it creates a user dir within intermediate
>>> done dir
>>> > (/mapred/history/done_intermediate/username)
>>> >
>>> > 3. After this dir is created, the code tries to set permissions on
>>> this user
>>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>>> parent
>>> > (/mapred/history/done_intermediate) but across all dirs to the top-most
>>> > level (/mapred). This fails as "/mapred" does not have execute
>>> permissions
>>> > for the "Other" users.
>>> >
>>> > 4. On successive job runs, since the user dir already exists
>>> > (/mapred/history/done_intermediate/username) it no longer tries to
>>> create
>>> > and set permissions again. And the job completes without any perm
>>> errors.
>>> >
>>> > This is the code within JobHistoryEventHandler that's doing it.
>>> >
>>> >     //Check for the existence of intermediate done dir.
>>> >     Path doneDirPath = null;
>>> >     try {
>>> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>>> >       // This directory will be in a common location, or this may be a cluster
>>> >       // meant for a single user. Creating based on the conf. Should ideally be
>>> >       // created by the JobHistoryServer or as part of deployment.
>>> >       if (!doneDirFS.exists(doneDirPath)) {
>>> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>>> >           LOG.info("Creating intermediate history logDir: ["
>>> >               + doneDirPath
>>> >               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>>> >           mkdir(
>>> >               doneDirFS,
>>> >               doneDirPath,
>>> >               new FsPermission(
>>> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>>> >                       .toShort()));
>>> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
>>> >           // respects
>>> >           // sticky
>>> >         } else {
>>> >           String message = "Not creating intermediate history logDir: ["
>>> >               + doneDirPath
>>> >               + "] based on conf: "
>>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>>> >               + ". Either set to true or pre-create this directory with"
>>> >               + " appropriate permissions";
>>> >           LOG.error(message);
>>> >           throw new YarnException(message);
>>> >         }
>>> >       }
>>> >     } catch (IOException e) {
>>> >       LOG.error("Failed checking for the existance of history intermediate "
>>> >           + "done directory: [" + doneDirPath + "]");
>>> >       throw new YarnException(e);
>>> >     }
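
For completeness, the "pre-create this directory with appropriate permissions"
route that the error message above suggests can be scripted against the
FileSystem API. The sketch below is illustrative only: the path is the one
discussed in this thread (adjust it to whatever
mapreduce.jobhistory.intermediate-done-dir points at in your setup), the 1777
mode is an assumption rather than a recommendation, and it needs to run as the
HDFS superuser or the eventual owner so the chmod is allowed.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateIntermediateDoneDir {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Intermediate done dir from this thread; substitute your own location.
    Path doneIntermediate = new Path("/mapred/history/done_intermediate");

    // 1777 (world-writable plus sticky bit) is an assumption; use whatever
    // your deployment considers appropriate permissions.
    FsPermission mode = new FsPermission((short) 01777);

    if (!fs.exists(doneIntermediate)) {
      fs.mkdirs(doneIntermediate, mode);
    }
    // mkdirs applies the client umask to the requested mode, so set it
    // explicitly afterwards to get exactly the mode above.
    fs.setPermission(doneIntermediate, mode);

    System.out.println("Ensured " + doneIntermediate + " exists with mode " + mode);
  }
}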
>>> >
>>> >
>>> > In any case, this does not appear to be the right behavior as it does
>>> not
>>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>>> like a
>>> > bug?
>>> >
>>> >
>>> > Thanks, Prashant
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>>> prash1784@gmail.com>
>>> > wrote:
>>> >>
>>> >> Hi Chris,
>>> >>
>>> >> This is while running a MR job. Please note the job is able to write
>>> files
>>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging
>>> in some
>>> >> more, it looks like the failure occurs after writing to
>>> >> "/mapred/history/done_intermediate".
>>> >>
>>> >> Here is a more detailed stacktrace.
>>> >>
>>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>>> >> Jun 18, 2013 3:20:20 PM
>>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>>> >> closeEventWriter
>>> >> INFO: Unable to write out JobSummaryInfo to
>>> >>
>>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> >> user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by:
>>> >>
>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>> >> Permission denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM
>>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>>> >> org.apache.hadoop.yarn.YarnException:
>>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> >> user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>>> >>      at java.lang.Thread.run(Thread.java:662)
>>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>>> Permission
>>> >> denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> >>      at
>>> java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>>> >>      at
>>> >>
>>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>>> >>      ... 2 more
>>> >> Caused by:
>>> >>
>>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>>> >> Permission denied: user=smehta, access=EXECUTE,
>>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>      at java.security.AccessController.doPrivileged(Native Method)
>>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>      at
>>> >>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>
>>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>>> >>      at
>>> >>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>>> >>      at $Proxy9.setPermission(Unknown Source)
>>> >>      at
>>> >>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>      at
>>> >>
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>> >>      at
>>> >>
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>>> >>      at
>>> >>
>>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>>> >>      at $Proxy10.setPermission(Unknown Source)
>>> >>      at
>>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>>> >>      ... 5 more
>>> >> Jun 18, 2013 3:20:20 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>>> ContAlloc:2
>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM
>>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
>>> getResources
>>> >> INFO: Received completed container
>>> container_1371593763906_0001_01_000003
>>> >> Jun 18, 2013 3:20:21 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>>> ContAlloc:2
>>> >> ContRel:0 HostLocal:0 RackLocal:1
>>> >> Jun 18, 2013 3:20:21 PM
>>> >>
>>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>>> >> transition
>>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>>> >> Container killed by the ApplicationMaster.
>>> >>
>>> >>
>>> >>
>>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>>> cnauroth@hortonworks.com>
>>> >> wrote:
>>> >>>
>>> >>> Prashant, can you provide more details about what you're doing when
>>> you
>>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>>> shell
>>> >>> command, or doing some other action?  It's possible that we're also
>>> seeing
>>> >>> an interaction with some other change in 2.x that triggers a
>>> setPermission
>>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>>> in
>>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> >>> wouldn't have seen the problem.
>>> >>>
>>> >>> I'd like to gather these details for submitting a new bug report to
>>> HDFS.
>>> >>> Thanks!
>>> >>>
>>> >>> Chris Nauroth
>>> >>> Hortonworks
>>> >>> http://hortonworks.com/
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>> >>>>
>>> >>>> I believe the property name should be “dfs.permissions”
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> >>>> To: user@hadoop.apache.org
>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Hello,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> >>>> question around disabling dfs permissions on the latter version.
>>> For some
>>> >>>> reason, setting the following config does not seem to work
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> <property>
>>> >>>>
>>> >>>>         <name>dfs.permissions.enabled</name>
>>> >>>>
>>> >>>>         <value>false</value>
>>> >>>>
>>> >>>> </property>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Any other configs that might be needed for this?
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Here is the stacktrace.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> >>>> 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>> >>>> 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException:
>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>> denied:
>>> >>>> user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>>>
>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>>>
>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>> Method)
>>> >>>>
>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>

>>> >>>
>>> >>> I'd like to gather these details for submitting a new bug report to
>>> HDFS.
>>> >>> Thanks!
>>> >>>
>>> >>> Chris Nauroth
>>> >>> Hortonworks
>>> >>> http://hortonworks.com/
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>> >>>>
>>> >>>> I believe the property name should be “dfs.permissions”
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> >>>> To: user@hadoop.apache.org
>>> >>>> Subject: DFS Permissions on Hadoop 2.x
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Hello,
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> >>>> question around disabling dfs permissions on the latter version.
>>> For some
>>> >>>> reason, setting the following config does not seem to work
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> <property>
>>> >>>>
>>> >>>>         <name>dfs.permissions.enabled</name>
>>> >>>>
>>> >>>>         <value>false</value>
>>> >>>>
>>> >>>> </property>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Any other configs that might be needed for this?
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> Here is the stacktrace.
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> >>>> 8020, call
>>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>> >>>> 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException:
>>> >>>> Permission denied: user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>> org.apache.hadoop.security.AccessControlException: Permission
>>> denied:
>>> >>>> user=smehta, access=EXECUTE,
>>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>> >>>>
>>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>> >>>>
>>> >>>>         at java.security.AccessController.doPrivileged(Native
>>> Method)
>>> >>>>
>>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>> >>>>
>>> >>>>         at
>>> >>>>
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>> >>>>
>>> >>>>         at
>>> org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks guys, I will follow the discussion there.


On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:

> Yes, and I think this was caused by the Snapshot feature.
>
> I've filed a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
>
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> This is an HDFS bug. Like all other methods that check for permissions
>> being enabled, the client call of setPermission should check it as
>> well. It does not do that currently and I believe it should be a NOP
>> in such a case. Please do file a JIRA (and reference the ID here to
>> close the loop)!
>>
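(For illustration only: the no-op behaviour described above boils down to consulting dfs.permissions.enabled before enforcing anything. Below is a minimal client-side sketch of that guard; it assumes the caller's Configuration carries the same dfs.permissions.enabled value as the NameNode, and the class name and the 0770 mode are made up for the example. The actual fix discussed in HDFS-4918 would live inside HDFS itself, not in caller code.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetPermissionIfEnabled {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path target = new Path(args[0]);
    // Only issue the setPermission RPC when the cluster is expected to
    // enforce permissions; otherwise treat the request as a no-op.
    if (conf.getBoolean("dfs.permissions.enabled", true)) {
      fs.setPermission(target, new FsPermission((short) 0770));
    }
  }
}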
>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>> <pr...@gmail.com> wrote:
>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>> > Failure occurs while setting perms on "intermediate done directory".
>> Here is
>> > what I think is happening:
>> >
>> > 1. Intermediate done dir is (ideally) created as part of deployment
>> (for eg,
>> > /mapred/history/done_intermediate)
>> >
>> > 2. When a MR job is run, it creates a user dir within intermediate done
>> dir
>> > (/mapred/history/done_intermediate/username)
>> >
>> > 3. After this dir is created, the code tries to set permissions on this
>> user
>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>> parent
>> > (/mapred/history/done_intermediate) but across all dirs to the top-most
>> > level (/mapred). This fails as "/mapred" does not have execute
>> permissions
>> > for the "Other" users.
>> >
>> > 4. On successive job runs, since the user dir already exists
>> > (/mapred/history/done_intermediate/username) it no longer tries to
>> create
>> > and set permissions again. And the job completes without any perm
>> errors.
>> >
>> > This is the code within JobHistoryEventHandler that's doing it.
>> >
>> > //Check for the existence of intermediate done dir.
>> > Path doneDirPath = null;
>> > try {
>> >   doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>> >   doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>> >   // This directory will be in a common location, or this may be a cluster
>> >   // meant for a single user. Creating based on the conf. Should ideally be
>> >   // created by the JobHistoryServer or as part of deployment.
>> >   if (!doneDirFS.exists(doneDirPath)) {
>> >     if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>> >       LOG.info("Creating intermediate history logDir: ["
>> >           + doneDirPath
>> >           + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>> >           + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>> >       mkdir(
>> >           doneDirFS,
>> >           doneDirPath,
>> >           new FsPermission(
>> >               JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>> >                   .toShort()));
>> >       // TODO Temporary toShort till new FsPermission(FsPermissions)
>> >       // respects sticky
>> >     } else {
>> >       String message = "Not creating intermediate history logDir: ["
>> >           + doneDirPath
>> >           + "] based on conf: "
>> >           + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>> >           + ". Either set to true or pre-create this directory with"
>> >           + " appropriate permissions";
>> >       LOG.error(message);
>> >       throw new YarnException(message);
>> >     }
>> >   }
>> > } catch (IOException e) {
>> >   LOG.error("Failed checking for the existance of history intermediate "
>> >       + "done directory: [" + doneDirPath + "]");
>> >   throw new YarnException(e);
>> > }
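(The traverse check described in point 3 walks every ancestor of the new user directory, so a writable leaf is not enough. A small diagnostic sketch along these lines prints the permission and owner of each existing ancestor, which makes it obvious which level is missing the EXECUTE bit for "other" users; the path and class name are illustrative, taken from the stack trace above.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintAncestorPermissions {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/mapred/history/done_intermediate/smehta");
    // Walk from the target up to the root, printing each existing level.
    for (Path cur = p; cur != null; cur = cur.getParent()) {
      if (fs.exists(cur)) {
        FileStatus st = fs.getFileStatus(cur);
        System.out.println(st.getPermission() + " " + st.getOwner() + " " + cur);
      }
    }
  }
}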
>> >
>> >
>> > In any case, this does not appear to be the right behavior as it does
>> not
>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>> like a
>> > bug?
>> >
>> >
>> > Thanks, Prashant
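(The error string in the snippet above already points at the practical workaround: pre-create these directories with appropriate permissions as part of deployment, before the first job runs. A rough sketch follows; the class name and the 0771/0777 modes are assumptions chosen only to illustrate the idea, so pick modes that match your security requirements and run this as the directory owner or the HDFS superuser. The same setup can also be done once with hdfs dfs -mkdir and -chmod during deployment.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path mapred = new Path("/mapred");
    Path history = new Path("/mapred/history");
    Path doneIntermediate = new Path("/mapred/history/done_intermediate");
    if (!fs.exists(doneIntermediate)) {
      fs.mkdirs(doneIntermediate);
    }
    // Give the ancestors the execute bit for "other" so the traverse check
    // passes, and open up the intermediate done dir so every submitting
    // user can create its own sub-directory under it.
    fs.setPermission(mapred, new FsPermission((short) 0771));
    fs.setPermission(history, new FsPermission((short) 0771));
    fs.setPermission(doneIntermediate, new FsPermission((short) 0777));
  }
}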
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>> prash1784@gmail.com>
>> > wrote:
>> >>
>> >> Hi Chris,
>> >>
>> >> This is while running a MR job. Please note the job is able to write
>> files
>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
>> some
>> >> more, it looks like the failure occurs after writing to
>> >> "/mapred/history/done_intermediate".
>> >>
>> >> Here is a more detailed stacktrace.
>> >>
>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> >> closeEventWriter
>> >> INFO: Unable to write out JobSummaryInfo to
>> >>
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> >> org.apache.hadoop.yarn.YarnException:
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>> Permission
>> >> denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      ... 2 more
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> >> INFO: Received completed container
>> container_1371593763906_0001_01_000003
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> >> transition
>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> >> Container killed by the ApplicationMaster.
>> >>
>> >>
>> >>
>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>> cnauroth@hortonworks.com>
>> >> wrote:
>> >>>
>> >>> Prashant, can you provide more details about what you're doing when
>> you
>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>> shell
>> >>> command, or doing some other action?  It's possible that we're also
>> seeing
>> >>> an interaction with some other change in 2.x that triggers a
>> setPermission
>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>> in
>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>> >>> wouldn't have seen the problem.
>> >>>
>> >>> I'd like to gather these details for submitting a new bug report to
>> HDFS.
>> >>> Thanks!
>> >>>
>> >>> Chris Nauroth
>> >>> Hortonworks
>> >>> http://hortonworks.com/
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>> >>>>
>> >>>> I believe the property name should be “dfs.permissions”
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>> >>>> To: user@hadoop.apache.org
>> >>>> Subject: DFS Permissions on Hadoop 2.x
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hello,
>> >>>>
>> >>>>
>> >>>>
>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> >>>> question around disabling dfs permissions on the latter version. For
>> some
>> >>>> reason, setting the following config does not seem to work
>> >>>>
>> >>>>
>> >>>>
>> >>>> <property>
>> >>>>
>> >>>>         <name>dfs.permissions.enabled</name>
>> >>>>
>> >>>>         <value>false</value>
>> >>>>
>> >>>> </property>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Any other configs that might be needed for this?
>> >>>>
>> >>>>
>> >>>>
>> >>>> Here is the stacktrace.
>> >>>>
>> >>>>
>> >>>>
>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>> >>>> 8020, call
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> >>>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException:
>> >>>> Permission denied: user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >>>> user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>>>
>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>> >>>>
>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks guys, I will follow the discussion there.


On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:

> Yes, and I think this was caused by the Snapshot feature.
>
> I've filed a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
>
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> This is an HDFS bug. Like all other methods that check for permissions
>> being enabled, the client call of setPermission should check it as
>> well. It does not do that currently and I believe it should be a NOP
>> in such a case. Please do file a JIRA (and reference the ID here to
>> close the loop)!
>>
>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>> <pr...@gmail.com> wrote:
>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>> > Failure occurs while setting perms on "intermediate done directory".
>> Here is
>> > what I think is happening:
>> >
>> > 1. Intermediate done dir is (ideally) created as part of deployment
>> (for eg,
>> > /mapred/history/done_intermediate)
>> >
>> > 2. When a MR job is run, it creates a user dir within intermediate done
>> dir
>> > (/mapred/history/done_intermediate/username)
>> >
>> > 3. After this dir is created, the code tries to set permissions on this
>> user
>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>> parent
>> > (/mapred/history/done_intermediate) but across all dirs to the top-most
>> > level (/mapred). This fails as "/mapred" does not have execute
>> permissions
>> > for the "Other" users.
>> >
>> > 4. On successive job runs, since the user dir already exists
>> > (/mapred/history/done_intermediate/username) it no longer tries to
>> create
>> > and set permissions again. And the job completes without any perm
>> errors.
>> >
>> > This is the code within JobHistoryEventHandler that's doing it.
>> >
>> > //Check for the existence of intermediate done dir.
>> > Path doneDirPath = null;
>> > try {
>> >   doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>> >   doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>> >   // This directory will be in a common location, or this may be a cluster
>> >   // meant for a single user. Creating based on the conf. Should ideally be
>> >   // created by the JobHistoryServer or as part of deployment.
>> >   if (!doneDirFS.exists(doneDirPath)) {
>> >     if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>> >       LOG.info("Creating intermediate history logDir: ["
>> >           + doneDirPath
>> >           + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>> >           + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>> >       mkdir(
>> >           doneDirFS,
>> >           doneDirPath,
>> >           new FsPermission(
>> >               JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>> >                   .toShort()));
>> >       // TODO Temporary toShort till new FsPermission(FsPermissions)
>> >       // respects sticky
>> >     } else {
>> >       String message = "Not creating intermediate history logDir: ["
>> >           + doneDirPath
>> >           + "] based on conf: "
>> >           + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>> >           + ". Either set to true or pre-create this directory with"
>> >           + " appropriate permissions";
>> >       LOG.error(message);
>> >       throw new YarnException(message);
>> >     }
>> >   }
>> > } catch (IOException e) {
>> >   LOG.error("Failed checking for the existance of history intermediate "
>> >       + "done directory: [" + doneDirPath + "]");
>> >   throw new YarnException(e);
>> > }
>> >
>> >
>> > In any case, this does not appear to be the right behavior as it does
>> not
>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>> like a
>> > bug?
>> >
>> >
>> > Thanks, Prashant
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>> prash1784@gmail.com>
>> > wrote:
>> >>
>> >> Hi Chris,
>> >>
>> >> This is while running a MR job. Please note the job is able to write
>> files
>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
>> some
>> >> more, it looks like the failure occurs after writing to
>> >> "/mapred/history/done_intermediate".
>> >>
>> >> Here is a more detailed stacktrace.
>> >>
>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> >> closeEventWriter
>> >> INFO: Unable to write out JobSummaryInfo to
>> >>
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> >> org.apache.hadoop.yarn.YarnException:
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>> Permission
>> >> denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      ... 2 more
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> >> INFO: Received completed container
>> container_1371593763906_0001_01_000003
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> >> transition
>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> >> Container killed by the ApplicationMaster.
>> >>
>> >>
>> >>
>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>> cnauroth@hortonworks.com>
>> >> wrote:
>> >>>
>> >>> Prashant, can you provide more details about what you're doing when
>> you
>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>> shell
>> >>> command, or doing some other action?  It's possible that we're also
>> seeing
>> >>> an interaction with some other change in 2.x that triggers a
>> setPermission
>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>> in
>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>> >>> wouldn't have seen the problem.
>> >>>
>> >>> I'd like to gather these details for submitting a new bug report to
>> HDFS.
>> >>> Thanks!
>> >>>
>> >>> Chris Nauroth
>> >>> Hortonworks
>> >>> http://hortonworks.com/
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>> >>>>
>> >>>> I believe, the properties name should be “dfs.permissions”
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>> >>>> To: user@hadoop.apache.org
>> >>>> Subject: DFS Permissions on Hadoop 2.x
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hello,
>> >>>>
>> >>>>
>> >>>>
>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> >>>> question around disabling dfs permissions on the latter version. For
>> some
>> >>>> reason, setting the following config does not seem to work
>> >>>>
>> >>>>
>> >>>>
>> >>>> <property>
>> >>>>
>> >>>>         <name>dfs.permissions.enabled</name>
>> >>>>
>> >>>>         <value>false</value>
>> >>>>
>> >>>> </property>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Any other configs that might be needed for this?
>> >>>>
>> >>>>
>> >>>>
>> >>>> Here is the stacktrace.
>> >>>>
>> >>>>
>> >>>>
>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>> >>>> 8020, call
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> >>>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException:
>> >>>> Permission denied: user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >>>> user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>>>
>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>> >>>>
>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks guys, I will follow the discussion there.


On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:

> Yes, and I think this was caused by the Snapshot feature.
>
> I've filed a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
>
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> This is a HDFS bug. Like all other methods that check for permissions
>> being enabled, the client call of setPermission should check it as
>> well. It does not do that currently and I believe it should be a NOP
>> in such a case. Please do file a JIRA (and reference the ID here to
>> close the loop)!
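A minimal sketch of the guard Harsh describes, written at the application level rather than inside the HDFS client (the helper class, and reading dfs.permissions.enabled from the caller's own Configuration, are assumptions for illustration only; this is not the HDFS-4918 patch):

// Sketch only: skip the chmod entirely when the configuration says
// permissions are disabled, so setPermission becomes a no-op.
// Note: this reads the client-side copy of the key, which may not
// match what the NameNode was actually started with.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionAwareChmod {
  public static void setPermissionIfEnabled(FileSystem fs, Configuration conf,
      Path path, FsPermission perm) throws IOException {
    // dfs.permissions.enabled defaults to true on 2.x
    if (!conf.getBoolean("dfs.permissions.enabled", true)) {
      return; // NOP, mirroring what permission-checked server paths already do
    }
    fs.setPermission(path, perm);
  }
}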
>>
>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>> <pr...@gmail.com> wrote:
>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>> > Failure occurs while setting perms on "intermediate done directory".
>> Here is
>> > what I think is happening:
>> >
>> > 1. Intermediate done dir is (ideally) created as part of deployment
>> (for eg,
>> > /mapred/history/done_intermediate)
>> >
>> > 2. When a MR job is run, it creates a user dir within intermediate done
>> dir
>> > (/mapred/history/done_intermediate/username)
>> >
>> > 3. After this dir is created, the code tries to set permissions on this
>> user
>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>> parent
>> > (/mapred/history/done_intermediate) but across all dirs to the top-most
>> > level (/mapred). This fails as "/mapred" does not have execute
>> permissions
>> > for the "Other" users.
>> >
>> > 4. On successive job runs, since the user dir already exists
>> > (/mapred/history/done_intermediate/username) it no longer tries to
>> create
>> > and set permissions again. And the job completes without any perm
>> errors.
>> >
>> > This is the code within JobHistoryEventHandler that's doing it.
>> >
>> >     //Check for the existence of intermediate done dir.
>> >     Path doneDirPath = null;
>> >     try {
>> >       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>> >       // This directory will be in a common location, or this may be a cluster
>> >       // meant for a single user. Creating based on the conf. Should ideally be
>> >       // created by the JobHistoryServer or as part of deployment.
>> >       if (!doneDirFS.exists(doneDirPath)) {
>> >         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>> >           LOG.info("Creating intermediate history logDir: ["
>> >               + doneDirPath
>> >               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>> >           mkdir(
>> >               doneDirFS,
>> >               doneDirPath,
>> >               new FsPermission(
>> >                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>> >                       .toShort()));
>> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
>> >           // respects sticky
>> >         } else {
>> >           String message = "Not creating intermediate history logDir: ["
>> >               + doneDirPath
>> >               + "] based on conf: "
>> >               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>> >               + ". Either set to true or pre-create this directory with"
>> >               + " appropriate permissions";
>> >           LOG.error(message);
>> >           throw new YarnException(message);
>> >         }
>> >       }
>> >     } catch (IOException e) {
>> >       LOG.error("Failed checking for the existance of history intermediate "
>> >           + "done directory: [" + doneDirPath + "]");
>> >       throw new YarnException(e);
>> >     }
>> >
>> >
>> > In any case, this does not appear to be the right behavior as it does
>> not
>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>> like a
>> > bug?
>> >
>> >
>> > Thanks, Prashant
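The error string in the snippet above ("Either set to true or pre-create this directory with appropriate permissions") points at the usual workaround: create the intermediate done directory during deployment and relax the ancestor modes so the EXECUTE traversal described in step 3 passes. A rough sketch, assuming an HDFS superuser runs it and that 771/1777 modes are acceptable for the site (paths are the ones from this thread):

// Sketch: pre-create the JobHistory intermediate done dir and relax the
// ancestors so other users' ApplicationMasters can traverse them.
// Paths and modes are assumptions; adjust to local policy.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PrecreateJobHistoryDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Ancestors need at least execute for "other" so the traverse check passes.
    fs.setPermission(new Path("/mapred"), new FsPermission((short) 0771));
    fs.setPermission(new Path("/mapred/history"), new FsPermission((short) 0771));

    // Pre-create the intermediate done dir so the AM finds it already present.
    Path intermediateDone = new Path("/mapred/history/done_intermediate");
    FsPermission open = new FsPermission((short) 01777); // rwxrwxrwt
    if (!fs.exists(intermediateDone)) {
      fs.mkdirs(intermediateDone, open);
    }
    // mkdirs() applies the umask, so set the final mode explicitly.
    fs.setPermission(intermediateDone, open);
    fs.close();
  }
}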
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>> prash1784@gmail.com>
>> > wrote:
>> >>
>> >> Hi Chris,
>> >>
>> >> This is while running a MR job. Please note the job is able to write
>> files
>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
>> some
>> >> more, it looks like the failure occurs after writing to
>> >> "/mapred/history/done_intermediate".
>> >>
>> >> Here is a more detailed stacktrace.
>> >>
>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> >> closeEventWriter
>> >> INFO: Unable to write out JobSummaryInfo to
>> >>
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> >> org.apache.hadoop.yarn.YarnException:
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>> Permission
>> >> denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      ... 2 more
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> >> INFO: Received completed container
>> container_1371593763906_0001_01_000003
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> >> transition
>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> >> Container killed by the ApplicationMaster.
>> >>
>> >>
>> >>
>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>> cnauroth@hortonworks.com>
>> >> wrote:
>> >>>
>> >>> Prashant, can you provide more details about what you're doing when
>> you
>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>> shell
>> >>> command, or doing some other action?  It's possible that we're also
>> seeing
>> >>> an interaction with some other change in 2.x that triggers a
>> setPermission
>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>> in
>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>> >>> wouldn't have seen the problem.
>> >>>
>> >>> I'd like to gather these details for submitting a new bug report to
>> HDFS.
>> >>> Thanks!
>> >>>
>> >>> Chris Nauroth
>> >>> Hortonworks
>> >>> http://hortonworks.com/
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>> >>>>
>> >>>> I believe, the properties name should be “dfs.permissions”
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>> >>>> To: user@hadoop.apache.org
>> >>>> Subject: DFS Permissions on Hadoop 2.x
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hello,
>> >>>>
>> >>>>
>> >>>>
>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> >>>> question around disabling dfs permissions on the latter version. For
>> some
>> >>>> reason, setting the following config does not seem to work
>> >>>>
>> >>>>
>> >>>>
>> >>>> <property>
>> >>>>
>> >>>>         <name>dfs.permissions.enabled</name>
>> >>>>
>> >>>>         <value>false</value>
>> >>>>
>> >>>> </property>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Any other configs that might be needed for this?
>> >>>>
>> >>>>
>> >>>>
>> >>>> Here is the stacktrace.
>> >>>>
>> >>>>
>> >>>>
>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>> >>>> 8020, call
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> >>>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException:
>> >>>> Permission denied: user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >>>> user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>>>
>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>> >>>>
>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks guys, I will follow the discussion there.


On Tue, Jun 18, 2013 at 10:10 PM, Azuryy Yu <az...@gmail.com> wrote:

> Yes, and I think this was lead by Snapshot.
>
> I've file a JIRA here:
> https://issues.apache.org/jira/browse/HDFS-4918
>
>
>
> On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> This is a HDFS bug. Like all other methods that check for permissions
>> being enabled, the client call of setPermission should check it as
>> well. It does not do that currently and I believe it should be a NOP
>> in such a case. Please do file a JIRA (and reference the ID here to
>> close the loop)!
>>
>> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
>> <pr...@gmail.com> wrote:
>> > Looks like the jobs fail only on the first attempt and pass thereafter.
>> > Failure occurs while setting perms on "intermediate done directory".
>> Here is
>> > what I think is happening:
>> >
>> > 1. Intermediate done dir is (ideally) created as part of deployment
>> (for eg,
>> > /mapred/history/done_intermediate)
>> >
>> > 2. When a MR job is run, it creates a user dir within intermediate done
>> dir
>> > (/mapred/history/done_intermediate/username)
>> >
>> > 3. After this dir is created, the code tries to set permissions on this
>> user
>> > dir. In doing so, it checks for EXECUTE permissions on not just its
>> parent
>> > (/mapred/history/done_intermediate) but across all dirs to the top-most
>> > level (/mapred). This fails as "/mapred" does not have execute
>> permissions
>> > for the "Other" users.
>> >
>> > 4. On successive job runs, since the user dir already exists
>> > (/mapred/history/done_intermediate/username) it no longer tries to
>> create
>> > and set permissions again. And the job completes without any perm
>> errors.
>> >
>> > This is the code within JobHistoryEventHandler that's doing it.
>> >
>> >  //Check for the existence of intermediate done dir.
>> >     Path doneDirPath = null;
>> >     try {
>> >       doneDirPath = FileSystem.get(conf).makeQualified(new
>> > Path(doneDirStr));
>> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>> >       // This directory will be in a common location, or this may be a
>> > cluster
>> >       // meant for a single user. Creating based on the conf. Should
>> ideally
>> > be
>> >       // created by the JobHistoryServer or as part of deployment.
>> >       if (!doneDirFS.exists(doneDirPath)) {
>> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>> >         LOG.info("Creating intermediate history logDir: ["
>> >             + doneDirPath
>> >             + "] + based on conf. Should ideally be created by the
>> > JobHistoryServer: "
>> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>> >           mkdir(
>> >               doneDirFS,
>> >               doneDirPath,
>> >               new FsPermission(
>> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>> >                 .toShort()));
>> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
>> >           // respects
>> >         // sticky
>> >       } else {
>> >           String message = "Not creating intermediate history logDir: ["
>> >                 + doneDirPath
>> >                 + "] based on conf: "
>> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>> >                 + ". Either set to true or pre-create this directory
>> with" +
>> >                 " appropriate permissions";
>> >         LOG.error(message);
>> >         throw new YarnException(message);
>> >       }
>> >       }
>> >     } catch (IOException e) {
>> >       LOG.error("Failed checking for the existance of history
>> intermediate "
>> > +
>> >                       "done directory: [" + doneDirPath + "]");
>> >       throw new YarnException(e);
>> >     }
>> >
>> >
>> > In any case, this does not appear to be the right behavior as it does
>> not
>> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
>> like a
>> > bug?
>> >
>> >
>> > Thanks, Prashant
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
>> prash1784@gmail.com>
>> > wrote:
>> >>
>> >> Hi Chris,
>> >>
>> >> This is while running a MR job. Please note the job is able to write
>> files
>> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
>> some
>> >> more, it looks like the failure occurs after writing to
>> >> "/mapred/history/done_intermediate".
>> >>
>> >> Here is a more detailed stacktrace.
>> >>
>> >> INFO: Job end notification started for jobID : job_1371593763906_0001
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> >> closeEventWriter
>> >> INFO: Unable to write out JobSummaryInfo to
>> >>
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> >> org.apache.hadoop.yarn.YarnException:
>> >> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >> user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> >>      at java.lang.Thread.run(Thread.java:662)
>> >> Caused by: org.apache.hadoop.security.AccessControlException:
>> Permission
>> >> denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>> Method)
>> >>      at
>> >>
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> >>      at
>> >>
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> >>      ... 2 more
>> >> Caused by:
>> >>
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> >> Permission denied: user=smehta, access=EXECUTE,
>> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>      at java.security.AccessController.doPrivileged(Native Method)
>> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>      at
>> >>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>
>> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> >>      at
>> >>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> >>      at $Proxy9.setPermission(Unknown Source)
>> >>      at
>> >>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >>      at
>> >>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >>      at
>> >>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >>      at java.lang.reflect.Method.invoke(Method.java:597)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> >>      at
>> >>
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> >>      at $Proxy10.setPermission(Unknown Source)
>> >>      at
>> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> >>      ... 5 more
>> >> Jun 18, 2013 3:20:20 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> >> INFO: Received completed container
>> container_1371593763906_0001_01_000003
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
>> ContAlloc:2
>> >> ContRel:0 HostLocal:0 RackLocal:1
>> >> Jun 18, 2013 3:20:21 PM
>> >>
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> >> transition
>> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> >> Container killed by the ApplicationMaster.
>> >>
>> >>
>> >>
>> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
>> cnauroth@hortonworks.com>
>> >> wrote:
>> >>>
>> >>> Prashant, can you provide more details about what you're doing when
>> you
>> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
>> shell
>> >>> command, or doing some other action?  It's possible that we're also
>> seeing
>> >>> an interaction with some other change in 2.x that triggers a
>> setPermission
>> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code
>> in
>> >>> 0.20.2 never triggered a setPermission call for your usage, then you
>> >>> wouldn't have seen the problem.
>> >>>
>> >>> I'd like to gather these details for submitting a new bug report to
>> HDFS.
>> >>> Thanks!
>> >>>
>> >>> Chris Nauroth
>> >>> Hortonworks
>> >>> http://hortonworks.com/
>> >>>
>> >>>
>> >>>
>> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>> >>>>
>> >>>> I believe, the properties name should be “dfs.permissions”
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
>> >>>> To: user@hadoop.apache.org
>> >>>> Subject: DFS Permissions on Hadoop 2.x
>> >>>>
>> >>>>
>> >>>>
>> >>>> Hello,
>> >>>>
>> >>>>
>> >>>>
>> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> >>>> question around disabling dfs permissions on the latter version. For
>> some
>> >>>> reason, setting the following config does not seem to work
>> >>>>
>> >>>>
>> >>>>
>> >>>> <property>
>> >>>>
>> >>>>         <name>dfs.permissions.enabled</name>
>> >>>>
>> >>>>         <value>false</value>
>> >>>>
>> >>>> </property>
>> >>>>
>> >>>>
>> >>>>
>> >>>> Any other configs that might be needed for this?
>> >>>>
>> >>>>
>> >>>>
>> >>>> Here is the stacktrace.
>> >>>>
>> >>>>
>> >>>>
>> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>> >>>> 8020, call
>> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> >>>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException:
>> >>>> Permission denied: user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> >>>> user=smehta, access=EXECUTE,
>> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> >>>>
>> >>>>         at
>> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> >>>>
>> >>>>         at java.security.AccessController.doPrivileged(Native Method)
>> >>>>
>> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>> >>>>
>> >>>>         at
>> >>>>
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> >>>>
>> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>
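[A quick note on the property-name question Leo raises above: dfs.permissions.enabled
is the 2.x name for this setting, and dfs.permissions is the older 0.20-era name; my
understanding (worth verifying against your exact version) is that 2.x still accepts
the old key through its configuration deprecation mapping. A belt-and-braces
hdfs-site.xml fragment that sets both, purely as an illustration:

<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>
<property>
        <name>dfs.permissions</name>
        <value>false</value>
</property>

That said, as the rest of this thread shows, the job-history setPermission call can
still fail even with permissions disabled.]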

Re: DFS Permissions on Hadoop 2.x

Posted by Azuryy Yu <az...@gmail.com>.
Yes, and I think this was introduced by Snapshot.

I've filed a JIRA here:
https://issues.apache.org/jira/browse/HDFS-4918
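
For what it's worth, a minimal sketch of the guard Harsh describes below — making the
ownership check a no-op when permissions are disabled — could look like the following.
The class and field names here are illustrative assumptions, not the actual HDFS-4918
patch; only Configuration and AccessControlException are real Hadoop types.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AccessControlException;

// Illustrative only: shows the dfs.permissions.enabled guard that the
// NameNode's setPermission path appears to be missing, per the discussion below.
class PermissionGuard {
  private final boolean permissionsEnabled;

  PermissionGuard(Configuration conf) {
    // Same key the original poster sets to false in hdfs-site.xml.
    this.permissionsEnabled = conf.getBoolean("dfs.permissions.enabled", true);
  }

  // Mirrors the checkOwner step in the stack traces: enforce ownership
  // only when permission checking is actually enabled.
  void checkOwner(String callerUser, String inodeOwner)
      throws AccessControlException {
    if (!permissionsEnabled) {
      return; // NOP when dfs.permissions.enabled=false
    }
    if (!callerUser.equals(inodeOwner)) {
      throw new AccessControlException("Permission denied: user=" + callerUser);
    }
  }
}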



On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:

> This is an HDFS bug. Like all other methods that check for permissions
> being enabled, the client call of setPermission should check it as
> well. It does not do that currently and I believe it should be a NOP
> in such a case. Please do file a JIRA (and reference the ID here to
> close the loop)!
>
> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
> <pr...@gmail.com> wrote:
> > Looks like the jobs fail only on the first attempt and pass thereafter.
> > Failure occurs while setting perms on "intermediate done directory".
> Here is
> > what I think is happening:
> >
> > 1. Intermediate done dir is (ideally) created as part of deployment (for
> eg,
> > /mapred/history/done_intermediate)
> >
> > 2. When a MR job is run, it creates a user dir within intermediate done
> dir
> > (/mapred/history/done_intermediate/username)
> >
> > 3. After this dir is created, the code tries to set permissions on this
> user
> > dir. In doing so, it checks for EXECUTE permissions on not just its
> parent
> > (/mapred/history/done_intermediate) but across all dirs to the top-most
> > level (/mapred). This fails as "/mapred" does not have execute
> permissions
> > for the "Other" users.
> >
> > 4. On successive job runs, since the user dir already exists
> > (/mapred/history/done_intermediate/username) it no longer tries to create
> > and set permissions again. And the job completes without any perm errors.
> >
> > This is the code within JobHistoryEventHandler that's doing it.
> >
> >  //Check for the existence of intermediate done dir.
> >     Path doneDirPath = null;
> >     try {
> >       doneDirPath = FileSystem.get(conf).makeQualified(new
> > Path(doneDirStr));
> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
> >       // This directory will be in a common location, or this may be a
> > cluster
> >       // meant for a single user. Creating based on the conf. Should
> ideally
> > be
> >       // created by the JobHistoryServer or as part of deployment.
> >       if (!doneDirFS.exists(doneDirPath)) {
> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
> >         LOG.info("Creating intermediate history logDir: ["
> >             + doneDirPath
> >             + "] + based on conf. Should ideally be created by the
> > JobHistoryServer: "
> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
> >           mkdir(
> >               doneDirFS,
> >               doneDirPath,
> >               new FsPermission(
> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
> >                 .toShort()));
> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
> >           // respects
> >         // sticky
> >       } else {
> >           String message = "Not creating intermediate history logDir: ["
> >                 + doneDirPath
> >                 + "] based on conf: "
> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
> >                 + ". Either set to true or pre-create this directory
> with" +
> >                 " appropriate permissions";
> >         LOG.error(message);
> >         throw new YarnException(message);
> >       }
> >       }
> >     } catch (IOException e) {
> >       LOG.error("Failed checking for the existance of history
> intermediate "
> > +
> >                       "done directory: [" + doneDirPath + "]");
> >       throw new YarnException(e);
> >     }
> >
> >
> > In any case, this does not appear to be the right behavior as it does not
> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
> like a
> > bug?
> >
> >
> > Thanks, Prashant
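
[The error message quoted in that handler ("Either set to true or pre-create this
directory with appropriate permissions") points at a stopgap until the permission
check itself is fixed: create the history directories ahead of time with traversable
ancestors. A rough, assumption-laden sketch using the same FileSystem API as the
handler above — the exact paths and modes would need to match your cluster's
job-history configuration, and it must be run as the HDFS superuser or the owner
of /mapred:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Sketch: pre-create the intermediate done dir so the AM never has to
// mkdir/setPermission it, and open up the ancestors it traverses.
// Paths and modes below are assumptions for illustration.
public class PreCreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    // Assumes core-site/hdfs-site on the classpath so fs.defaultFS points at the cluster.
    FileSystem fs = FileSystem.get(new Configuration());

    // Ancestors must carry the execute bit for job-submitting users,
    // since they already exist with restrictive modes (drwxrwx--- here).
    fs.setPermission(new Path("/mapred"), new FsPermission((short) 0755));
    fs.setPermission(new Path("/mapred/history"), new FsPermission((short) 0755));

    // World-writable intermediate dir so each user's subdir can be created.
    Path intermediate = new Path("/mapred/history/done_intermediate");
    fs.mkdirs(intermediate, new FsPermission((short) 0777));
  }
}]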
> >
> >
> >
> >
> >
> >
> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
> prash1784@gmail.com>
> > wrote:
> >>
> >> Hi Chris,
> >>
> >> This is while running a MR job. Please note the job is able to write
> files
> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
> some
> >> more, it looks like the failure occurs after writing to
> >> "/mapred/history/done_intermediate".
> >>
> >> Here is a more detailed stacktrace.
> >>
> >> INFO: Job end notification started for jobID : job_1371593763906_0001
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> >> closeEventWriter
> >> INFO: Unable to write out JobSummaryInfo to
> >>
> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe the property name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> Harsh J
>

> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe, the properties name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> Harsh J
>

Re: DFS Permissions on Hadoop 2.x

Posted by Azuryy Yu <az...@gmail.com>.
Yes, and I think this was caused by the Snapshot feature.

I've filed a JIRA here:
https://issues.apache.org/jira/browse/HDFS-4918



On Wed, Jun 19, 2013 at 11:40 AM, Harsh J <ha...@cloudera.com> wrote:

> This is a HDFS bug. Like all other methods that check for permissions
> being enabled, the client call of setPermission should check it as
> well. It does not do that currently and I believe it should be a NOP
> in such a case. Please do file a JIRA (and reference the ID here to
> close the loop)!
>
> On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
> <pr...@gmail.com> wrote:
> > Looks like the jobs fail only on the first attempt and pass thereafter.
> > Failure occurs while setting perms on "intermediate done directory".
> Here is
> > what I think is happening:
> >
> > 1. Intermediate done dir is (ideally) created as part of deployment (for
> eg,
> > /mapred/history/done_intermediate)
> >
> > 2. When a MR job is run, it creates a user dir within intermediate done
> dir
> > (/mapred/history/done_intermediate/username)
> >
> > 3. After this dir is created, the code tries to set permissions on this
> user
> > dir. In doing so, it checks for EXECUTE permissions on not just its
> parent
> > (/mapred/history/done_intermediate) but across all dirs to the top-most
> > level (/mapred). This fails as "/mapred" does not have execute
> permissions
> > for the "Other" users.
> >
> > 4. On successive job runs, since the user dir already exists
> > (/mapred/history/done_intermediate/username) it no longer tries to create
> > and set permissions again. And the job completes without any perm errors.
> >
> > This is the code within JobHistoryEventHandler that's doing it.
> >
> >  //Check for the existence of intermediate done dir.
> >     Path doneDirPath = null;
> >     try {
> >       doneDirPath = FileSystem.get(conf).makeQualified(new
> > Path(doneDirStr));
> >       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
> >       // This directory will be in a common location, or this may be a
> > cluster
> >       // meant for a single user. Creating based on the conf. Should
> ideally
> > be
> >       // created by the JobHistoryServer or as part of deployment.
> >       if (!doneDirFS.exists(doneDirPath)) {
> >       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
> >         LOG.info("Creating intermediate history logDir: ["
> >             + doneDirPath
> >             + "] + based on conf. Should ideally be created by the
> > JobHistoryServer: "
> >             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
> >           mkdir(
> >               doneDirFS,
> >               doneDirPath,
> >               new FsPermission(
> >             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
> >                 .toShort()));
> >           // TODO Temporary toShort till new FsPermission(FsPermissions)
> >           // respects
> >         // sticky
> >       } else {
> >           String message = "Not creating intermediate history logDir: ["
> >                 + doneDirPath
> >                 + "] based on conf: "
> >                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
> >                 + ". Either set to true or pre-create this directory
> with" +
> >                 " appropriate permissions";
> >         LOG.error(message);
> >         throw new YarnException(message);
> >       }
> >       }
> >     } catch (IOException e) {
> >       LOG.error("Failed checking for the existance of history
> intermediate "
> > +
> >                       "done directory: [" + doneDirPath + "]");
> >       throw new YarnException(e);
> >     }
> >
> >
> > In any case, this does not appear to be the right behavior as it does not
> > respect "dfs.permissions.enabled" (set to false) at any point. Sounds
> like a
> > bug?
> >
> >
> > Thanks, Prashant
> >
> >
> >
> >
> >
> >
> > On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <
> prash1784@gmail.com>
> > wrote:
> >>
> >> Hi Chris,
> >>
> >> This is while running a MR job. Please note the job is able to write
> files
> >> to "/mapred" directory and fails on EXECUTE permissions. On digging in
> some
> >> more, it looks like the failure occurs after writing to
> >> "/mapred/history/done_intermediate".
> >>
> >> Here is a more detailed stacktrace.
> >>
> >> INFO: Job end notification started for jobID : job_1371593763906_0001
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
> >> closeEventWriter
> >> INFO: Unable to write out JobSummaryInfo to
> >>
> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> >> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> >> org.apache.hadoop.yarn.YarnException:
> >> org.apache.hadoop.security.AccessControlException: Permission denied:
> >> user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> >>      at java.lang.Thread.run(Thread.java:662)
> >> Caused by: org.apache.hadoop.security.AccessControlException: Permission
> >> denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> >>      at
> >>
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> >>      at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> >>      at
> >>
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> >>      at
> >>
> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> >>      at
> >>
> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> >>      ... 2 more
> >> Caused by:
> >>
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> >> Permission denied: user=smehta, access=EXECUTE,
> >> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>      at
> >>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>      at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>      at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>      at java.security.AccessController.doPrivileged(Native Method)
> >>      at javax.security.auth.Subject.doAs(Subject.java:396)
> >>      at
> >>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>      at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>
> >>      at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> >>      at
> >>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> >>      at $Proxy9.setPermission(Unknown Source)
> >>      at
> >>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> >>      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>      at
> >>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >>      at
> >>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >>      at java.lang.reflect.Method.invoke(Method.java:597)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> >>      at
> >>
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> >>      at $Proxy10.setPermission(Unknown Source)
> >>      at
> org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> >>      ... 5 more
> >> Jun 18, 2013 3:20:20 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> >> INFO: Received completed container
> container_1371593763906_0001_01_000003
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> >> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
> >> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
> ContAlloc:2
> >> ContRel:0 HostLocal:0 RackLocal:1
> >> Jun 18, 2013 3:20:21 PM
> >>
> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
> >> transition
> >> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
> >> Container killed by the ApplicationMaster.
> >>
> >>
> >>
> >> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <
> cnauroth@hortonworks.com>
> >> wrote:
> >>>
> >>> Prashant, can you provide more details about what you're doing when you
> >>> see this error?  Are you submitting a MapReduce job, running an HDFS
> shell
> >>> command, or doing some other action?  It's possible that we're also
> seeing
> >>> an interaction with some other change in 2.x that triggers a
> setPermission
> >>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> >>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> >>> 0.20.2 never triggered a setPermission call for your usage, then you
> >>> wouldn't have seen the problem.
> >>>
> >>> I'd like to gather these details for submitting a new bug report to
> HDFS.
> >>> Thanks!
> >>>
> >>> Chris Nauroth
> >>> Hortonworks
> >>> http://hortonworks.com/
> >>>
> >>>
> >>>
> >>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
> >>>>
> >>>> I believe, the properties name should be “dfs.permissions”
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
> >>>> Sent: Tuesday, June 18, 2013 10:54 AM
> >>>> To: user@hadoop.apache.org
> >>>> Subject: DFS Permissions on Hadoop 2.x
> >>>>
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>>
> >>>>
> >>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> >>>> question around disabling dfs permissions on the latter version. For
> some
> >>>> reason, setting the following config does not seem to work
> >>>>
> >>>>
> >>>>
> >>>> <property>
> >>>>
> >>>>         <name>dfs.permissions.enabled</name>
> >>>>
> >>>>         <value>false</value>
> >>>>
> >>>> </property>
> >>>>
> >>>>
> >>>>
> >>>> Any other configs that might be needed for this?
> >>>>
> >>>>
> >>>>
> >>>> Here is the stacktrace.
> >>>>
> >>>>
> >>>>
> >>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
> >>>> 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> >>>> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException:
> >>>> Permission denied: user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>> org.apache.hadoop.security.AccessControlException: Permission denied:
> >>>> user=smehta, access=EXECUTE,
> >>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> >>>>
> >>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> >>>>
> >>>>         at
> org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> >>>>
> >>>>         at java.security.AccessController.doPrivileged(Native Method)
> >>>>
> >>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
> >>>>
> >>>>         at
> >>>>
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> >>>>
> >>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>
> >
>
>
>
> --
> Harsh J
>

Re: DFS Permissions on Hadoop 2.x

Posted by Harsh J <ha...@cloudera.com>.
This is an HDFS bug. Like all the other methods that check whether
permissions are enabled, the client call of setPermission should check
that flag as well. It does not do so currently, and I believe it should
be a NOP in such a case. Please do file a JIRA (and reference the ID
here to close the loop)!
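
To make the NOP idea concrete, here is a minimal caller-side sketch.
The class name and target path below are made up for illustration, and
the real fix would belong next to the NameNode's existing permission
checks, but the intent is the same: only issue the setPermission call
when dfs.permissions.enabled says permission checking is enforced.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetPermissionIfEnabled {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical directory, standing in for the intermediate done dir.
    Path dir = new Path("/mapred/history/done_intermediate/someuser");
    // Sketch of the suggested NOP behaviour: only issue the
    // setPermission RPC when permission checking is actually enforced.
    if (conf.getBoolean("dfs.permissions.enabled", true)) {
      fs.setPermission(dir, new FsPermission((short) 0770));
    }
    // With dfs.permissions.enabled=false, skipping the call avoids the
    // AccessControlException shown in the stack traces above.
  }
}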

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
>
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,
> /mapred/history/done_intermediate)
>
> 2. When a MR job is run, it creates a user dir within intermediate done dir
> (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permissions on not just its parent
> (/mapred/history/done_intermediate) but across all dirs to the top-most
> level (/mapred). This fails as "/mapred" does not have execute permissions
> for the "Other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username) it no longer tries to create
> and set permissions again. And the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>  //Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new
> Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a
> cluster
>       // meant for a single user. Creating based on the conf. Should ideally
> be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>         LOG.info("Creating intermediate history logDir: ["
>             + doneDirPath
>             + "] + based on conf. Should ideally be created by the
> JobHistoryServer: "
>             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                 .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects
>         // sticky
>       } else {
>           String message = "Not creating intermediate history logDir: ["
>                 + doneDirPath
>                 + "] based on conf: "
>                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>                 + ". Either set to true or pre-create this directory with" +
>                 " appropriate permissions";
>         LOG.error(message);
>         throw new YarnException(message);
>       }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
> +
>       		"done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>>
>>>> I believe the property name should be “dfs.permissions”
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>



-- 
Harsh J

Re: DFS Permissions on Hadoop 2.x

Posted by Harsh J <ha...@cloudera.com>.
This is an HDFS bug. Like all the other methods that check whether
permissions are enabled, the setPermission call should check it as
well. It does not currently do that, and I believe it should be a no-op
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!
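
For illustration, here is a minimal sketch of that behaviour (invented class
and method names, not the actual FSNamesystem code or a proposed patch): when
permission checking is off, the call simply returns instead of running the
owner check that throws AccessControlException.

class SetPermissionSketch {
  private final boolean permissionEnabled;   // value of dfs.permissions.enabled

  SetPermissionSketch(boolean permissionEnabled) {
    this.permissionEnabled = permissionEnabled;
  }

  void setPermission(String src, short mode) {
    if (!permissionEnabled) {
      // Checking is disabled cluster-wide: treat the call as a no-op instead
      // of failing the owner check with AccessControlException.
      return;
    }
    checkOwner(src);        // would throw for callers that do not own src
    applyMode(src, mode);   // update the inode's permission bits
  }

  private void checkOwner(String src) { /* ownership check elided */ }
  private void applyMode(String src, short mode) { /* inode update elided */ }
}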

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> The failure occurs while setting perms on the "intermediate done directory".
> Here is what I think is happening:
>
> 1. The intermediate done dir is (ideally) created as part of deployment (e.g.,
> /mapred/history/done_intermediate)
>
> 2. When an MR job is run, it creates a user dir within the intermediate done
> dir (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permission not just on its parent
> (/mapred/history/done_intermediate) but on every dir up to the top-most
> level (/mapred). This fails because "/mapred" does not grant execute
> permission to "other" users (see the short illustration after this list).
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username), it no longer tries to create
> it and set permissions again, and the job completes without any perm errors.
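
As a toy illustration of the traversal rule in step 3 (made-up class and data,
not the real FSPermissionChecker): every ancestor of the target path must grant
EXECUTE, so one restrictive directory anywhere on the path fails the whole call.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class TraverseSketch {
  // Directories that grant EXECUTE to the calling user in this toy model.
  static final Set<String> EXECUTABLE_FOR_CALLER = new HashSet<String>(
      Arrays.asList("/", "/mapred/history", "/mapred/history/done_intermediate"));

  static void checkTraverse(List<String> ancestors) {
    for (String dir : ancestors) {
      if (!EXECUTABLE_FOR_CALLER.contains(dir)) {
        throw new SecurityException("Permission denied: access=EXECUTE, inode=" + dir);
      }
    }
  }

  public static void main(String[] args) {
    // Fails on "/mapred" (drwxrwx--- grants no execute to "other" users),
    // mirroring the stack traces in this thread.
    checkTraverse(Arrays.asList("/", "/mapred", "/mapred/history",
        "/mapred/history/done_intermediate"));
  }
}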
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>     // Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a cluster
>       // meant for a single user. Creating based on the conf. Should ideally be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>           LOG.info("Creating intermediate history logDir: ["
>               + doneDirPath
>               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                       .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects sticky
>         } else {
>           String message = "Not creating intermediate history logDir: ["
>               + doneDirPath
>               + "] based on conf: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>               + ". Either set to true or pre-create this directory with"
>               + " appropriate permissions";
>           LOG.error(message);
>           throw new YarnException(message);
>         }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
>           + "done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
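
As the error message in that code suggests ("Either set to true or pre-create
this directory with appropriate permissions"), a workaround is to pre-create
the directories with traversal-friendly modes. A rough sketch with the
FileSystem API follows; the paths and modes are assumptions based on this
thread, it must run as a user allowed to chmod them, and the equivalent fs
shell commands work just as well.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PrecreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path intermediateDone = new Path("/mapred/history/done_intermediate");

    // Creates /mapred and /mapred/history as well if they are missing.
    fs.mkdirs(intermediateDone);

    // Give the ancestors the execute bit so the traversal check seen in the
    // stack traces above passes for other users.
    fs.setPermission(new Path("/mapred"), new FsPermission((short) 0755));
    fs.setPermission(new Path("/mapred/history"), new FsPermission((short) 0755));

    // Leave the intermediate done dir itself wide open so per-user subdirs
    // can be created on the first job run.
    fs.setPermission(intermediateDone, new FsPermission((short) 0777));

    fs.close();
  }
}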
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>>
>>>> I believe the property name should be “dfs.permissions”
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>



-- 
Harsh J

Re: DFS Permissions on Hadoop 2.x

Posted by Harsh J <ha...@cloudera.com>.
This is an HDFS bug. Like all the other methods that check whether
permissions are enabled, the setPermission call should check it as
well. It does not currently do that, and I believe it should be a no-op
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> The failure occurs while setting perms on the "intermediate done directory".
> Here is what I think is happening:
>
> 1. The intermediate done dir is (ideally) created as part of deployment (e.g.,
> /mapred/history/done_intermediate)
>
> 2. When an MR job is run, it creates a user dir within the intermediate done
> dir (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permission not just on its parent
> (/mapred/history/done_intermediate) but on every dir up to the top-most
> level (/mapred). This fails because "/mapred" does not grant execute
> permission to "other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username), it no longer tries to create
> it and set permissions again, and the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>     // Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a cluster
>       // meant for a single user. Creating based on the conf. Should ideally be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>         if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>           LOG.info("Creating intermediate history logDir: ["
>               + doneDirPath
>               + "] + based on conf. Should ideally be created by the JobHistoryServer: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>                   JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                       .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects sticky
>         } else {
>           String message = "Not creating intermediate history logDir: ["
>               + doneDirPath
>               + "] based on conf: "
>               + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>               + ". Either set to true or pre-create this directory with"
>               + " appropriate permissions";
>           LOG.error(message);
>           throw new YarnException(message);
>         }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
>           + "done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>>
>>>> I believe the property name should be “dfs.permissions”
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>



-- 
Harsh J

Re: DFS Permissions on Hadoop 2.x

Posted by Harsh J <ha...@cloudera.com>.
This is a HDFS bug. Like all other methods that check for permissions
being enabled, the client call of setPermission should check it as
well. It does not do that currently and I believe it should be a NOP
in such a case. Please do file a JIRA (and reference the ID here to
close the loop)!
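
To make that concrete, here is a minimal client-side sketch of the intended
NOP behavior. This is only an illustration, not the HDFS fix itself:
dfs.permissions.enabled is really a NameNode-side setting, so reading it on
the client as below only works when the client sees the same configuration.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionAwareSetter {
  // Hypothetical helper, not HDFS code: skip setPermission entirely when
  // permission checking is disabled -- the NOP behavior described above.
  public static void setPermissionIfEnabled(FileSystem fs, Configuration conf,
      Path path, FsPermission perm) throws IOException {
    boolean permissionsEnabled =
        conf.getBoolean("dfs.permissions.enabled", true);
    if (!permissionsEnabled) {
      return; // NOP instead of tripping the owner/EXECUTE checks on the NameNode
    }
    fs.setPermission(path, perm);
  }
}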

On Wed, Jun 19, 2013 at 6:18 AM, Prashant Kommireddi
<pr...@gmail.com> wrote:
> Looks like the jobs fail only on the first attempt and pass thereafter.
> Failure occurs while setting perms on "intermediate done directory". Here is
> what I think is happening:
>
> 1. Intermediate done dir is (ideally) created as part of deployment (for eg,
> /mapred/history/done_intermediate)
>
> 2. When a MR job is run, it creates a user dir within intermediate done dir
> (/mapred/history/done_intermediate/username)
>
> 3. After this dir is created, the code tries to set permissions on this user
> dir. In doing so, it checks for EXECUTE permissions on not just its parent
> (/mapred/history/done_intermediate) but across all dirs to the top-most
> level (/mapred). This fails as "/mapred" does not have execute permissions
> for the "Other" users.
>
> 4. On successive job runs, since the user dir already exists
> (/mapred/history/done_intermediate/username) it no longer tries to create
> and set permissions again. And the job completes without any perm errors.
>
> This is the code within JobHistoryEventHandler that's doing it.
>
>  //Check for the existence of intermediate done dir.
>     Path doneDirPath = null;
>     try {
>       doneDirPath = FileSystem.get(conf).makeQualified(new
> Path(doneDirStr));
>       doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
>       // This directory will be in a common location, or this may be a
> cluster
>       // meant for a single user. Creating based on the conf. Should ideally
> be
>       // created by the JobHistoryServer or as part of deployment.
>       if (!doneDirFS.exists(doneDirPath)) {
>       if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
>         LOG.info("Creating intermediate history logDir: ["
>             + doneDirPath
>             + "] + based on conf. Should ideally be created by the
> JobHistoryServer: "
>             + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
>           mkdir(
>               doneDirFS,
>               doneDirPath,
>               new FsPermission(
>             JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
>                 .toShort()));
>           // TODO Temporary toShort till new FsPermission(FsPermissions)
>           // respects
>         // sticky
>       } else {
>           String message = "Not creating intermediate history logDir: ["
>                 + doneDirPath
>                 + "] based on conf: "
>                 + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
>                 + ". Either set to true or pre-create this directory with" +
>                 " appropriate permissions";
>         LOG.error(message);
>         throw new YarnException(message);
>       }
>       }
>     } catch (IOException e) {
>       LOG.error("Failed checking for the existance of history intermediate "
> +
>       		"done directory: [" + doneDirPath + "]");
>       throw new YarnException(e);
>     }
>
>
> In any case, this does not appear to be the right behavior as it does not
> respect "dfs.permissions.enabled" (set to false) at any point. Sounds like a
> bug?
>
>
> Thanks, Prashant
>
>
>
>
>
>
> On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>
> wrote:
>>
>> Hi Chris,
>>
>> This is while running a MR job. Please note the job is able to write files
>> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
>> more, it looks like the failure occurs after writing to
>> "/mapred/history/done_intermediate".
>>
>> Here is a more detailed stacktrace.
>>
>> INFO: Job end notification started for jobID : job_1371593763906_0001
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
>> closeEventWriter
>> INFO: Unable to write out JobSummaryInfo to
>> [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
>> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
>> org.apache.hadoop.yarn.YarnException:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: org.apache.hadoop.security.AccessControlException: Permission
>> denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>> 	at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
>> 	at
>> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
>> 	at
>> org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
>> 	at
>> org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
>> 	... 2 more
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
>> Permission denied: user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>> 	at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>> 	at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:396)
>> 	at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
>> 	at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>> 	at $Proxy9.setPermission(Unknown Source)
>> 	at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
>> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> 	at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> 	at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> 	at java.lang.reflect.Method.invoke(Method.java:597)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>> 	at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>> 	at $Proxy10.setPermission(Unknown Source)
>> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
>> 	... 5 more
>> Jun 18, 2013 3:20:20 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
>> INFO: Received completed container container_1371593763906_0001_01_000003
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
>> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
>> AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2
>> ContRel:0 HostLocal:0 RackLocal:1
>> Jun 18, 2013 3:20:21 PM
>> org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
>> transition
>> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
>> Container killed by the ApplicationMaster.
>>
>>
>>
>> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>
>> wrote:
>>>
>>> Prashant, can you provide more details about what you're doing when you
>>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>>> command, or doing some other action?  It's possible that we're also seeing
>>> an interaction with some other change in 2.x that triggers a setPermission
>>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>>> 0.20.2 never triggered a setPermission call for your usage, then you
>>> wouldn't have seen the problem.
>>>
>>> I'd like to gather these details for submitting a new bug report to HDFS.
>>> Thanks!
>>>
>>> Chris Nauroth
>>> Hortonworks
>>> http://hortonworks.com/
>>>
>>>
>>>
>>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>>>
>>>> I believe the property name should be “dfs.permissions”
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>>> To: user@hadoop.apache.org
>>>> Subject: DFS Permissions on Hadoop 2.x
>>>>
>>>>
>>>>
>>>> Hello,
>>>>
>>>>
>>>>
>>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>>> question around disabling dfs permissions on the latter version. For some
>>>> reason, setting the following config does not seem to work
>>>>
>>>>
>>>>
>>>> <property>
>>>>
>>>>         <name>dfs.permissions.enabled</name>
>>>>
>>>>         <value>false</value>
>>>>
>>>> </property>
>>>>
>>>>
>>>>
>>>> Any other configs that might be needed for this?
>>>>
>>>>
>>>>
>>>> Here is the stacktrace.
>>>>
>>>>
>>>>
>>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>>>> 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException:
>>>> Permission denied: user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>>> user=smehta, access=EXECUTE,
>>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>>
>>>>         at
>>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>>
>>>>         at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>>
>>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>>
>>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>>
>>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>>
>>>>         at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>>
>>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>



-- 
Harsh J

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Looks like the jobs fail only on the first attempt and pass thereafter.
Failure occurs while setting perms on "intermediate done directory". Here
is what I think is happening:

1. Intermediate done dir is (ideally) created as part of deployment (for
eg, /mapred/history/done_intermediate)

2. When a MR job is run, it creates a user dir within intermediate done dir
(/mapred/history/done_intermediate/username)

3. After this dir is created, the code tries to set permissions on this
user dir. In doing so, it checks for EXECUTE permissions not just on its
parent (/mapred/history/done_intermediate) but on every dir up to the
top-most level (/mapred). This fails as "/mapred" does not have execute
permissions for "Other" users (see the quick probe sketched after this list).

4. On successive job runs, since the user dir already exists
(/mapred/history/done_intermediate/username) it no longer tries to create
and set permissions again. And the job completes without any perm errors.
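
A quick way to see which ancestor trips that traverse check is to walk the
path and print each directory's mode. Just a diagnostic sketch (the wrapper
class is made up for illustration; the target path is taken from the stack
trace):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class TraverseCheckProbe {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Target from the failing job; pass a different path as args[0] if needed.
    Path target = new Path(args.length > 0 ? args[0]
        : "/mapred/history/done_intermediate/smehta");
    // Walk from the target's parent up to "/", printing each ancestor's mode
    // and whether "other" users hold the EXECUTE bit the traverse check needs.
    for (Path dir = target.getParent(); dir != null; dir = dir.getParent()) {
      FsPermission perm = fs.getFileStatus(dir).getPermission();
      boolean otherExec = perm.getOtherAction().implies(FsAction.EXECUTE);
      System.out.println(dir + " " + perm + " otherExecute=" + otherExec);
    }
  }
}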

This is the code within JobHistoryEventHandler that's doing it.

    // Check for the existence of intermediate done dir.
    Path doneDirPath = null;
    try {
      doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
      doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
      // This directory will be in a common location, or this may be a cluster
      // meant for a single user. Creating based on the conf. Should ideally be
      // created by the JobHistoryServer or as part of deployment.
      if (!doneDirFS.exists(doneDirPath)) {
        if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
          LOG.info("Creating intermediate history logDir: ["
              + doneDirPath
              + "] + based on conf. Should ideally be created by the JobHistoryServer: "
              + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
          mkdir(
              doneDirFS,
              doneDirPath,
              new FsPermission(
                  JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
                      .toShort()));
          // TODO Temporary toShort till new FsPermission(FsPermissions)
          // respects sticky
        } else {
          String message = "Not creating intermediate history logDir: ["
              + doneDirPath
              + "] based on conf: "
              + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
              + ". Either set to true or pre-create this directory with"
              + " appropriate permissions";
          LOG.error(message);
          throw new YarnException(message);
        }
      }
    } catch (IOException e) {
      LOG.error("Failed checking for the existance of history intermediate "
          + "done directory: [" + doneDirPath + "]");
      throw new YarnException(e);
    }


In any case, this does not appear to be the right behavior as it does
not respect "dfs.permissions.enabled" (set to false) at any point.
Sounds like a bug?
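
Until that is fixed, the error message in the code above points at the
workaround: pre-create these directories with appropriate permissions before
the first job runs. Roughly something like this, run as the HDFS superuser
(paths and modes are my assumptions based on the stack trace; the
JobHistoryServer's own HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS constant is
meant to be a wide-open 1777-style mode with the sticky bit):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PreCreateHistoryDirs {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    Path mapred = new Path("/mapred");
    Path history = new Path("/mapred/history");
    Path doneIntermediate = new Path("/mapred/history/done_intermediate");

    // mkdirs creates any missing ancestors as well.
    fs.mkdirs(doneIntermediate);

    // Give the ancestors world EXECUTE so the traverse check passes for any
    // user submitting jobs...
    fs.setPermission(mapred, new FsPermission((short) 0755));
    fs.setPermission(history, new FsPermission((short) 0755));

    // ...and open up the intermediate done dir itself so every job user can
    // write into it (0777 here; add the sticky bit with a chmod if desired).
    fs.setPermission(doneIntermediate, new FsPermission((short) 0777));
  }
}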


Thanks, Prashant






On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>wrote:

> Hi Chris,
>
> This is while running a MR job. Please note the job is able to write files
> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
> more, it looks like the failure occurs after writing to
> "/mapred/history/done_intermediate".
>
> Here is a more detailed stacktrace.
>
> INFO: Job end notification started for jobID : job_1371593763906_0001
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
> INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	... 2 more
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> INFO: Received completed container container_1371593763906_0001_01_000003
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
>
>
>
> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>wrote:
>
>> Prashant, can you provide more details about what you're doing when you
>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>> command, or doing some other action?  It's possible that we're also seeing
>> an interaction with some other change in 2.x that triggers a setPermission
>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>> 0.20.2 never triggered a setPermission call for your usage, then you
>> wouldn't have seen the problem.
>>
>> I'd like to gather these details for submitting a new bug report to HDFS.
>>  Thanks!
>>
>> Chris Nauroth
>> Hortonworks
>> http://hortonworks.com/
>>
>>
>>
>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>
>>>  I believe the property name should be “dfs.permissions”
>>>
>>>
>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> To: user@hadoop.apache.org
>>> Subject: DFS Permissions on Hadoop 2.x
>>>
>>> Hello,
>>>
>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> question around disabling dfs permissions on the latter version. For some
>>> reason, setting the following config does not seem to work
>>>
>>> <property>
>>>
>>>         <name>dfs.permissions.enabled</name>
>>>
>>>         <value>false</value>
>>>
>>> </property>
>>>
>>> Any other configs that might be needed for this?
>>>
>>> Here is the stacktrace.
>>>
>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission
>>> from 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>         at
>>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>         at
>>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>         at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>
>>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Looks like the jobs fail only on the first attempt and pass thereafter.
Failure occurs while setting perms on "intermediate done directory". Here
is what I think is happening:

1. Intermediate done dir is (ideally) created as part of deployment (for
eg, /mapred/history/done_intermediate)

2. When a MR job is run, it creates a user dir within intermediate done dir
(/mapred/history/done_intermediate/username)

3. After this dir is created, the code tries to set permissions on this
user dir. In doing so, it checks for EXECUTE permissions on not just its
parent (/mapred/history/done_intermediate) but across all dirs to the
top-most level (/mapred). This fails as "/mapred" does not have execute
permissions for the "Other" users.

4. On successive job runs, since the user dir already exists
(/mapred/history/done_intermediate/username) it no longer tries to create
and set permissions again. And the job completes without any perm errors.

This is the code within JobHistoryEventHandler that's doing it.

 //Check for the existence of intermediate done dir.
    Path doneDirPath = null;
    try {
      doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
      doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
      // This directory will be in a common location, or this may be a cluster
      // meant for a single user. Creating based on the conf. Should ideally be
      // created by the JobHistoryServer or as part of deployment.
      if (!doneDirFS.exists(doneDirPath)) {
      if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
        LOG.info("Creating intermediate history logDir: ["
            + doneDirPath
            + "] + based on conf. Should ideally be created by the
JobHistoryServer: "
            + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
          mkdir(
              doneDirFS,
              doneDirPath,
              new FsPermission(
            JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
                .toShort()));
          // TODO Temporary toShort till new FsPermission(FsPermissions)
          // respects
        // sticky
      } else {
          String message = "Not creating intermediate history logDir: ["
                + doneDirPath
                + "] based on conf: "
                + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
                + ". Either set to true or pre-create this directory with" +
                " appropriate permissions";
        LOG.error(message);
        throw new YarnException(message);
      }
      }
    } catch (IOException e) {
      LOG.error("Failed checking for the existance of history intermediate " +
      		"done directory: [" + doneDirPath + "]");
      throw new YarnException(e);
    }


In any case, this does not appear to be the right behavior as it does
not respect "dfs.permissions.enabled" (set to false) at any point.
Sounds like a bug?


Thanks, Prashant






On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>wrote:

> Hi Chris,
>
> This is while running a MR job. Please note the job is able to write files
> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
> more, it looks like the failure occurs after writing to
> "/mapred/history/done_intermediate".
>
> Here is a more detailed stacktrace.
>
> INFO: Job end notification started for jobID : job_1371593763906_0001
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
> INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	... 2 more
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> INFO: Received completed container container_1371593763906_0001_01_000003
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
>
>
>
> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>wrote:
>
>> Prashant, can you provide more details about what you're doing when you
>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>> command, or doing some other action?  It's possible that we're also seeing
>> an interaction with some other change in 2.x that triggers a setPermission
>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>> 0.20.2 never triggered a setPermission call for your usage, then you
>> wouldn't have seen the problem.
>>
>> I'd like to gather these details for submitting a new bug report to HDFS.
>>  Thanks!
>>
>> Chris Nauroth
>> Hortonworks
>> http://hortonworks.com/
>>
>>
>>
>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>
>>>  I believe, the properties name should be “dfs.permissions”
>>>
>>> From: Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> Sent: Tuesday, June 18, 2013 10:54 AM
>>> To: user@hadoop.apache.org
>>> Subject: DFS Permissions on Hadoop 2.x
>>>
>>> Hello,
>>>
>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> question around disabling dfs permissions on the latter version. For some
>>> reason, setting the following config does not seem to work
>>>
>>> <property>
>>>         <name>dfs.permissions.enabled</name>
>>>         <value>false</value>
>>> </property>
>>>
>>> Any other configs that might be needed for this?
>>>
>>> Here is the stacktrace.
>>>
>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission
>>> from 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Looks like the jobs fail only on the first attempt and pass thereafter.
Failure occurs while setting perms on "intermediate done directory". Here
is what I think is happening:

1. Intermediate done dir is (ideally) created as part of deployment (for
eg, /mapred/history/done_intermediate)

2. When a MR job is run, it creates a user dir within intermediate done dir
(/mapred/history/done_intermediate/username)

3. After this dir is created, the code tries to set permissions on this
user dir. In doing so, it checks for EXECUTE permissions on not just its
parent (/mapred/history/done_intermediate) but across all dirs to the
top-most level (/mapred). This fails as "/mapred" does not have execute
permissions for the "Other" users.

4. On successive job runs, since the user dir already exists
(/mapred/history/done_intermediate/username) it no longer tries to create
and set permissions again. And the job completes without any perm errors.

This is the code within JobHistoryEventHandler that's doing it.

    //Check for the existence of intermediate done dir.
    Path doneDirPath = null;
    try {
      doneDirPath = FileSystem.get(conf).makeQualified(new Path(doneDirStr));
      doneDirFS = FileSystem.get(doneDirPath.toUri(), conf);
      // This directory will be in a common location, or this may be a cluster
      // meant for a single user. Creating based on the conf. Should ideally be
      // created by the JobHistoryServer or as part of deployment.
      if (!doneDirFS.exists(doneDirPath)) {
        if (JobHistoryUtils.shouldCreateNonUserDirectory(conf)) {
          LOG.info("Creating intermediate history logDir: ["
              + doneDirPath
              + "] + based on conf. Should ideally be created by the JobHistoryServer: "
              + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR);
          mkdir(
              doneDirFS,
              doneDirPath,
              new FsPermission(
                  JobHistoryUtils.HISTORY_INTERMEDIATE_DONE_DIR_PERMISSIONS
                      .toShort()));
          // TODO Temporary toShort till new FsPermission(FsPermissions)
          // respects sticky
        } else {
          String message = "Not creating intermediate history logDir: ["
              + doneDirPath
              + "] based on conf: "
              + MRJobConfig.MR_AM_CREATE_JH_INTERMEDIATE_BASE_DIR
              + ". Either set to true or pre-create this directory with" +
              " appropriate permissions";
          LOG.error(message);
          throw new YarnException(message);
        }
      }
    } catch (IOException e) {
      LOG.error("Failed checking for the existance of history intermediate " +
          "done directory: [" + doneDirPath + "]");
      throw new YarnException(e);
    }


In any case, this does not appear to be the right behavior as it does
not respect "dfs.permissions.enabled" (set to false) at any point.
Sounds like a bug?


Thanks, Prashant






On Tue, Jun 18, 2013 at 3:24 PM, Prashant Kommireddi <pr...@gmail.com>wrote:

> Hi Chris,
>
> This is while running a MR job. Please note the job is able to write files
> to "/mapred" directory and fails on EXECUTE permissions. On digging in some
> more, it looks like the failure occurs after writing to
> "/mapred/history/done_intermediate".
>
> Here is a more detailed stacktrace.
>
> INFO: Job end notification started for jobID : job_1371593763906_0001
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler closeEventWriter
> INFO: Unable to write out JobSummaryInfo to [hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
> org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
> SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
> org.apache.hadoop.yarn.YarnException: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> 	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
> 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
> 	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
> 	... 2 more
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
> 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> 	at $Proxy9.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:597)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
> 	at $Proxy10.setPermission(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
> 	... 5 more
> Jun 18, 2013 3:20:20 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator getResources
> INFO: Received completed container container_1371593763906_0001_01_000003
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats log
> INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
> Jun 18, 2013 3:20:21 PM org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater transition
> INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0: Container killed by the ApplicationMaster.
>
>
>
> On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>wrote:
>
>> Prashant, can you provide more details about what you're doing when you
>> see this error?  Are you submitting a MapReduce job, running an HDFS shell
>> command, or doing some other action?  It's possible that we're also seeing
>> an interaction with some other change in 2.x that triggers a setPermission
>> call that wasn't there in 0.20.2.  I think the problem with the HDFS
>> setPermission API is present in both 0.20.2 and 2.x, but if the code in
>> 0.20.2 never triggered a setPermission call for your usage, then you
>> wouldn't have seen the problem.
>>
>> I'd like to gather these details for submitting a new bug report to HDFS.
>>  Thanks!
>>
>> Chris Nauroth
>> Hortonworks
>> http://hortonworks.com/
>>
>>
>>
>> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>>
>>>  I believe, the properties name should be “dfs.permissions”
>>>
>>> *From:* Prashant Kommireddi [mailto:prash1784@gmail.com]
>>> *Sent:* Tuesday, June 18, 2013 10:54 AM
>>> *To:* user@hadoop.apache.org
>>> *Subject:* DFS Permissions on Hadoop 2.x
>>>
>>> Hello,
>>>
>>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>>> question around disabling dfs permissions on the latter version. For some
>>> reason, setting the following config does not seem to work
>>>
>>> <property>
>>>         <name>dfs.permissions.enabled</name>
>>>         <value>false</value>
>>> </property>
>>>
>>> Any other configs that might be needed for this?
>>>
>>> Here is the stacktrace.
>>>
>>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on
>>> 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission
>>> from 10.0.53.131:24059: error:
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>
>>> org.apache.hadoop.security.AccessControlException: Permission denied:
>>> user=smehta, access=EXECUTE,
>>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>>
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Hi Chris,

This happens while running an MR job. Please note the job is able to write files
under the "/mapred" directory but then fails an EXECUTE (traverse) permission
check. Digging in some more, it looks like the failure occurs after writing to
"/mapred/history/done_intermediate".

Here is a more detailed stacktrace.

INFO: Job end notification started for jobID : job_1371593763906_0001
Jun 18, 2013 3:20:20 PM
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
closeEventWriter
INFO: Unable to write out JobSummaryInfo to
[hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM
org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
org.apache.hadoop.yarn.YarnException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)

Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Hi Chris,

This is while running a MR job. Please note the job is able to write files
to the "/mapred" directory but then fails on the EXECUTE permission check. Digging
in some more, it looks like the failure occurs after writing to
"/mapred/history/done_intermediate".

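One possible workaround is to pre-create the history tree with wide-open permissions so
the traverse and the chmod of the summary file succeed. A rough sketch (assuming it is
run as the owner of "/mapred" or the HDFS superuser, and that opening these directories
up is acceptable; the class name and the 1777 mode are only illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    // Pre-creates the job-history directories with wide-open, sticky permissions so
    // per-user summary files can be written and chmod'ed without tripping the
    // EXECUTE/ownership checks on "/mapred".
    public class OpenUpHistoryDirs {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        String[] dirs = {"/mapred", "/mapred/history", "/mapred/history/done_intermediate"};
        for (String dir : dirs) {
          Path p = new Path(dir);
          fs.mkdirs(p);                                         // no-op if it already exists
          fs.setPermission(p, new FsPermission((short) 01777)); // drwxrwxrwt
        }
      }
    }
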
Here is a more detailed stacktrace.

INFO: Job end notification started for jobID : job_1371593763906_0001
Jun 18, 2013 3:20:20 PM
org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
closeEventWriter
INFO: Unable to write out JobSummaryInfo to
[hdfs://test-local-EMPTYSPEC/mapred/history/done_intermediate/smehta/job_1371593763906_0001.summary_tmp]
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM
org.apache.hadoop.yarn.YarnUncaughtExceptionHandler uncaughtException
SEVERE: Thread Thread[Thread-51,5,main] threw an Exception.
org.apache.hadoop.yarn.YarnException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:523)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler$1.run(JobHistoryEventHandler.java:273)
	at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.security.AccessControlException:
Permission denied: user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1897)
	at org.apache.hadoop.hdfs.DistributedFileSystem.setPermission(DistributedFileSystem.java:823)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.closeEventWriter(JobHistoryEventHandler.java:666)
	at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:521)
	... 2 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=smehta, access=EXECUTE,
inode="/mapred":pkommireddi:supergroup:drwxrwx---
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)

	at org.apache.hadoop.ipc.Client.call(Client.java:1225)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
	at $Proxy9.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setPermission(ClientNamenodeProtocolTranslatorPB.java:241)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
	at $Proxy10.setPermission(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.setPermission(DFSClient.java:1895)
	... 5 more
Jun 18, 2013 3:20:20 PM
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats
log
INFO: Before Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
AssignedMaps:0 AssignedReds:1 CompletedMaps:1 CompletedReds:1
ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator
getResources
INFO: Received completed container container_1371593763906_0001_01_000003
Jun 18, 2013 3:20:21 PM
org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator$ScheduleStats
log
INFO: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0
AssignedMaps:0 AssignedReds:0 CompletedMaps:1 CompletedReds:1
ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:1
Jun 18, 2013 3:20:21 PM
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl$DiagnosticInformationUpdater
transition
INFO: Diagnostics report from attempt_1371593763906_0001_r_000000_0:
Container killed by the ApplicationMaster.



On Tue, Jun 18, 2013 at 1:28 PM, Chris Nauroth <cn...@hortonworks.com>wrote:

> Prashant, can you provide more details about what you're doing when you
> see this error?  Are you submitting a MapReduce job, running an HDFS shell
> command, or doing some other action?  It's possible that we're also seeing
> an interaction with some other change in 2.x that triggers a setPermission
> call that wasn't there in 0.20.2.  I think the problem with the HDFS
> setPermission API is present in both 0.20.2 and 2.x, but if the code in
> 0.20.2 never triggered a setPermission call for your usage, then you
> wouldn't have seen the problem.
>
> I'd like to gather these details for submitting a new bug report to HDFS.
>  Thanks!
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:
>
>> I believe, the properties name should be “dfs.permissions”
>>
>> *From:* Prashant Kommireddi [mailto:prash1784@gmail.com]
>> *Sent:* Tuesday, June 18, 2013 10:54 AM
>> *To:* user@hadoop.apache.org
>> *Subject:* DFS Permissions on Hadoop 2.x
>>
>> Hello,
>>
>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> question around disabling dfs permissions on the latter version. For some
>> reason, setting the following config does not seem to work
>>
>> <property>
>>         <name>dfs.permissions.enabled</name>
>>         <value>false</value>
>> </property>
>>
>> Any other configs that might be needed for this?
>>
>> Here is the stacktrace.
>>
>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020,
>> call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Chris Nauroth <cn...@hortonworks.com>.
Prashant, can you provide more details about what you're doing when you see
this error?  Are you submitting a MapReduce job, running an HDFS shell
command, or doing some other action?  It's possible that we're also seeing
an interaction with some other change in 2.x that triggers a setPermission
call that wasn't there in 0.20.2.  I think the problem with the HDFS
setPermission API is present in both 0.20.2 and 2.x, but if the code in
0.20.2 never triggered a setPermission call for your usage, then you
wouldn't have seen the problem.
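
The stack trace itself hints at why disabling permissions doesn't help for this
particular call: setPermission always verifies ownership, and that ownership check runs
the full FSPermissionChecker, which also enforces EXECUTE on every ancestor directory
while traversing the path. Roughly, as paraphrased pseudocode (not the exact 2.x source;
the method names are taken from the stack trace):

    // FSNamesystem (paraphrased sketch, not verbatim source)
    private void setPermissionInt(String src, FsPermission permission) throws IOException {
      // Note: no "if (isPermissionEnabled)" guard here, unlike most other operations.
      checkOwner(src);   // -> FSPermissionChecker.checkPermission(...)
                         //    -> checkTraverse(...)  EXECUTE on "/mapred" fails for smehta
                         //    -> check(...)          throws AccessControlException
      dir.setPermission(src, permission);
      // ... log the edit and return ...
    }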

I'd like to gather these details for submitting a new bug report to HDFS.
 Thanks!

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 18, 2013 at 12:14 PM, Leo Leung <ll...@ddn.com> wrote:

>  I believe, the properties name should be “dfs.permissions”
>
> *From:* Prashant Kommireddi [mailto:prash1784@gmail.com]
> *Sent:* Tuesday, June 18, 2013 10:54 AM
> *To:* user@hadoop.apache.org
> *Subject:* DFS Permissions on Hadoop 2.x
>
> Hello,
>
> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> question around disabling dfs permissions on the latter version. For some
> reason, setting the following config does not seem to work
>
> <property>
>         <name>dfs.permissions.enabled</name>
>         <value>false</value>
> </property>
>
> Any other configs that might be needed for this?
>
> Here is the stacktrace.
>
> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020,
> call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=smehta, access=EXECUTE,
> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=smehta, access=EXECUTE,
> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>

RE: DFS Permissions on Hadoop 2.x

Posted by Leo Leung <ll...@ddn.com>.
I believe the property name should be "dfs.permissions"

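For reference, the two spellings side by side: the 0.20.x/1.x key and the 2.x key (as far as I know, 2.x still honors the old key as a deprecated alias of the new one).

<!-- 0.20.x / 1.x -->
<property>
        <name>dfs.permissions</name>
        <value>false</value>
</property>

<!-- 2.x -->
<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>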

From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Tuesday, June 18, 2013 10:54 AM
To: user@hadoop.apache.org
Subject: DFS Permissions on Hadoop 2.x

Hello,

We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a question around disabling dfs permissions on the latter version. For some reason, setting the following config does not seem to work

<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>

Any other configs that might be needed for this?

Here is the stacktrace.

2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)





Re: DFS Permissions on Hadoop 2.x

Posted by Prashant Kommireddi <pr...@gmail.com>.
Thanks for the reply, Chris.

Yes, I am certain this worked with 0.20.2. It used a slightly different
property, and I have verified that setting it to false actually disables
permission checking.

<property>
    <name>dfs.permissions</name>
    <value>false</value>
    <final>true</final>
</property>
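
On 2.x, assuming the old dfs.permissions name is still read as a
deprecated alias of dfs.permissions.enabled (I have not confirmed that
against the 2.x code, so treat this as a sketch), a minimal hdfs-site.xml
that disables checking under both spellings would be:

<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

Either way the NameNode would typically need a restart to pick up the
change, and per Chris's note it would still not relax the ownership check
inside setPermission.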



On Tue, Jun 18, 2013 at 11:58 AM, Chris Nauroth <cn...@hortonworks.com> wrote:

> Hello Prashant,
>
> Reviewing the code, it appears that the setPermission operation
> specifically is coded to always check ownership, even if
> dfs.permissions.enabled is set to false.  From what I can tell, this
> behavior is the same in 0.20 too though.  Are you certain that you weren't
> seeing this stack trace in your 0.20.2 deployment?
>
> Chris Nauroth
> Hortonworks
> http://hortonworks.com/
>
>
>
> On Tue, Jun 18, 2013 at 10:54 AM, Prashant Kommireddi <prash1784@gmail.com
> > wrote:
>
>> Hello,
>>
>> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
>> question around disabling dfs permissions on the latter version. For some
>> reason, setting the following config does not seem to work
>>
>> <property>
>>         <name>dfs.permissions.enabled</name>
>>         <value>false</value>
>> </property>
>>
>> Any other configs that might be needed for this?
>>
>> Here is the stacktrace.
>>
>> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020,
>> call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
>> 10.0.53.131:24059: error:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=smehta, access=EXECUTE,
>> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>>         at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>>         at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>>         at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>>         at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>>
>>
>>
>>
>>
>

Re: DFS Permissions on Hadoop 2.x

Posted by Chris Nauroth <cn...@hortonworks.com>.
Hello Prashant,

Reviewing the code, it appears that the setPermission operation
specifically is coded to always check ownership, even if
dfs.permissions.enabled is set to false.  From what I can tell, this
behavior is the same in 0.20 as well, though.  Are you certain that you
weren't seeing this stack trace in your 0.20.2 deployment?
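
For illustration, a minimal client-side sketch of the call that ends up on
this code path (the path and mode below are hypothetical, chosen to mirror
the stack trace; this is only a sketch of the RPC, not a statement about
what the behavior should be):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class SetPermissionExample {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS and hdfs-site.xml from the client classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical directory from the stack trace; the caller is not the
    // owner of /mapred.
    Path dir = new Path("/mapred");

    // This RPC lands in FSNamesystem.setPermission -> checkOwner on the
    // NameNode, so a non-owner, non-superuser caller can still get an
    // AccessControlException even with dfs.permissions.enabled=false.
    fs.setPermission(dir, new FsPermission((short) 0755));
  }
}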

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Tue, Jun 18, 2013 at 10:54 AM, Prashant Kommireddi
<pr...@gmail.com> wrote:

> Hello,
>
> We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a
> question around disabling dfs permissions on the latter version. For some
> reason, setting the following config does not seem to work
>
> <property>
>         <name>dfs.permissions.enabled</name>
>         <value>false</value>
> </property>
>
> Any other configs that might be needed for this?
>
> Here is the stacktrace.
>
> 2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020,
> call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from
> 10.0.53.131:24059: error:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=smehta, access=EXECUTE,
> inode="/mapred":pkommireddi:supergroup:drwxrwx---
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=smehta, access=EXECUTE,
> inode="/mapred":pkommireddi:supergroup:drwxrwx---
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
>         at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
>         at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
>         at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
>
>
>
>
>

RE: DFS Permissions on Hadoop 2.x

Posted by Leo Leung <ll...@ddn.com>.
I believe the property name should be "dfs.permissions".
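
A quick way to check which key and value the configuration actually
resolves on a 2.x client (assuming the getconf tool that ships with 2.x)
is something like:

hdfs getconf -confKey dfs.permissions.enabled
hdfs getconf -confKey dfs.permissions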


From: Prashant Kommireddi [mailto:prash1784@gmail.com]
Sent: Tuesday, June 18, 2013 10:54 AM
To: user@hadoop.apache.org
Subject: DFS Permissions on Hadoop 2.x

Hello,

We just upgraded our cluster from 0.20.2 to 2.x (with HA) and had a question around disabling dfs permissions on the latter version. For some reason, setting the following config does not seem to work

<property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
</property>

Any other configs that might be needed for this?

Here is the stacktrace.

2013-06-17 17:35:45,429 INFO  ipc.Server - IPC Server handler 62 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.setPermission from 10.0.53.131:24059<http://10.0.53.131:24059>: error: org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
org.apache.hadoop.security.AccessControlException: Permission denied: user=smehta, access=EXECUTE, inode="/mapred":pkommireddi:supergroup:drwxrwx---
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:161)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:128)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4684)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOwner(FSNamesystem.java:4640)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermissionInt(FSNamesystem.java:1134)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1111)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:454)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:253)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44074)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)




