Posted to mapreduce-issues@hadoop.apache.org by "Marcos Sousa (JIRA)" <ji...@apache.org> on 2013/03/28 13:57:15 UTC

[jira] [Updated] (MAPREDUCE-5112) Hadoop Mapreduce fails when permission management is enabled and scheduler is FairScheduler

     [ https://issues.apache.org/jira/browse/MAPREDUCE-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcos Sousa updated MAPREDUCE-5112:
------------------------------------

    Description: 
I enabled permission management in my Hadoop cluster, but I'm running into a problem when submitting jobs with Pig. This is the scenario:

1 - I have a hadoop/hadoop user.

2 - I have a myuserapp/myuserapp user that runs the Pig script.

3 - We set up the path /myapp to be owned by myuserapp.

4 - We set pig.temp.dir to /myapp/pig/tmp.

But when Pig tries to run the jobs, we get the following error:

job_201303221059_0009    all_actions,filtered,raw_data    DISTINCT    Message: Job failed! Error - Job initialization failed: org.apache.hadoop.security.AccessControlException: org.apache.hadoop.security.AccessControlException: Permission denied: user=realtime, access=EXECUTE, inode="system":hadoop:supergroup:rwx------
The Hadoop JobTracker requires this permission to start up its server.
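
A likely first step (a sketch only; the real system-directory path must be read from your own mapred.system.dir setting, the path below is a placeholder) is to inspect and relax the permissions on the JobTracker's system directory in HDFS so that other users can traverse it:

hadoop fs -ls /tmp/hadoop/mapred
hadoop fs -chmod 711 /tmp/hadoop/mapred/system

With mode 711, non-owners gain the EXECUTE bit the error complains about while the directory contents remain unlisted to them.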

My hadoop-policy.xml looks like:

<property>
  <name>security.client.datanode.protocol.acl</name>
  <value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
  <name>security.inter.tracker.protocol.acl</name>
  <value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
<property>
  <name>security.job.submission.protocol.acl</name>
  <value>hadoop,myuserapp supergroup,myuserapp</value>
</property>
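
For context, each *.acl value in hadoop-policy.xml is a comma-separated list of users, then a single space, then a comma-separated list of groups (a value of * permits everyone). So the values above grant access to the users hadoop and myuserapp plus members of the groups supergroup and myuserapp. A minimal sketch (names are illustrative, not from this report):

<property>
  <name>security.job.submission.protocol.acl</name>
  <!-- users "hadoop" and "realtime", plus any member of group "supergroup" -->
  <value>hadoop,realtime supergroup</value>
</property>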
My hdfs-site.xml:

<property>
  <name>dfs.permissions</name>
  <value>true</value>
</property>

<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>755</value>
</property>

<property>
  <name>dfs.web.ugi</name>
  <value>hadoop,supergroup</value>
</property>
My core-site.xml:

...
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
</property>
...
And finally, my mapred-site.xml:

...
<property>
  <name>mapred.local.dir</name>
  <value>/tmp/mapred</value>
</property>

<property>
  <name>mapreduce.jobtracker.jobhistory.location</name>
  <value>/opt/logs/hadoop/history</value>
</property>
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
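
Since the FairScheduler is in play, it may help to show how it is typically wired up in Hadoop 1.x. A minimal sketch (the file path and pool name are illustrative, not taken from this report) pointing the scheduler at an allocation file in mapred-site.xml:

<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/etc/hadoop/fair-scheduler.xml</value>
</property>

And a matching fair-scheduler.xml defining one pool:

<?xml version="1.0"?>
<allocations>
  <pool name="myuserapp">
    <minMaps>5</minMaps>
    <minReduces>5</minReduces>
  </pool>
</allocations>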
    
> Hadoop Mapreduce fails when permission management is enabled and scheduler is FairScheduler
> -------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5112
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5112
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: contrib/fair-share
>    Affects Versions: 1.0.4
>         Environment: CentOS
>            Reporter: Marcos Sousa
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira