Posted to common-user@hadoop.apache.org by Joman Chu <jo...@andrew.cmu.edu> on 2008/07/09 07:36:35 UTC

File permissions issue

Hello,

On a cluster where I run Hadoop, it seems that the temp directory created by Hadoop (in our case, /tmp/hadoop/) ends up owned by the first user who runs a job after the Hadoop services are started, with its permissions set to "drwxrwxr-x". This causes file permission problems when other users try to run jobs.

For example, user1:user1 starts Hadoop using ./start-all.sh. Then user2:user2 runs a Hadoop job. Temp directories (/tmp/hadoop/) are now created on all nodes in the cluster, owned by user2 with permissions "drwxrwxr-x". Now user3:user3 tries to run a job and gets the following exception:

java.io.IOException: Permission denied
     at java.io.UnixFileSystem.createFileExclusively(Native Method)
     at java.io.File.checkAndCreate(File.java:1704)
     at java.io.File.createTempFile(File.java:1793)
     at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)

Why does this happen, and how can we fix it? Our current stopgap measure is to run jobs as the user that started Hadoop. That is, in our example, after user1 starts Hadoop, user1 runs the jobs. Everything seems to work fine then.
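
To make the failure concrete, here is roughly what the directory looks like on a node once user2's job has run (sketched from memory, so the exact sizes and timestamps will differ):

     $ ls -ld /tmp/hadoop
     drwxrwxr-x 4 user2 user2 4096 Jul  9 01:00 /tmp/hadoop

     # user3 falls under "other" (r-x: no write bit), so creating
     # anything in the directory fails before the job even starts:
     $ su - user3 -c 'touch /tmp/hadoop/probe'
     touch: cannot touch `/tmp/hadoop/probe': Permission denied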

Thanks,
Joman Chu


Re: File permissions issue

Posted by Joman Chu <jo...@andrew.cmu.edu>.
So we can fix this issue by putting all three users in a common group? We did that after we encountered the issue, but we still got the errors. Note that we had not restarted Hadoop, so the permissions were still as described earlier. Should we have restarted Hadoop after changing the group?
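
In case it matters, our guess is that the group change alone isn't enough: the existing /tmp/hadoop trees keep their old group, and new group membership only applies to fresh login sessions. Would something along these lines, run on each node as root, be the right fix? (The group name hadoopusers is just our placeholder.)

     # Hand the existing temp tree to the shared group and make it
     # group-writable; the setgid bit on the directory makes new
     # subdirectories inherit the group automatically.
     chgrp -R hadoopusers /tmp/hadoop
     chmod -R g+rw /tmp/hadoop
     chmod g+s /tmp/hadoop

user3 would also have to log out and back in (or run newgrp hadoopusers) to pick up the new membership.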

On Wed, July 9, 2008 2:05 am, heyongqiang said:
> Because in your permission set the "other" class cannot write to the
> temp directory, and user3 is not in the same group as user2.
> 
> heyongqiang 2008-07-09
> 
> From: Joman Chu  Sent: 2008-07-09 13:06:51  To: core-user@hadoop.apache.org
> Cc:  Subject: File permissions issue


-- 
Joman Chu
AIM: ARcanUSNUMquam
IRC: irc.liquid-silver.net


Re: File permissions issue

Posted by heyongqiang <he...@software.ict.ac.cn>.
Because in your permission set the "other" class cannot write to the
temp directory, and user3 is not in the same group as user2.
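
Spelling it out with the mode string from your mail (the stat flags below are GNU coreutils; adjust on other platforms):

     $ stat -c '%A %U:%G %n' /tmp/hadoop
     drwxrwxr-x user2:user2 /tmp/hadoop
     # d     directory
     # rwx   owner (user2): can write
     # rwx   group (user2): members could also write
     # r-x   other: user3 lands here, and with no write bit,
     #       File.createTempFile() under /tmp/hadoop must fail

If I remember right, the stock hadoop-default.xml sets hadoop.tmp.dir to /tmp/hadoop-${user.name}, which gives every user a private tree; a shared /tmp/hadoop only appears when the site config overrides that default.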

heyongqiang
2008-07-09

From: Joman Chu
Sent: 2008-07-09 13:06:51
To: core-user@hadoop.apache.org
Cc:
Subject: File permissions issue
