Posted to common-user@hadoop.apache.org by Kevin Burton <rk...@charter.net> on 2013/04/30 14:52:45 UTC
RE: Permission problem
I have relaxed it even further, so now it is 775:
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
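For context on why 775 does not help here: HDFS applies classic POSIX-style checks. If the requesting user is the inode owner, the owner bits apply; else, if the user is in the inode's group, the group bits apply; otherwise the "other" bits apply. The sketch below is illustrative (the helper is not Hadoop code), using the names from the log above:

```python
def hdfs_write_allowed(user, user_groups, owner, group, mode):
    """Mimic the POSIX-style check HDFS uses for WRITE access.

    mode is the octal permission, e.g. 0o775 for drwxrwxr-x.
    """
    if user == owner:
        bits = (mode >> 6) & 0o7   # owner bits
    elif group in user_groups:
        bits = (mode >> 3) & 0o7   # group bits
    else:
        bits = mode & 0o7          # "other" bits
    return bool(bits & 0o2)        # WRITE is the w bit

# inode "/" is hdfs:supergroup:drwxrwxr-x (775); user mapred is neither
# the owner nor (by default) a member of supergroup, so only the "other"
# bits (r-x) apply and WRITE is denied -- matching the exception above.
print(hdfs_write_allowed("mapred", ["mapred"], "hdfs", "supergroup", 0o775))  # False
```

By the same rule, adding mapred to the supergroup group (or chowning the directory) would make the group bits apply and the write succeed.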
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompatible clusterIDs
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"...
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
With assistance from others I think I have overcome the permission problem
(I need to file a JIRA because apparently MapRed and HDFS are trying to get
at the same tmp folder) but now I am up against another problem that I have
posted on the user site. It seems that running hadoop jar throws an
exception but java -jar does not. The details are on my post to
user@hadoop.apache.org.
Thank you.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 12:07 PM
To: user@hadoop.apache.org
Subject: Re: Permission problem
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
Kevin
You will have to create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affects version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and, as I
indicated, it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme, as there is a dfs, mapred,
and tmp folder. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
777 /data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it, hadoop.tmp.dir is on the local file system.
I changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS? I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
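As a reading aid for the listings in this thread, the mode strings map to octal like so (a small parser, purely illustrative, not part of Hadoop):

```python
def mode_string_to_octal(s):
    """Convert an ls-style mode string like 'drwxrwxr-x' to an octal mode."""
    perms = s[1:10]                # drop the file-type character, keep 9 bits
    value = 0
    for ch in perms:
        value = (value << 1) | (ch != "-")  # any non-dash grants the bit
    return value

print(oct(mode_string_to_octal("drwxrwxr-x")))  # 0o775
print(oct(mode_string_to_octal("drwxrwxrwt")))  # 0o777 (the 't' is x plus the sticky bit)
```

The sticky bit ('t' on /tmp above) means only a file's owner can delete or rename entries in the directory, even though it is world-writable.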
When you suggest that I 'chmod -R 777 /data', you are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First, /data is the
mount point for this drive and there are other uses for it than hadoop, so
there are other folders. That is why there is /data/hadoop. As far as
hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
Ah, this is what mapred.system.dir defaults to:
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name},
then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R
777 /data, or you can remove hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
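The two values compose via Java-style ${var} substitution, with user.name being the user the daemon runs as. A rough sketch of how the path in the log is derived (the expand helper is illustrative, not Hadoop's actual Configuration resolver):

```python
import re

def expand(value, props):
    """Naively expand ${name} references, one level at a time."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: props[m.group(1)], value)

props = {
    "user.name": "mapred",  # the JobTracker runs as user 'mapred'
    "hadoop.tmp.dir": "/data/hadoop/tmp/hadoop-${user.name}",
}
tmp_dir = expand(props["hadoop.tmp.dir"], props)
system_dir = expand("${hadoop.tmp.dir}/mapred/system",
                    {**props, "hadoop.tmp.dir": tmp_dir})
print(system_dir)  # /data/hadoop/tmp/hadoop-mapred/mapred/system
```

This is why the JobTracker ends up trying to create a directory under /data/hadoop/tmp on HDFS even though the setting was intended for the local disk.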
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following: create /tmp on hdfs and chmod it to 777 as
user hdfs, and then restart the jobtracker and tasktrackers.
In case it's set to /mapred/something, then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue, the log file
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred, but the name of the file seems to indicate some
other lineage (hadoop:hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and M/R. Ideas?
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
With assistance from others I think I have overcome the permission problem
(I need to file a JIRA because apparently MapRed and HDFS are trying to get
at the same tmp folder) but now I am up against another problem that I have
posted on the user site. It seems that running hadoop jar throws an
exception but java -jar does not. The details are on my post to
user@hadoop.apache.org.
Thank you.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 12:07 PM
To: user@hadoop.apache.org
Subject: Re: Permission problem
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
Kevin
You will have create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
<image001.png>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.sytem.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So thats why its trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@ <http://charter.net> charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [ <ma...@gmail.com>
mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
<mailto:kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$>
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto: <ma...@gmail.com>
dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
With assistance from others I think I have overcome the permission problem
(I need to file a JIRA because apparently MapRed and HDFS are trying to get
at the same tmp folder) but now I am up against another problem that I have
posted on the user site. It seems that running hadoop jar throws an
exception but java -jar does not. The details are on my post to
user@hadoop.apache.org.
Thank you.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 12:07 PM
To: user@hadoop.apache.org
Subject: Re: Permission problem
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
Kevin
You will have create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
<image001.png>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.sytem.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So thats why its trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case it's set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
With assistance from others I think I have overcome the permission problem
(I need to file a JIRA because apparently MapRed and HDFS are trying to get
at the same tmp folder) but now I am up against another problem that I have
posted on the user site. It seems that running hadoop jar throws an
exception but java -jar does not. The details are on my post to
user@hadoop.apache.org.
Thank you.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 12:07 PM
To: user@hadoop.apache.org
Subject: Re: Permission problem
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
Kevin
You will have to create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
<image001.png>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
777 /data.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
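For reference, a sketch of the mapred-site.xml override that last suggestion describes (the property name comes from the default shown above; the value is only the suggested /tmp/mapred/system, not a verified setting):

```xml
<!-- mapred-site.xml (sketch): pin mapred.system.dir to a fixed HDFS path
     instead of letting it derive from hadoop.tmp.dir -->
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
  <description>The directory where MapReduce stores control files.</description>
</property>
```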
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
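A side note, offered as an assumption worth checking rather than a confirmed fix: hadoop.tmp.dir is defined in core-default.xml, so it conventionally lives in core-site.xml rather than hdfs-site.xml. A sketch with the same value:

```xml
<!-- core-site.xml (sketch): hadoop.tmp.dir conventionally belongs here,
     since it is a core-default property read by all daemons -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop/tmp/hadoop-${user.name}</value>
  <description>Hadoop temporary folder</description>
</property>
```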
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case it's set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
Re: Permission problem
Posted by Mohammad Tariq <do...@gmail.com>.
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
> Kevin
>
> You will have to create a new account if you did not have one before.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I don’t see a “create issue” button or tab. If I need to log in then I am
> not sure what credentials I should use to log in because all I tried failed.
>
>
>
> <image001.png>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
>
> *Sent:* Tuesday, April 30, 2013 11:02 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> https://issues.apache.org/jira/browse/HADOOP and select create issue.
>
>
>
> Set the affect version to the release you are testing and add some basic
> description.
>
>
>
> Here are the commands you should run.
>
>
>
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
>
>
>
> and
>
>
>
> sudo –u hdfs hadoop fs –chmod -R 777 /data
>
>
>
> chmod is also for the directory on hdfs.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
> I am not sure how to create a jira.
>
>
>
> Again I am not sure I understand your workaround. You are suggesting that
> I create /data/hadoop/tmp on HDFS like:
>
>
>
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
>
>
>
> I don’t think I can chmod –R 777 on /data since it is a disk and as I
> indicated it is being used to store data other than that used by hadoop.
> Even chmod –R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
> and tmp folder. Which one of these local folders needs to be opened up? I
> would rather not open up all folders to the world if at all possible.
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs
> directories. Can you create a jira for this?
>
>
>
> What I recommended is that you create /data/hadoop/tmp on hdfs and chmod
> -R 777 /data.
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
> I am not clear on what you are suggesting to create on HDFS or the local
> file system. As I understand it hadoop.tmp.dir is the local file system. I
> changed it so that the temporary files would be on a disk that has more
> capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
> HDFS. I already have this created.
>
>
>
> Found 1 items
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> When you suggest that I ‘chmod –R 777 /data’. You are suggesting that I
> open up all the data to everyone? Isn’t that a bit extreme? First /data is
> the mount point for this drive and there are other uses for this drive than
> hadoop so there are other folders. That is why there is /data/hadoop. As
> far as hadoop is concerned:
>
>
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
>
> total 12
>
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
>
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
>
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
>
>
> dfs would be where the data blocks for the hdfs file system would go,
> mapred would be the folder for M/R jobs, and tmp would be temporary
> storage. These are all on the local file system. Do I have to make all of
> this read-write for everyone in order to get it to work?
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:01 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> ah
>
>
>
> this is what mapred.system.dir defaults to
>
>
>
> <property>
>
> <name>mapred.system.dir</name>
>
> <value>${hadoop.tmp.dir}/mapred/system</value>
>
> <description>The directory where MapReduce stores control files.
>
> </description>
>
> </property>
>
>
>
>
>
> So that's why it's trying to write to
> /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
> then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
> /data or you can remove the hadoop.tmp.dir from your configs and let it be
> set to the default value of
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/tmp/hadoop-${user.name}</value>
>
> <description>A base for other temporary directories.</description>
>
> </property>
>
>
>
> So to fix your problem you can do the above or set mapred.system.dir to
> /tmp/mapred/system in your mapred-site.xml.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
> In core-site.xml I have:
>
>
>
> <property>
>
> <name>fs.default.name</name>
>
> <value>hdfs://devubuntu05:9000</value>
>
> <description>The name of the default file system. A URI whose scheme and
> authority determine the FileSystem implementation. </description>
>
> </property>
>
>
>
> In hdfs-site.xml I have
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
>
> <description>Hadoop temporary folder</description>
>
> </property>
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Based on the logs your system dir is set to
>
>
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
>
> Thank you.
>
>
>
> mapred.system.dir is not set. I am guessing that it is whatever the
> default is. What should I set it to?
>
>
>
> /tmp is already 777
>
>
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
>
> Found 1 items
>
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> But notice that the mapred folder in the /tmp folder is 755.
>
> So I changed it:
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
> /tmp/mapred/system
>
>
>
> I still get the errors in the log file:
>
>
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed
> to operate on mapred.system.dir (
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
> because of permissions.
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
> directory should be owned by the user 'mapred (auth:SIMPLE)'
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
> out ...
>
> . . . . .
>
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . .
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=mapred, access=WRITE,
> inode="/":hdfs:supergroup:drwxrwxr-x
>
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . . .
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:25 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
>
>
> By default it will write to /tmp on hdfs.
>
>
>
> So you can do the following
>
>
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
> jobtracker and tasktrackers.
>
>
>
> In case its set to /mapred/something then create /mapred and chown it to
> user mapred.
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
> To further complicate the issue the log file in
> (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
> is owned by mapred:mapred and the name of the file seems to indicate some
> other lineage (hadoop,hadoop). I am out of my league in understanding the
> permission structure for hadoop hdfs and mr. Ideas?
>
>
>
> *From:* Kevin Burton [mailto:rkevinburton@charter.net]
> *Sent:* Tuesday, April 30, 2013 8:31 AM
> *To:* user@hadoop.apache.org
> *Cc:* 'Mohammad Tariq'
> *Subject:* RE: Permission problem
>
>
>
> That is what I perceive as the problem. The hdfs file system was created
> with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R
> job the user ‘mapred’ needs to have write permission to the root. I don’t
> know how to satisfy both conditions. That is one reason that I relaxed the
> permission to 775 so that the group would also have write permission but
> that didn’t seem to help.
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Tuesday, April 30, 2013 8:20 AM
> *To:* Kevin Burton
> *Subject:* Re: Permission problem
>
>
>
> Which user? "ls" shows "hdfs" and the log says "mapred"..
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I have relaxed it even further so now it is 775
>
>
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
>
> Found 1 items
>
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
>
>
> But I still get this error:
>
>
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Monday, April 29, 2013 5:10 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Incompartible cluserIDS
>
>
>
> make it 755.
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
>
>
>
>
>
Re: Permission problem
Posted by Mohammad Tariq <do...@gmail.com>.
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
> Kevin
>
> You will have create a new account if you did not have one before.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I don’t see a “create issue” button or tab. If I need to log in then I am
> not sure what credentials I should use to log in because all I tried failed.
>
>
>
> <image001.png>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com <ar...@hortonworks.com>]
>
> *Sent:* Tuesday, April 30, 2013 11:02 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> https://issues.apache.org/jira/browse/HADOOP and select create issue.
>
>
>
> Set the affect version to the release you are testing and add some basic
> description.
>
>
>
> Here are the commands you should run.
>
>
>
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
>
>
>
> and
>
>
>
> sudo –u hdfs hadoop fs –chmod -R 777 /data
>
>
>
> chmod is also for the directory on hdfs.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
> I am not sure how to create a jira.
>
>
>
> Again I am not sure I understand your workaround. You are suggesting that
> I create /data/hadoop/tmp on HDFS like:
>
>
>
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
>
>
>
> I don’t think I can chmod –R 777 on /data since it is a disk and as I
> indicated it is being used to store data other than that used by hadoop.
> Even chmod –R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
> and tmp folder. Which one of these local folders need to be opened up? I
> would rather not open up all folders to the world if at all possible.
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs
> directories. Can you create a jira for this?
>
>
>
> What i recommended is that you create /data/hadoop/tmp on hdfs and chmod
> -R /data
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
> I am not clear on what you are suggesting to create on HDFS or the local
> file system. As I understand it hadoop.tmp.dir is the local file system. I
> changed it so that the temporary files would be on a disk that has more
> capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
> HDFS. I already have this created.
>
>
>
> Found 1 items
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> When you suggest that I ‘chmod –R 777 /data’. You are suggesting that I
> open up all the data to everyone? Isn’t that a bit extreme? First /data is
> the mount point for this drive and there are other uses for this drive than
> hadoop so there are other folders. That is why there is /data/hadoop. As
> far as hadoop is concerned:
>
>
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
>
> total 12
>
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
>
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
>
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
>
>
> dfs would be where the data blocks for the hdfs file system would go,
> mapred would be the folder for M/R jobs, and tmp would be temporary
> storage. These are all on the local file system. Do I have to make all of
> this read-write for everyone in order to get it to work?
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:01 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> ah
>
>
>
> this is what mapred.sytem.dir defaults to
>
>
>
> <property>
>
> <name>mapred.system.dir</name>
>
> <value>${hadoop.tmp.dir}/mapred/system</value>
>
> <description>The directory where MapReduce stores control files.
>
> </description>
>
> </property>
>
>
>
>
>
> So thats why its trying to write to
> /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
> then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
> /data or you can remove the hadoop.tmp.dir from your configs and let it be
> set to the default value of
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/tmp/hadoop-${user.name}</value>
>
> <description>A base for other temporary directories.</description>
>
> </property>
>
>
>
> So to fix your problem you can do the above or set mapred.system.dir to
> /tmp/mapred/system in your mapred-site.xml.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
> In core-site.xml I have:
>
>
>
> <property>
>
> <name>fs.default.name</name>
>
> <value>hdfs://devubuntu05:9000</value>
>
> <description>The name of the default file system. A URI whose scheme and
> authority determine the FileSystem implementation. </description>
>
> </property>
>
>
>
> In hdfs-site.xml I have
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
>
> <description>Hadoop temporary folder</description>
>
> </property>
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Based on the logs your system dir is set to
>
>
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
>
> Thank you.
>
>
>
> mapred.system.dir is not set. I am guessing that it is whatever the
> default is. What should I set it to?
>
>
>
> /tmp is already 777
>
>
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
>
> Found 1 items
>
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> But notice that the mapred folder in the /tmp folder is 755.
>
> So I changed it:
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
> /tmp/mapred/system
>
>
>
> I still get the errors in the log file:
>
>
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed
> to operate on mapred.system.dir (
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
> because of permissions.
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
> directory should be owned by the user 'mapred (auth:SIMPLE)'
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
> out ...
>
> . . . . .
>
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . .
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=mapred, access=WRITE,
> inode="/":hdfs:supergroup:drwxrwxr-x
>
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . . .
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:25 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
>
>
> By default it will write to /tmp on hdfs.
>
>
>
> So you can do the following
>
>
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
> jobtracker and tasktrackers.
>
>
>
> In case it's set to /mapred/something then create /mapred and chown it to
> user mapred.
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
> To further complicate the issue the log file in
> (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
> is owned by mapred:mapred and the name of the file seems to indicate some
> other lineage (hadoop,hadoop). I am out of my league in understanding the
> permission structure for hadoop hdfs and mr. Ideas?
>
>
>
> *From:* Kevin Burton [mailto:rkevinburton@charter.net]
> *Sent:* Tuesday, April 30, 2013 8:31 AM
> *To:* user@hadoop.apache.org
> *Cc:* 'Mohammad Tariq'
> *Subject:* RE: Permission problem
>
>
>
> That is what I perceive as the problem. The hdfs file system was created
> with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R
> job the user ‘mapred’ needs to have write permission to the root. I don’t
> know how to satisfy both conditions. That is one reason that I relaxed the
> permission to 775 so that the group would also have write permission but
> that didn’t seem to help.
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com <do...@gmail.com>]
> *Sent:* Tuesday, April 30, 2013 8:20 AM
> *To:* Kevin Burton
> *Subject:* Re: Permission problem
>
>
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I have relaxed it even further so now it is 775
>
>
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
>
> Found 1 items
>
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
>
>
> But I still get this error:
>
>
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Monday, April 29, 2013 5:10 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Incompartible cluserIDS
>
>
>
> make it 755.
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
>
>
>
>
>
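The path in the error above can be reproduced from the defaults quoted in this thread. A minimal sketch (values taken from the configs and logs above; the JobTracker runs as user mapred):

```shell
# mapred.system.dir defaults to ${hadoop.tmp.dir}/mapred/system, and
# hadoop.tmp.dir was set to /data/hadoop/tmp/hadoop-${user.name}.
# With user.name = mapred, the system dir therefore expands to:
hadoop_tmp_dir="/data/hadoop/tmp/hadoop-mapred"
mapred_system_dir="${hadoop_tmp_dir}/mapred/system"
echo "${mapred_system_dir}"
# prints /data/hadoop/tmp/hadoop-mapred/mapred/system
```

Since /data does not yet exist on HDFS, the JobTracker must create it under "/", which is why it needs write access to the root directory.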
Re: Permission problem
Posted by Mohammad Tariq <do...@gmail.com>.
Sorry Kevin, I was away for a while. Are you good now?
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 9:50 PM, Arpit Gupta <ar...@hortonworks.com> wrote:
> Kevin
>
> You will have to create a new account if you did not have one before.
>
> --
> Arpit
>
> On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I don’t see a “create issue” button or tab. If I need to log in then I am
> not sure what credentials I should use to log in because all I tried failed.
>
>
>
> <image001.png>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com <ar...@hortonworks.com>]
>
> *Sent:* Tuesday, April 30, 2013 11:02 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> https://issues.apache.org/jira/browse/HADOOP and select create issue.
>
>
>
> Set the affects version to the release you are testing and add some basic
> description.
>
>
>
> Here are the commands you should run.
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
>
>
> and
>
>
>
> sudo -u hdfs hadoop fs -chmod -R 777 /data
>
>
>
> chmod is also for the directory on hdfs.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
> I am not sure how to create a jira.
>
>
>
> Again I am not sure I understand your workaround. You are suggesting that
> I create /data/hadoop/tmp on HDFS like:
>
>
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
>
>
> I don’t think I can chmod -R 777 on /data since it is a disk and as I
> indicated it is being used to store data other than that used by hadoop.
> Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
> and tmp folder. Which one of these local folders needs to be opened up? I
> would rather not open up all folders to the world if at all possible.
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs
> directories. Can you create a jira for this?
>
>
>
> What I recommended is that you create /data/hadoop/tmp on hdfs and chmod
> -R 777 /data
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
> I am not clear on what you are suggesting to create on HDFS or the local
> file system. As I understand it hadoop.tmp.dir is the local file system. I
> changed it so that the temporary files would be on a disk that has more
> capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
> HDFS. I already have this created.
>
>
>
> Found 1 items
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> When you suggest that I ‘chmod -R 777 /data’, you are suggesting that I
> open up all the data to everyone? Isn’t that a bit extreme? First /data is
> the mount point for this drive and there are other uses for this drive than
> hadoop so there are other folders. That is why there is /data/hadoop. As
> far as hadoop is concerned:
>
>
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
>
> total 12
>
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
>
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
>
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
>
>
> dfs would be where the data blocks for the hdfs file system would go,
> mapred would be the folder for M/R jobs, and tmp would be temporary
> storage. These are all on the local file system. Do I have to make all of
> this read-write for everyone in order to get it to work?
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 10:01 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> ah
>
>
>
> this is what mapred.system.dir defaults to
>
>
>
> <property>
>
> <name>mapred.system.dir</name>
>
> <value>${hadoop.tmp.dir}/mapred/system</value>
>
> <description>The directory where MapReduce stores control files.
>
> </description>
>
> </property>
>
>
>
>
>
> So that’s why it’s trying to write to
> /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
> then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
> /data, or you can remove the hadoop.tmp.dir from your configs and let it be
> set to the default value of
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/tmp/hadoop-${user.name}</value>
>
> <description>A base for other temporary directories.</description>
>
> </property>
>
>
>
> So to fix your problem you can do the above or set mapred.system.dir to
> /tmp/mapred/system in your mapred-site.xml.
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
> In core-site.xml I have:
>
>
>
> <property>
>
> <name>fs.default.name</name>
>
> <value>hdfs://devubuntu05:9000</value>
>
> <description>The name of the default file system. A URI whose scheme and
> authority determine the FileSystem implementation. </description>
>
> </property>
>
>
>
> In hdfs-site.xml I have
>
>
>
> <property>
>
> <name>hadoop.tmp.dir</name>
>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
>
> <description>Hadoop temporary folder</description>
>
> </property>
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:48 AM
> *To:* Kevin Burton
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> Based on the logs your system dir is set to
>
>
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
>
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
>
> Thank you.
>
>
>
> mapred.system.dir is not set. I am guessing that it is whatever the
> default is. What should I set it to?
>
>
>
> /tmp is already 777
>
>
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
>
> Found 1 items
>
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
>
> Found 1 items
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> But notice that the mapred folder in the /tmp folder is 755.
>
> So I changed it:
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
>
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
>
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
>
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
> /tmp/mapred/system
>
>
>
> I still get the errors in the log file:
>
>
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed
> to operate on mapred.system.dir (
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
> because of permissions.
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
> directory should be owned by the user 'mapred (auth:SIMPLE)'
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
> out ...
>
> . . . . .
>
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . .
>
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
> Permission denied: user=mapred, access=WRITE,
> inode="/":hdfs:supergroup:drwxrwxr-x
>
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
> . . . . . .
>
>
>
>
>
> *From:* Arpit Gupta [mailto:arpit@hortonworks.com]
> *Sent:* Tuesday, April 30, 2013 9:25 AM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Permission problem
>
>
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
>
>
> By default it will write to /tmp on hdfs.
>
>
>
> So you can do the following
>
>
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
> jobtracker and tasktrackers.
>
>
>
> In case it's set to /mapred/something then create /mapred and chown it to
> user mapred.
>
>
>
>
> --
> Arpit Gupta
>
> Hortonworks Inc.
> http://hortonworks.com/
>
>
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
> wrote:
>
>
>
>
>
>
>
> To further complicate the issue the log file in
> (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
> is owned by mapred:mapred and the name of the file seems to indicate some
> other lineage (hadoop,hadoop). I am out of my league in understanding the
> permission structure for hadoop hdfs and mr. Ideas?
>
>
>
> *From:* Kevin Burton [mailto:rkevinburton@charter.net]
> *Sent:* Tuesday, April 30, 2013 8:31 AM
> *To:* user@hadoop.apache.org
> *Cc:* 'Mohammad Tariq'
> *Subject:* RE: Permission problem
>
>
>
> That is what I perceive as the problem. The hdfs file system was created
> with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R
> job the user ‘mapred’ needs to have write permission to the root. I don’t
> know how to satisfy both conditions. That is one reason that I relaxed the
> permission to 775 so that the group would also have write permission but
> that didn’t seem to help.
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com <do...@gmail.com>]
> *Sent:* Tuesday, April 30, 2013 8:20 AM
> *To:* Kevin Burton
> *Subject:* Re: Permission problem
>
>
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
> wrote:
>
> I have relaxed it even further so now it is 775
>
>
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
>
> Found 1 items
>
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
>
>
> But I still get this error:
>
>
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
>
>
>
> *From:* Mohammad Tariq [mailto:dontariq@gmail.com]
> *Sent:* Monday, April 29, 2013 5:10 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Incompartible cluserIDS
>
>
>
> make it 755.
>
>
> Warm Regards,
>
> Tariq
>
> https://mtariq.jux.com/
>
> cloudfront.blogspot.com
>
>
>
>
>
>
>
>
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
Kevin
You will have to create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don’t see a “create issue” button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
<image001.png>
*From:* Arpit Gupta [mailto:arpit@hortonworks.com <ar...@hortonworks.com>]
*Sent:* Tuesday, April 30, 2013 11:02 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affects version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don’t think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
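A narrower alternative than opening all of /data, sketched from the paths in the JobTracker log above (these are HDFS paths, not local ones, and the commands assume an hdfs superuser account; they hand only the MapReduce system subtree to mapred):

```shell
# Create just the directory tree mapred.system.dir resolves to on HDFS and
# give it to the mapred user, leaving the rest of /data at 775.
# (In Hadoop 1.x, "hadoop fs -mkdir" creates missing parent directories.)
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp/hadoop-mapred/mapred/system
sudo -u hdfs hadoop fs -chown -R mapred /data/hadoop/tmp/hadoop-mapred
```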
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 10:48 AM
*To:* Kevin Burton
*Cc:* user@hadoop.apache.org
*Subject:* Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
777 /data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I ‘chmod -R 777 /data’, you are suggesting that I
open up all the data to everyone? Isn’t that a bit extreme? First /data is
the mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As
far as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go,
mapred would be the folder for M/R jobs, and tmp would be temporary
storage. These are all on the local file system. Do I have to make all of
this read-write for everyone in order to get it to work?
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 10:01 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that’s why it’s trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
/data, or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
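Concretely, that override would look something like this in mapred-site.xml (a sketch; the value comes from the suggestion above, and the description text is illustrative):

```xml
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
  <description>Where MapReduce stores control files; kept under the
  world-writable /tmp on HDFS.</description>
</property>
```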
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 9:48 AM
*To:* Kevin Burton
*Cc:* user@hadoop.apache.org
*Subject:* Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
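A minimal sketch of the permission check behind that message (illustrative Python, not Hadoop's actual code). Because 'mapred' is neither the owner ('hdfs') nor, on this cluster, a member of 'supergroup', only the 'other' bits ever apply to it — which is why loosening '/' from 755 to 775 changed nothing:

```python
def can_write(user, user_groups, owner, group, mode):
    """Decide WRITE access from an ls-style mode string like 'drwxrwxr-x'."""
    if user == owner:
        return mode[2] == "w"   # owner write bit
    if group in user_groups:
        return mode[5] == "w"   # group write bit
    return mode[8] == "w"       # 'other' write bit

# The situation from the log: inode "/" is hdfs:supergroup:drwxrwxr-x
print(can_write("mapred", {"mapred", "hadoop"}, "hdfs", "supergroup", "drwxrwxr-x"))      # False
# Adding mapred to supergroup (or chmod 777 /) would flip the answer:
print(can_write("mapred", {"mapred", "supergroup"}, "hdfs", "supergroup", "drwxrwxr-x"))  # True
```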
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 9:25 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case it's set to /mapred/something, then create /mapred and chown it to
user mapred.
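A local-filesystem sketch of the mode being described (not run against HDFS; 'hadoop fs -chmod' accepts the same octal modes). Note that a healthy HDFS /tmp carries the sticky-bit variant, 1777, which is the drwxrwxrwt shown by the 'hadoop fs -ls -d /tmp' listings in this thread:

```python
import os
import shutil
import stat
import tempfile

# Create a scratch dir, give its 'tmp' child mode 1777 (world-writable plus
# sticky bit), and render the resulting ls-style mode string.
d = tempfile.mkdtemp()
child = os.path.join(d, "tmp")
os.mkdir(child)
os.chmod(child, 0o1777)
mode = stat.filemode(os.stat(child).st_mode)
print(mode)  # drwxrwxrwt
shutil.rmtree(d)
```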
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
wrote:
To further complicate the issue, the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
is owned by mapred:mapred, and the name of the file seems to indicate some
other lineage (hadoop, hadoop). I am out of my league in understanding the
permission structure for Hadoop HDFS and MR. Ideas?
*From:* Kevin Burton [mailto:rkevinburton@charter.net]
*Sent:* Tuesday, April 30, 2013 8:31 AM
*To:* user@hadoop.apache.org
*Cc:* 'Mohammad Tariq'
*Subject:* RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R
job the user ‘mapred’ needs to have write permission to the root. I don’t
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn’t seem to help.
*From:* Mohammad Tariq [mailto:dontariq@gmail.com <do...@gmail.com>]
*Sent:* Tuesday, April 30, 2013 8:20 AM
*To:* Kevin Burton
*Subject:* Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
*From:* Mohammad Tariq [mailto:dontariq@gmail.com]
*Sent:* Monday, April 29, 2013 5:10 PM
*To:* user@hadoop.apache.org
*Subject:* Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
Kevin
You will have to create a new account if you did not have one before.
--
Arpit
On Apr 30, 2013, at 9:11 AM, Kevin Burton <rk...@charter.net> wrote:
I don’t see a “create issue” button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
<image001.png>
*From:* Arpit Gupta [mailto:arpit@hortonworks.com <ar...@hortonworks.com>]
*Sent:* Tuesday, April 30, 2013 11:02 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affects version to the release you are testing and add a basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and, as I
indicated, it is being used to store data other than that used by Hadoop.
Even chmod -R 777 on /data/hadoop seems extreme, as there are dfs, mapred,
and tmp folders. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 10:48 AM
*To:* Kevin Burton
*Cc:* user@hadoop.apache.org
*Subject:* Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
777 /data.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it, hadoop.tmp.dir is on the local file system.
I changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', you are suggesting that I
open up all the data to everyone? Isn't that a bit extreme? First, /data is
the mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As
far as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go,
mapred would be the folder for M/R jobs, and tmp would be temporary
storage. These are all on the local file system. Do I have to make all of
this read-write for everyone in order to get it to work?
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 10:01 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
/data, or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem, you can do the above, or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
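That substitution can be sketched with a plain string replace (standing in for Hadoop's real configuration variable expansion; all values are the ones quoted in this thread):

```python
# Values quoted earlier in the thread
fs_default_name = "hdfs://devubuntu05:9000"               # core-site.xml
hadoop_tmp_dir = "/data/hadoop/tmp/hadoop-${user.name}"   # hdfs-site.xml
mapred_system = "${hadoop.tmp.dir}/mapred/system"         # mapred-default

# Expand the variables the way Hadoop would for the JobTracker (user 'mapred')
path = mapred_system.replace("${hadoop.tmp.dir}", hadoop_tmp_dir)
path = path.replace("${user.name}", "mapred")
print(fs_default_name + path)
# -> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
#    i.e. exactly the mapred.system.dir the JobTracker log complains about
```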
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 9:48 AM
*To:* Kevin Burton
*Cc:* user@hadoop.apache.org
*Subject:* Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
*From:* Arpit Gupta [mailto:arpit@hortonworks.com]
*Sent:* Tuesday, April 30, 2013 9:25 AM
*To:* user@hadoop.apache.org
*Subject:* Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case it's set to /mapred/something, then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log)
is owned by mapred:mapred and the name of the file seems to indicate some
other lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
*From:* Kevin Burton [mailto:rkevinburton@charter.net]
*Sent:* Tuesday, April 30, 2013 8:31 AM
*To:* user@hadoop.apache.org
*Cc:* 'Mohammad Tariq'
*Subject:* RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R
job the user ‘mapred’ needs to have write permission to the root. I don’t
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn’t seem to help.
*From:* Mohammad Tariq [mailto:dontariq@gmail.com <do...@gmail.com>]
*Sent:* Tuesday, April 30, 2013 8:20 AM
*To:* Kevin Burton
*Subject:* Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
*From:* Mohammad Tariq [mailto:dontariq@gmail.com]
*Sent:* Monday, April 29, 2013 5:10 PM
*To:* user@hadoop.apache.org
*Subject:* Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affected version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
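After running the two commands above, the result can be checked from any node (a sketch; same paths as above):

```shell
# Verify the directory exists and is now world-writable on HDFS.
sudo -u hdfs hadoop fs -ls -d /data/hadoop/tmp   # should show drwxrwxrwx
```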
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there are dfs, mapred,
and tmp folders. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on HDFS and chmod -R
777 /data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', you are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First, /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then I would suggest that you create /data/hadoop/tmp on HDFS and chmod -R 777
/data, or you can remove hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
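The second fix mentioned above (pointing mapred.system.dir at /tmp/mapred/system) would look like this in mapred-site.xml; a sketch only, with the property name and value taken from this thread:

```xml
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
  <description>The directory on HDFS where MapReduce stores control files.</description>
</property>
```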
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
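The warning above says the system directory "should be owned by the user 'mapred'", which chmod on / alone does not address. One way to satisfy it, sketched as commands (assumes the mapred.system.dir path shown in the log and an 'hdfs' superuser; adjust to your config):

```shell
# Create the directory mapred.system.dir resolves to and hand it to 'mapred'
# (path taken from the log lines above).
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp/hadoop-mapred/mapred/system
sudo -u hdfs hadoop fs -chown -R mapred /data/hadoop/tmp/hadoop-mapred
# ...then restart the jobtracker so it re-checks ownership.
```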
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.sytem.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So thats why its trying to write to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value> <hdfs://devubuntu05:9000%3c/value%3e>
hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@ <http://charter.net> charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [ <ma...@gmail.com>
mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
<mailto:kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$>
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto: <ma...@gmail.com>
dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.sytem.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So thats why its trying to write to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value> <hdfs://devubuntu05:9000%3c/value%3e>
hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@ <http://charter.net> charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [ <ma...@gmail.com>
mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
<mailto:kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$>
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto: <ma...@gmail.com>
dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I don't see a "create issue" button or tab. If I need to log in then I am
not sure what credentials I should use to log in because all I tried failed.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 11:02 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic
description.
Here are the commands you should run.
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
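For reference, that value falls out of plain variable expansion. A quick sketch (this mimics, rather than reuses, Hadoop's ${...} substitution, and assumes the daemon user is mapred):

```shell
# Sketch of how the default mapred.system.dir is derived.
# Assumption: the JobTracker runs as user 'mapred', so ${user.name} = mapred.
user_name="mapred"
hadoop_tmp_dir="/data/hadoop/tmp/hadoop-${user_name}"   # hadoop.tmp.dir from the configs
mapred_system_dir="${hadoop_tmp_dir}/mapred/system"     # default: ${hadoop.tmp.dir}/mapred/system
echo "${mapred_system_dir}"   # /data/hadoop/tmp/hadoop-mapred/mapred/system
```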
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
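That second option would look roughly like the following fragment in mapred-site.xml (a sketch; /tmp/mapred/system is the path suggested in this thread):

```xml
<!-- mapred-site.xml: point the JobTracker system directory at a fixed
     HDFS path instead of the ${hadoop.tmp.dir}-derived default -->
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
  <description>The directory where MapReduce stores control files.</description>
</property>
```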
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
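Taken together, the steps suggested above amount to something like the following, run on the JobTracker node (a sketch assuming the hdfs superuser and mapred service user from this thread; these commands need a live cluster, so verify your actual mapred.system.dir value first):

```shell
# Create the system dir on HDFS as the HDFS superuser...
sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system
# ...then hand ownership to the 'mapred' user the JobTracker runs as.
sudo -u hdfs hadoop fs -chown -R mapred /tmp/mapred
# Finally restart the JobTracker and TaskTrackers so they re-check permissions.
```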
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic description.
Here are the commands you should run.
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
> sudo -u hdfs hadoop fs -chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net> wrote:
> I am not sure how to create a jira.
>
> Again I am not sure I understand your workaround. You are suggesting that I create /data/hadoop/tmp on HDFS like:
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
> I don’t think I can chmod -R 777 on /data since it is a disk and as I indicated it is being used to store data other than that used by hadoop. Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred, and tmp folder. Which one of these local folders needs to be opened up? I would rather not open up all folders to the world if at all possible.
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs directories. Can you create a jira for this?
>
> What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R /data
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I ‘chmod -R 777 /data’. You are suggesting that I open up all the data to everyone? Isn’t that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.system.dir defaults to
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.
> </description>
> </property>
>
>
> So thats why its trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777 /data or you can remove the hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user?"ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
https://issues.apache.org/jira/browse/HADOOP and select create issue.
Set the affect version to the release you are testing and add some basic description.
Here are the commands you should run.
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
and
> sudo –u hdfs hadoop fs –chmod -R 777 /data
chmod is also for the directory on hdfs.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net> wrote:
> I am not sure how to create a jira.
>
> Again I am not sure I understand your workaround. You are suggesting that I create /data/hadoop/tmp on HDFS like:
>
> sudo –u hdfs hadoop fs –mkdir /data/hadoop/tmp
>
> I don’t think I can chmod –R 777 on /data since it is a disk and as I indicated it is being used to store data other than that used by hadoop. Even chmod –R 777 on /data/hadoop seems extreme as there is a dfs, mapred, and tmp folder. Which one of these local folders need to be opened up? I would rather not open up all folders to the world if at all possible.
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs directories. Can you create a jira for this?
>
> What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R /data
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I ‘chmod –R 777 /data’. You are suggesting that I open up all the data to everyone? Isn’t that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.system.dir defaults to
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.
> </description>
> </property>
>
>
> So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
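That path falls out of plain `${...}` placeholder substitution over the configured properties. A minimal sketch of that expansion, using the property values quoted in this thread (the `resolve` helper is an illustration, not Hadoop's actual resolver):

```python
# Sketch of how Hadoop-style ${...} placeholders resolve.
# Property values are taken from the configs quoted in this thread;
# the JobTracker runs as user 'mapred', hence user.name below.
import re

props = {
    "hadoop.tmp.dir": "/data/hadoop/tmp/hadoop-${user.name}",
    "mapred.system.dir": "${hadoop.tmp.dir}/mapred/system",
    "user.name": "mapred",
}

def resolve(key):
    value = props[key]
    # Repeatedly substitute ${name} with its property value until none remain.
    while "${" in value:
        value = re.sub(r"\$\{([^}]+)\}", lambda m: props[m.group(1)], value)
    return value

print(resolve("mapred.system.dir"))  # /data/hadoop/tmp/hadoop-mapred/mapred/system
```

This is why the JobTracker ends up trying to create a directory under /data/hadoop/tmp on HDFS even though hadoop.tmp.dir was intended as a local path.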
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
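For the second option, the override would look something like this in mapred-site.xml, following the property format quoted above (the value /tmp/mapred/system is the one suggested in this thread):

```xml
<!-- Pin mapred.system.dir to a fixed HDFS path so the JobTracker
     no longer derives it from the local-oriented hadoop.tmp.dir -->
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
  <description>The directory where MapReduce stores control files.</description>
</property>
```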
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
Go to https://issues.apache.org/jira/browse/HADOOP and select "Create Issue".
Set the affects version to the release you are testing and add a basic description.
Here are the commands you should run.
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
and
> sudo -u hdfs hadoop fs -chmod -R 777 /data
The chmod, like the mkdir, applies to the directory on HDFS, not the local filesystem.
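To see why the JobTracker was still denied at 775: HDFS applies the usual POSIX-style owner/group/other check, and user 'mapred' is neither the owner 'hdfs' nor (apparently) a member of 'supergroup', so only the "other" bits count. A small sketch of that check (illustrative only, not HDFS's actual code; the group memberships are assumptions):

```python
# Illustrative POSIX-style permission check, mirroring the logic behind
# "Permission denied: user=mapred, access=WRITE,
#  inode='/':hdfs:supergroup:drwxrwxr-x".
W = 2  # write bit

def may_write(user, groups, owner, group, mode):
    """mode is the octal permission, e.g. 0o775 for drwxrwxr-x."""
    if user == owner:
        bits = (mode >> 6) & 7   # owner bits
    elif group in groups:
        bits = (mode >> 3) & 7   # group bits
    else:
        bits = mode & 7          # "other" bits
    return bool(bits & W)

# 'mapred' is not the owner and (assumed) not in 'supergroup': denied at 775...
print(may_write("mapred", {"mapred", "hadoop"}, "hdfs", "supergroup", 0o775))  # False
# ...but allowed at 777, which is why chmod 777 makes the error go away.
print(may_write("mapred", {"mapred", "hadoop"}, "hdfs", "supergroup", 0o777))  # True
```

The cleaner alternative, rather than 777, is to make 'mapred' the owner of the directory the JobTracker actually writes to, as suggested elsewhere in this thread.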
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:57 AM, "Kevin Burton" <rk...@charter.net> wrote:
> I am not sure how to create a jira.
>
> Again I am not sure I understand your workaround. You are suggesting that I create /data/hadoop/tmp on HDFS like:
>
> sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
>
> I don’t think I can chmod -R 777 on /data since it is a disk and as I indicated it is being used to store data other than that used by hadoop. Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred, and tmp folder. Which one of these local folders needs to be opened up? I would rather not open up all folders to the world if at all possible.
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> It looks like hadoop.tmp.dir is being used both for local and hdfs directories. Can you create a jira for this?
>
> What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it, hadoop.tmp.dir is on the local file system. I changed it so that the temporary files would be on a disk that has more capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I ‘chmod -R 777 /data’, you are suggesting that I open up all the data to everyone? Isn’t that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.system.dir defaults to
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.
> </description>
> </property>
>
>
> So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders needs to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', you are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and hdfs
directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R
/data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data'. You are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.sytem.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So thats why its trying to write to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777
/data or you can remove the hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value> <hdfs://devubuntu05:9000%3c/value%3e>
hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@ <http://hortonworks.com> hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
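[Editor's note] The repeated denial is consistent with how HDFS evaluates permissions: exactly one class of bits is consulted per access (owner if the caller owns the inode, else group if the caller belongs to the owning group, else other). Since "/" is hdfs:supergroup and mapred is normally not a member of supergroup, mode 775 still leaves mapred with only the "other" bits, r-x. A minimal sketch of that selection rule (illustrative shell, not Hadoop source; user and group names taken from the log above):

```shell
#!/bin/sh
# Pick the permission class an HDFS-style check would consult:
# owner first, then group membership, then "other". Only that one
# class's bits are tested, so 775 on "/" still denies WRITE to a
# user who is neither the owner nor in the owning group.
perm_class() {
  user=$1; user_groups=$2; owner=$3; owner_group=$4
  if [ "$user" = "$owner" ]; then
    echo owner
  elif printf ' %s ' $user_groups | grep -q " $owner_group "; then
    echo group
  else
    echo other
  fi
}

# inode "/" is hdfs:supergroup; mapred is typically only in group
# mapred, so the "other" bits (r-x) apply and WRITE is denied.
perm_class mapred "mapred" hdfs supergroup   # -> other
perm_class hdfs "hdfs" hdfs supergroup       # -> owner
```

Adding mapred to the supergroup group (or chowning the directory) would move the check into the group class, which is why group-write alone did not help here.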
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
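[Editor's note] Spelled out as commands, the suggestion above looks roughly like this (a sketch against this thread's CDH4/MRv1-era setup; service names vary by install and are an assumption inferred from the /var/log/hadoop-0.20-mapreduce path seen earlier):

```shell
# Default case: mapred.system.dir lands under /tmp on HDFS.
sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod 777 /tmp

# If mapred.system.dir is set to /mapred/something instead:
sudo -u hdfs hadoop fs -mkdir /mapred
sudo -u hdfs hadoop fs -chown mapred /mapred

# Then restart the JobTracker and TaskTrackers (common CDH4 MRv1
# service names; adjust for your distribution).
sudo service hadoop-0.20-mapreduce-jobtracker restart
sudo service hadoop-0.20-mapreduce-tasktracker restart
```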
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for Hadoop HDFS and M/R. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompatible clusterIDs
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I am not sure how to create a jira.
Again I am not sure I understand your workaround. You are suggesting that I
create /data/hadoop/tmp on HDFS like:
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp
I don't think I can chmod -R 777 on /data since it is a disk and as I
indicated it is being used to store data other than that used by hadoop.
Even chmod -R 777 on /data/hadoop seems extreme as there is a dfs, mapred,
and tmp folder. Which one of these local folders need to be opened up? I
would rather not open up all folders to the world if at all possible.
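[Editor's note] Part of the confusion in this exchange is that the same path can name both a local directory and an HDFS directory. A quick way to see which namespace you are looking at, using commands already shown in this thread:

```shell
ls -ld /data/hadoop/tmp             # local disk on this node
hadoop fs -ls -d /data/hadoop/tmp   # HDFS namespace (may not exist yet)
```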
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
It looks like hadoop.tmp.dir is being used both for local and HDFS
directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on HDFS and chmod -R
777 /data.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net>
wrote:
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', you are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First, /data is the
mount point for this drive and there are other uses for this drive than
hadoop so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name},
then I would suggest that you create /data/hadoop/tmp on HDFS and chmod -R
777 /data. Or you can remove hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
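[Editor's note] The second fix above, written out as the mapred-site.xml fragment it implies (a sketch: the property name and value come from the suggestion in this message; writing to a local snippet file first is an editorial choice, and merging it into /etc/hadoop/conf/mapred-site.xml, the conventional location, is a manual step):

```shell
# Generate the suggested override as a reviewable snippet before
# merging it into mapred-site.xml on the JobTracker node.
cat > mapred-system-dir.xml <<'EOF'
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
</property>
EOF
grep '<value>' mapred-system-dir.xml
```

With this set, the JobTracker no longer derives its system directory from hadoop.tmp.dir, so the /data/hadoop/tmp path stops leaking into the HDFS namespace.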
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
It looks like hadoop.tmp.dir is being used both for local and HDFS directories. Can you create a jira for this?
What I recommended is that you create /data/hadoop/tmp on HDFS and chmod -R 777 /data.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net> wrote:
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I ‘chmod –R 777 /data’. You are suggesting that I open up all the data to everyone? Isn’t that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.sytem.dir defaults to
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.
> </description>
> </property>
>
>
> So thats why its trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then i would suggest that create /data/hadoop/tmp on hdfs and chmod -R 777 /data or you can remove the hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user?"ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
It looks like hadoop.tmp.dir is being used both for local and hdfs directories. Can you create a jira for this?
What i recommended is that you create /data/hadoop/tmp on hdfs and chmod -R /data
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 8:22 AM, "Kevin Burton" <rk...@charter.net> wrote:
> I am not clear on what you are suggesting to create on HDFS or the local file system. As I understand it hadoop.tmp.dir is the local file system. I changed it so that the temporary files would be on a disk that has more capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on HDFS. I already have this created.
>
> Found 1 items
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> When you suggest that I ‘chmod –R 777 /data’. You are suggesting that I open up all the data to everyone? Isn’t that a bit extreme? First /data is the mount point for this drive and there are other uses for this drive than hadoop so there are other folders. That is why there is /data/hadoop. As far as hadoop is concerned:
>
> kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
> total 12
> drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
> drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
> drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
>
> dfs would be where the data blocks for the hdfs file system would go, mapred would be the folder for M/R jobs, and tmp would be temporary storage. These are all on the local file system. Do I have to make all of this read-write for everyone in order to get it to work?
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 10:01 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> ah
>
> this is what mapred.system.dir defaults to:
>
> <property>
> <name>mapred.system.dir</name>
> <value>${hadoop.tmp.dir}/mapred/system</value>
> <description>The directory where MapReduce stores control files.
> </description>
> </property>
>
>
> So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}, then I would suggest that you create /data/hadoop/tmp on HDFS and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/tmp/hadoop-${user.name}</value>
> <description>A base for other temporary directories.</description>
> </property>
>
> So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
>
> To further complicate the issue, the log file (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred, yet its name seems to indicate some other lineage (hadoop, hadoop). I am out of my league in understanding the permission structure for Hadoop HDFS and MR. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The HDFS file system was created with the user ‘hdfs’ owning the root (‘/’), but for some reason an M/R job requires the user ‘mapred’ to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission, but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> Which user? "ls" shows "hdfs" and the log says "mapred"...
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
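The default-value substitution walked through in the post above can be traced with a short sketch (illustrative only, not Hadoop's actual configuration code; it assumes the JobTracker resolves ${user.name} to mapred, as the log messages indicate):

```python
# Sketch of Hadoop-style ${var} expansion for the properties quoted in the
# thread. Illustrative only; assumes the daemon runs as user "mapred".
props = {
    'hadoop.tmp.dir': '/data/hadoop/tmp/hadoop-${user.name}',
    'mapred.system.dir': '${hadoop.tmp.dir}/mapred/system',
    'user.name': 'mapred',
}

def resolve(key):
    """Recursively expand ${...} references in a property value."""
    value = props[key]
    while '${' in value:
        start = value.index('${')
        end = value.index('}', start)
        inner = value[start + 2:end]
        value = value[:start] + resolve(inner) + value[end + 1:]
    return value

print(resolve('mapred.system.dir'))
# -> /data/hadoop/tmp/hadoop-mapred/mapred/system, the HDFS path in the error
```

With hadoop.tmp.dir overridden to a /data path, the derived mapred.system.dir lands under /data on HDFS, which is why the JobTracker ends up trying to create directories under the root inode.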
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it, hadoop.tmp.dir is on the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity than /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS? I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', are you suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First, /data is the
mount point for this drive and there are other uses for it than Hadoop,
so there are other folders. That is why there is /data/hadoop. As far
as Hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs is where the data blocks for the HDFS file system go, mapred
is the folder for M/R jobs, and tmp is temporary storage. These
are all on the local file system. Do I have to make all of this readable and
writable by everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to:
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name},
then I would suggest that you create /data/hadoop/tmp on HDFS and chmod -R 777
/data, or you can remove hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue, the log file
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred, yet its name seems to indicate some other
lineage (hadoop, hadoop). I am out of my league in understanding the
permission structure for Hadoop HDFS and MR. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The HDFS file system was created
with the user 'hdfs' owning the root ('/'), but for some reason an M/R
job requires the user 'mapred' to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission, but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
Which user? "ls" shows "hdfs" and the log says "mapred"...
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
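The recurring AccessControlException in this thread follows from how HDFS-style permission checking picks exactly one bit set per request: the owner bits if the user owns the inode, else the group bits if the user belongs to the inode's group, else the "other" bits. A minimal model (a sketch, not Hadoop's real implementation; the user and group names are taken from the thread) shows why mode 775 on / owned by hdfs:supergroup still denies WRITE to mapred:

```python
# Simplified model of an HDFS-style permission check (illustrative only).
def check_access(mode, owner, group, user, user_groups, want):
    """mode: octal int like 0o775; want: one of 'r', 'w', 'x'."""
    bit = {'r': 4, 'w': 2, 'x': 1}[want]
    if user == owner:
        perm = (mode >> 6) & 7      # owner bits apply
    elif group in user_groups:
        perm = (mode >> 3) & 7      # group bits apply
    else:
        perm = mode & 7             # "other" bits apply
    return bool(perm & bit)

# inode "/" is hdfs:supergroup with mode 775; mapred is not in supergroup,
# so only the "other" bits (r-x) are consulted and WRITE is refused.
denied = not check_access(0o775, 'hdfs', 'supergroup', 'mapred', {'mapred'}, 'w')
print(denied)  # True, matching the AccessControlException in the logs
```

Under this model, only opening / to 777 (or making mapred the owner or a member of supergroup) would grant the write, which is why the advice in the thread focuses on pointing mapred.system.dir at a directory mapred owns rather than widening permissions on the root.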
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
I am not clear on what you are suggesting to create on HDFS or the local
file system. As I understand it hadoop.tmp.dir is the local file system. I
changed it so that the temporary files would be on a disk that has more
capacity then /tmp. So you are suggesting that I create /data/hadoop/tmp on
HDFS. I already have this created.
Found 1 items
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:/etc/hadoop/conf$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
When you suggest that I 'chmod -R 777 /data', you are suggesting that I open
up all the data to everyone? Isn't that a bit extreme? First, /data is the
mount point for this drive and there are other uses for this drive than
hadoop, so there are other folders. That is why there is /data/hadoop. As far
as hadoop is concerned:
kevin@devUbuntu05:/etc/hadoop/conf$ ls -l /data/hadoop/
total 12
drwxrwxr-x 4 hdfs hadoop 4096 Apr 29 16:38 dfs
drwxrwxr-x 3 mapred hadoop 4096 Apr 29 11:33 mapred
drwxrwxrwx 3 hdfs hadoop 4096 Apr 19 15:14 tmp
dfs would be where the data blocks for the hdfs file system would go, mapred
would be the folder for M/R jobs, and tmp would be temporary storage. These
are all on the local file system. Do I have to make all of this read-write
for everyone in order to get it to work?
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 10:01 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to
/data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name}
then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777
/data, or you can remove hadoop.tmp.dir from your configs and let it be
set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to
/tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net>
wrote:
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case it's set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompatible clusterIDs
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
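Arpit's alternative quoted above, pointing mapred.system.dir at /tmp/mapred/system instead of under hadoop.tmp.dir, would look roughly like this in mapred-site.xml (a sketch; property name is the MR1 one used in this thread):

```
<!-- mapred-site.xml: keep the JobTracker's control files under /tmp on HDFS,
     decoupled from hadoop.tmp.dir (which also drives local-disk paths). -->
<property>
  <name>mapred.system.dir</name>
  <value>/tmp/mapred/system</value>
</property>
```

After changing this, the JobTracker and TaskTrackers would need a restart to pick it up.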
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
ah
this is what mapred.system.dir defaults to
<property>
<name>mapred.system.dir</name>
<value>${hadoop.tmp.dir}/mapred/system</value>
<description>The directory where MapReduce stores control files.
</description>
</property>
So that's why it's trying to write to /data/hadoop/tmp/hadoop-mapred/mapred/system
So if you want hadoop.tmp.dir to be /data/hadoop/tmp/hadoop-${user.name} then I would suggest that you create /data/hadoop/tmp on hdfs and chmod -R 777 /data, or you can remove hadoop.tmp.dir from your configs and let it be set to the default value of
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
<description>A base for other temporary directories.</description>
</property>
So to fix your problem you can do the above or set mapred.system.dir to /tmp/mapred/system in your mapred-site.xml.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:55 AM, "Kevin Burton" <rk...@charter.net> wrote:
> In core-site.xml I have:
>
> <property>
> <name>fs.default.name</name>
> <value>hdfs://devubuntu05:9000</value>
> <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. </description>
> </property>
>
> In hdfs-site.xml I have
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/data/hadoop/tmp/hadoop-${user.name}</value>
> <description>Hadoop temporary folder</description>
> </property>
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:48 AM
> To: Kevin Burton
> Cc: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> Based on the logs your system dir is set to
>
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
>
>
> what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
>
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case it's set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompatible clusterIDs
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
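The other route Arpit describes above, a world-writable /tmp on HDFS followed by a daemon restart, might look like the following. The service names are guesses inferred from the hadoop-0.20-mapreduce log path in this thread; check your packaging:

```shell
# As the HDFS superuser: make /tmp on HDFS behave like local /tmp
# (world-writable with the sticky bit), then let 'mapred' own its subtree.
sudo -u hdfs hadoop fs -chmod 1777 /tmp
sudo -u hdfs hadoop fs -chown -R mapred /tmp/mapred
# Restart the MR1 daemons so the JobTracker re-checks mapred.system.dir.
sudo service hadoop-0.20-mapreduce-jobtracker restart    # name assumed from log path
sudo service hadoop-0.20-mapreduce-tasktracker restart   # on each worker node
```

These commands only make sense against a running cluster; the sticky bit (the trailing "t" in drwxrwxrwt) matches the /tmp listings shown earlier in the thread.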
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user? "ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
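The AccessControlException repeated through these logs comes from a POSIX-style check: exactly one permission class applies (owner, else group, else other), and mapred is neither the owner (hdfs) nor in supergroup, so on a 775 root it only gets the "other" bits, r-x. A minimal sketch of that selection logic (illustrative only, not Hadoop's actual code):

```python
def can_write(user, user_groups, owner, group, mode):
    """POSIX-style check: pick exactly one permission class, then test its write bit."""
    if user == owner:
        bits = (mode >> 6) & 0o7   # owner class
    elif group in user_groups:
        bits = (mode >> 3) & 0o7   # group class
    else:
        bits = mode & 0o7          # "other" class
    return bool(bits & 0o2)

# mapred is not hdfs and is not in supergroup, so 775 leaves it with r-x:
assert not can_write("mapred", ["mapred"], "hdfs", "supergroup", 0o775)
# Either chmod 777 or chown to mapred would grant the write:
assert can_write("mapred", ["mapred"], "hdfs", "supergroup", 0o777)
assert can_write("mapred", ["mapred"], "mapred", "supergroup", 0o775)
```

This is why relaxing "/" from 755 to 775 changed nothing: mapred would have to be a member of supergroup before the group bits mattered.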
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
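As a side note on reading the drwxrwxr-x / 775 notation above: the same mode can be inspected on a local scratch directory with ordinary POSIX calls (illustrative only; nothing HDFS-specific here):

```python
import os
import stat
import tempfile

# Give a throwaway local directory the same 775 mode as "/" on HDFS above.
demo = tempfile.mkdtemp()
os.chmod(demo, 0o775)

mode = stat.S_IMODE(os.stat(demo).st_mode)
print(oct(mode))  # 0o775 -> owner rwx, group rwx, other r-x
os.rmdir(demo)
```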
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
<http://hortonworks.com/> http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <
<ma...@charter.net> rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@ <http://charter.net> charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [ <ma...@gmail.com>
mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
<mailto:kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$>
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto: <ma...@gmail.com>
dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir (
<hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system>
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
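The second option can be made deterministic by setting mapred.system.dir explicitly instead of letting it default under hadoop.tmp.dir. A minimal mapred-site.xml sketch, assuming MRv1 (JobTracker/TaskTracker); the /mapred/system path here is illustrative, not taken from this thread:

```xml
<!-- mapred-site.xml: pin the JobTracker's system directory explicitly -->
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
  <description>HDFS directory where MapReduce keeps control files;
  it must be owned by the mapred user.</description>
</property>
```

The matching HDFS setup would be to create /mapred as the hdfs superuser, chown it to mapred, and then restart the JobTracker and TaskTrackers.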
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
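The "775 didn't help" symptom is consistent with how HDFS evaluates permissions: owner bits apply only to the owner, group bits only to members of the inode's group, and everyone else falls through to the "other" bits. A minimal Python sketch of that check (illustrative only; the user and group names mirror the thread, and the group memberships are hypothetical):

```python
# Sketch of POSIX-style permission evaluation as HDFS applies it:
# owner bits if the user owns the inode, group bits if the user is
# in the inode's group, otherwise the "other" bits.

def can_write(user, user_groups, owner, group, mode):
    """Return True if `user` may write, given an octal `mode` such as 0o775."""
    if user == owner:
        bits = (mode >> 6) & 0o7      # owner rwx bits
    elif group in user_groups:
        bits = (mode >> 3) & 0o7      # group rwx bits
    else:
        bits = mode & 0o7             # "other" rwx bits
    return bool(bits & 0o2)           # test the write bit

# "/" is hdfs:supergroup with mode 775, as in the thread.
# If mapred is NOT a member of supergroup, it falls through to other (r-x):
print(can_write("mapred", ["mapred", "hadoop"], "hdfs", "supergroup", 0o775))      # False
# If mapred were in supergroup, the group write bit would apply:
print(can_write("mapred", ["mapred", "supergroup"], "hdfs", "supergroup", 0o775))  # True
```

So group write on "/" only helps if the mapred account is actually in the supergroup group as seen by the NameNode; otherwise chowning the MapReduce system directory to mapred is the usual fix.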
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
In core-site.xml I have:
<property>
<name>fs.default.name</name>
<value>hdfs://devubuntu05:9000</value>
<description>The name of the default file system. A URI whose scheme and
authority determine the FileSystem implementation. </description>
</property>
In hdfs-site.xml I have
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop/tmp/hadoop-${user.name}</value>
<description>Hadoop temporary folder</description>
</property>
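The ${user.name} placeholder in hadoop.tmp.dir explains the path in the JobTracker log: each daemon substitutes its own account name, and the MRv1 default mapred.system.dir resolves to ${hadoop.tmp.dir}/mapred/system. A rough sketch of that substitution (illustrative; the real logic lives in org.apache.hadoop.conf.Configuration):

```python
# Illustrative sketch of Hadoop's ${var} substitution in configuration
# values; not Hadoop's actual code.
import re

def expand(value, props):
    """Replace ${key} references in `value` using the `props` dict."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: props[m.group(1)], value)

# The JobTracker runs as user "mapred", so:
tmp_dir = expand("/data/hadoop/tmp/hadoop-${user.name}", {"user.name": "mapred"})
system_dir = tmp_dir + "/mapred/system"   # MRv1 default: ${hadoop.tmp.dir}/mapred/system
print(system_dir)  # /data/hadoop/tmp/hadoop-mapred/mapred/system
```

That is exactly the hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system path in the error, which is why the directory has to be writable by the mapred user even though /tmp itself is wide open.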
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:48 AM
To: Kevin Burton
Cc: user@hadoop.apache.org
Subject: Re: Permission problem
Based on the logs your system dir is set to
hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net>
wrote:
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because
of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rkevinburton@charter.net> wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
Based on the logs your system dir is set to
> hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system
what is your fs.default.name and hadoop.tmp.dir in core-site.xml set to?
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 7:39 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
> Thank you.
>
> mapred.system.dir is not set. I am guessing that it is whatever the default is. What should I set it to?
>
> /tmp is already 777
>
> kevin@devUbuntu05:~$ hadoop fs -ls /tmp
> Found 1 items
> drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
> kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
> Found 1 items
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> But notice that the mapred folder in the /tmp folder is 755.
> So I changed it:
>
> kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
> drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
>
> kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
> drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred/system
>
> I still get the errors in the log file:
>
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system) because of permissions.
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
> 2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
> . . . . .
> org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . .
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> 2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
> . . . . . .
>
>
> From: Arpit Gupta [mailto:arpit@hortonworks.com]
> Sent: Tuesday, April 30, 2013 9:25 AM
> To: user@hadoop.apache.org
> Subject: Re: Permission problem
>
> what is your mapred.system.dir set to in mapred-site.xml?
>
> By default it will write to /tmp on hdfs.
>
> So you can do the following
>
> create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
>
> In case its set to /mapred/something then create /mapred and chown it to user mapred.
>
>
> --
> Arpit Gupta
> Hortonworks Inc.
> http://hortonworks.com/
>
> On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
>
>
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user?"ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@ <http://charter.net> charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [ <ma...@gmail.com>
mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <
<ma...@charter.net> rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
<mailto:kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$>
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto: <ma...@gmail.com>
dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: <ma...@hadoop.apache.org> user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
<https://mtariq.jux.com/> https://mtariq.jux.com/
<http://cloudfront.blogspot.com> cloudfront.blogspot.com
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
Thank you.
mapred.system.dir is not set. I am guessing that it is whatever the default
is. What should I set it to?
/tmp is already 777
kevin@devUbuntu05:~$ hadoop fs -ls /tmp
Found 1 items
drwxr-xr-x - hdfs supergroup 0 2013-04-29 15:45 /tmp/mapred
kevin@devUbuntu05:~$ hadoop fs -ls -d /tmp
Found 1 items
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
But notice that the mapred folder in the /tmp folder is 755.
So I changed it:
kevin@devUbuntu05 $ hadoop fs -ls -d /tmp
drwxrwxrwt - hdfs supergroup 0 2013-04-29 15:45 /tmp
kevin@devUbuntu05 $ hadoop fs -ls -R /tmp
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45 /tmp/mapred
drwxr-xr-x - mapred supergroup 0 2013-04-29 15:45
/tmp/mapred/system
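For reference, the ownership change visible between the two listings (hdfs to mapred on /tmp/mapred) corresponds to something like the following, run as the HDFS superuser (a sketch; the exact sudo setup varies by install):

```shell
# Hand /tmp/mapred (and everything under it) to the mapred user
sudo -u hdfs hadoop fs -chown -R mapred /tmp/mapred
```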
I still get the errors in the log file:
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Failed to
operate on mapred.system.dir
(hdfs://devubuntu05:9000/data/hadoop/tmp/hadoop-mapred/mapred/system)
because of permissions.
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: This
directory should be owned by the user 'mapred (auth:SIMPLE)'
2013-04-30 09:35:11,609 WARN org.apache.hadoop.mapred.JobTracker: Bailing
out ...
. . . . .
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . .
Caused by:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessContr
olException): Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxrwxr-x
2013-04-30 09:35:11,610 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
. . . . . .
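Given the mapred.system.dir shown in the log above, one common fix is to pre-create that path and hand it to mapred, so the JobTracker never needs write access to "/". A sketch, assuming 'hdfs' is the HDFS superuser and a CDH-style 0.20 MRv1 service name (adjust both to your install):

```shell
# Create the system dir and give it to mapred (path taken from the log above)
sudo -u hdfs hadoop fs -mkdir /data/hadoop/tmp/hadoop-mapred/mapred/system
sudo -u hdfs hadoop fs -chown -R mapred /data/hadoop/tmp/hadoop-mapred/mapred
# Restart the JobTracker so it re-checks the directory
sudo service hadoop-0.20-mapreduce-jobtracker restart
```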
From: Arpit Gupta [mailto:arpit@hortonworks.com]
Sent: Tuesday, April 30, 2013 9:25 AM
To: user@hadoop.apache.org
Subject: Re: Permission problem
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart
jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to
user mapred.
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net>
wrote:
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user? "ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rkevinburton@charter.net> wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
Re: Permission problem
Posted by Arpit Gupta <ar...@hortonworks.com>.
what is your mapred.system.dir set to in mapred-site.xml?
By default it will write to /tmp on hdfs.
So you can do the following
create /tmp on hdfs and chmod it to 777 as user hdfs and then restart jobtracker and tasktrackers.
In case its set to /mapred/something then create /mapred and chown it to user mapred.
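If you do pin it down explicitly, the property goes in mapred-site.xml; a minimal fragment (the /mapred/system value here is only an illustrative choice, not a required path):

```xml
<property>
  <name>mapred.system.dir</name>
  <value>/mapred/system</value>
</property>
```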
--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/
On Apr 30, 2013, at 6:36 AM, "Kevin Burton" <rk...@charter.net> wrote:
> To further complicate the issue the log file in (/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is owned by mapred:mapred and the name of the file seems to indicate some other lineage (hadoop,hadoop). I am out of my league in understanding the permission structure for hadoop hdfs and mr. Ideas?
>
> From: Kevin Burton [mailto:rkevinburton@charter.net]
> Sent: Tuesday, April 30, 2013 8:31 AM
> To: user@hadoop.apache.org
> Cc: 'Mohammad Tariq'
> Subject: RE: Permission problem
>
> That is what I perceive as the problem. The hdfs file system was created with the user ‘hdfs’ owning the root (‘/’) but for some reason with a M/R job the user ‘mapred’ needs to have write permission to the root. I don’t know how to satisfy both conditions. That is one reason that I relaxed the permission to 775 so that the group would also have write permission but that didn’t seem to help.
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Tuesday, April 30, 2013 8:20 AM
> To: Kevin Burton
> Subject: Re: Permission problem
>
> user?"ls" shows "hdfs" and the log says "mapred"..
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net> wrote:
> I have relaxed it even further so now it is 775
>
> kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
> Found 1 items
> drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
>
> But I still get this error:
>
> 2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
>
>
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: Monday, April 29, 2013 5:10 PM
> To: user@hadoop.apache.org
> Subject: Re: Incompartible cluserIDS
>
> make it 755.
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
>
RE: Permission problem
Posted by Kevin Burton <rk...@charter.net>.
To further complicate the issue the log file in
(/var/log/hadoop-0.20-mapreduce/hadoop-hadoop-jobtracker-devUbuntu05.log) is
owned by mapred:mapred and the name of the file seems to indicate some other
lineage (hadoop,hadoop). I am out of my league in understanding the
permission structure for hadoop hdfs and mr. Ideas?
From: Kevin Burton [mailto:rkevinburton@charter.net]
Sent: Tuesday, April 30, 2013 8:31 AM
To: user@hadoop.apache.org
Cc: 'Mohammad Tariq'
Subject: RE: Permission problem
That is what I perceive as the problem. The hdfs file system was created
with the user 'hdfs' owning the root ('/') but for some reason with a M/R
job the user 'mapred' needs to have write permission to the root. I don't
know how to satisfy both conditions. That is one reason that I relaxed the
permission to 775 so that the group would also have write permission but
that didn't seem to help.
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Tuesday, April 30, 2013 8:20 AM
To: Kevin Burton
Subject: Re: Permission problem
user?"ls" shows "hdfs" and the log says "mapred"..
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Tue, Apr 30, 2013 at 6:22 PM, Kevin Burton <rk...@charter.net>
wrote:
I have relaxed it even further so now it is 775
kevin@devUbuntu05:/var/log/hadoop-0.20-mapreduce$ hadoop fs -ls -d /
Found 1 items
drwxrwxr-x - hdfs supergroup 0 2013-04-29 15:43 /
But I still get this error:
2013-04-30 07:43:02,520 FATAL org.apache.hadoop.mapred.JobTracker:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxrwxr-x
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: Monday, April 29, 2013 5:10 PM
To: user@hadoop.apache.org
Subject: Re: Incompartible cluserIDS
make it 755.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com