Posted to common-user@hadoop.apache.org by ArunKumar <ar...@gmail.com> on 2011/09/18 08:29:44 UTC

Submitting Jobs from different user to a queue in capacity scheduler

Hi !

I have set up hadoop on my machine as per
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
I am able to run applications with the capacity scheduler by submitting jobs to a
particular queue as the owner of the Hadoop install, "hduser".

I tried this from another user:
1. Configured ssh
2. Changed the hadoop extract's permissions to 777.
3. Updated $HOME/.bashrc as per the above link
4. Changed hadoop.tmp.dir permissions to 777.
5. $bin/start-all.sh gives
chown: changing ownership of `/home/hduser/hadoop203/bin/../logs': Operation
not permitted
starting namenode, logging to
/home/hduser/hadoop203/bin/../logs/hadoop-arun-namenode-arun-Presario-C500-RU914PA-ACJ.out
localhost: chown: changing ownership of
`/home/hduser/hadoop203/bin/../logs': Operation not permitted
localhost: starting datanode, logging to
/home/hduser/hadoop203/bin/../logs/hadoop-arun-datanode-arun-Presario-C500-RU914PA-ACJ.out
localhost: chown: changing ownership of
`/home/hduser/hadoop203/bin/../logs': Operation not permitted
localhost: starting secondarynamenode, logging to
/home/hduser/hadoop203/bin/../logs/hadoop-arun-secondarynamenode-arun-Presario-C500-RU914PA-ACJ.out
chown: changing ownership of `/home/hduser/hadoop203/bin/../logs': Operation
not permitted
starting jobtracker, logging to
/home/hduser/hadoop203/bin/../logs/hadoop-arun-jobtracker-arun-Presario-C500-RU914PA-ACJ.out
localhost: chown: changing ownership of
`/home/hduser/hadoop203/bin/../logs': Operation not permitted
localhost: starting tasktracker, logging to
/home/hduser/hadoop203/bin/../logs/hadoop-arun-tasktracker-arun-Presario-C500-RU914PA-ACJ.out

How can I submit jobs from other users? Any help?

Thanks,
Arun


--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3345752.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Uma Maheswara Rao G 72686 <ma...@huawei.com>.
Hello Arun,

Now we have reached Hadoop permissions ;)

If you really don't need to worry about permissions, you can disable them and proceed (dfs.permissions = false).
Otherwise, you can grant the required permissions to the user instead.

Permissions guide:
http://hadoop.apache.org/common/docs/current/hdfs_permissions_guide.html
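A sketch of the second option, run as the HDFS superuser (the user name and mode here are illustrative, not from this thread):

```shell
hduser$ hadoop fs -mkdir /user/someuser
hduser$ hadoop fs -chown someuser /user/someuser
hduser$ hadoop fs -chmod 755 /user/someuser
```

With a home directory owned by the submitting user, jobs can write their staging files without loosening permissions elsewhere.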

Regards,
Uma
----- Original Message -----
From: ArunKumar <ar...@gmail.com>
Date: Sunday, September 18, 2011 1:38 pm
Subject: Re: Submitting Jobs from different user to a queue in capacity scheduler
To: hadoop-user@lucene.apache.org

> Hi !
> 
> I have given permissions in the beginning: $ sudo chown -R
> hduser:hadoop hadoop
> I gave $ chmod -R 777 hadoop
> When i try
> arun$ /home/hduser/hadoop203/bin/hadoop jar 
> /home/hduser/hadoop203/hadoop-examples*.jar pi 1 1 
> I get
> Number of Maps  = 1
> Samples per Map = 1
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=arun, access=WRITE, inode="user":hduser:supergroup:rwxr-xr-x
> .....
> Caused by: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=arun, access=WRITE, inode="user":hduser:supergroup:rwxr-xr-x
> 	at
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
> I have attached mapred-site.xml ( http://pastebin.com/scS6EevU ) and
> capacity-scheduler.xml ( http://pastebin.com/ScGFAfv5 ) here.
> 
> Arun
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-
> user-to-a-queue-in-capacity-scheduler-tp3345752p3345838.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
> 

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by ArunKumar <ar...@gmail.com>.
Hi !

I gave rwx permissions recursively to everybody:
drwxrwxrwx   3 root root  4096 2011-09-18 23:38 app


Arun


--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3351331.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Joey Echeverria <jo...@cloudera.com>.
FYI, I'm moving this to mapreduce-user@ and bccing common-user@. 

It looks like your latest permission problem is on the local disk. What is your setting for hadoop.tmp.dir? What are the permissions on that directory?
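As an illustration of that check (the path below is a stand-in for whatever hadoop.tmp.dir is set to, not Arun's actual value):

```shell
# Stand-in for hadoop.tmp.dir; substitute the value from your core-site.xml.
TMP=/tmp/hadoop-tmp-demo
mkdir -p "$TMP"
chmod 1777 "$TMP"     # world-writable with the sticky bit, like /tmp
ls -ld "$TMP"         # first column shows drwxrwxrwt
stat -c '%a' "$TMP"   # prints 1777
```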

-Joey


On Sep 18, 2011, at 23:27, ArunKumar <ar...@gmail.com> wrote:

> Hi guys ! 
> 
> Common things done by me :
> $chmod -R 777 hadoop_extract
> $chmod -R 777 /app
> 
> @Joey
> I have created the DFS dir /user/arun, made arun the owner, and tried as below
> 1>
> arun@arun-Presario-C500-RU914PA-ACJ:/$ /usr/local/hadoop/bin/hadoop jar
> /usr/local/hadoop/hadoop-examples-0.20.203.0.jar wordcount
> -Dmapred.job.queue.name=myqueue1 /user/arun/wcin /user/arun/wcout2
> Exception in thread "main" java.io.IOException: Permission denied
>    at java.io.UnixFileSystem.createFileExclusively(Native Method)
>    at java.io.File.checkAndCreate(File.java:1704)
>    at java.io.File.createTempFile(File.java:1792)
>    at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
> 
> 2>
> arun@arun-Presario-C500-RU914PA-ACJ:/$ /usr/local/hadoop/bin/hadoop jar
> /usr/local/hadoop/hadoop-examples-0.20.203.0.jar wordcount
> -Dmapred.job.queue.name=myqueue1 /user/hduser/wcin /user/hduser/wcout2
> Exception in thread "main" java.io.IOException: Permission denied
>    at java.io.UnixFileSystem.createFileExclusively(Native Method)
>    at java.io.File.checkAndCreate(File.java:1704)
>    at java.io.File.createTempFile(File.java:1792)
>    at org.apache.hadoop.util.RunJar.main(RunJar.java:115)
> 
> @Uma
> 
> I have set the mapreduce.jobtracker.staging.root.dir property value in
> mapred-site.xml to /user and restarted the cluster, but that doesn't work.
> 
> @Aaron
> I have set the config value "dfs.permissions" to "false" in hdfs-site.xml
> and restarted.
> I get the same error as above while running the application.
> 
> Only thing left is : hadoop fs -chmod 777
> 
> Arun
> 
> 
> 
> --
> View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3347813.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.


Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by ArunKumar <ar...@gmail.com>.
Hi guys ! 

Common things done by me :
$chmod -R 777 hadoop_extract
$chmod -R 777 /app

@Joey
 I have created the DFS dir /user/arun, made arun the owner, and tried as below
 1>
arun@arun-Presario-C500-RU914PA-ACJ:/$ /usr/local/hadoop/bin/hadoop jar
/usr/local/hadoop/hadoop-examples-0.20.203.0.jar wordcount
-Dmapred.job.queue.name=myqueue1 /user/arun/wcin /user/arun/wcout2
Exception in thread "main" java.io.IOException: Permission denied
	at java.io.UnixFileSystem.createFileExclusively(Native Method)
	at java.io.File.checkAndCreate(File.java:1704)
	at java.io.File.createTempFile(File.java:1792)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:115)

2>
arun@arun-Presario-C500-RU914PA-ACJ:/$ /usr/local/hadoop/bin/hadoop jar
/usr/local/hadoop/hadoop-examples-0.20.203.0.jar wordcount
-Dmapred.job.queue.name=myqueue1 /user/hduser/wcin /user/hduser/wcout2
Exception in thread "main" java.io.IOException: Permission denied
	at java.io.UnixFileSystem.createFileExclusively(Native Method)
	at java.io.File.checkAndCreate(File.java:1704)
	at java.io.File.createTempFile(File.java:1792)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:115)

@Uma

I have set the mapreduce.jobtracker.staging.root.dir property value in
mapred-site.xml to /user and restarted the cluster, but that doesn't work.

@Aaron
I have set the config value "dfs.permissions" to "false" in hdfs-site.xml
and restarted.
I get the same error as above while running the application.

Only thing left is : hadoop fs -chmod 777

Arun

 

--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3347813.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Uma Maheswara Rao G 72686 <ma...@huawei.com>.
Agreed.
I suggested the 'dfs.permissions' flag earlier in this thread as well. :-)

Regards,
Uma

----- Original Message -----
From: "Aaron T. Myers" <at...@cloudera.com>
Date: Monday, September 19, 2011 7:45 am
Subject: Re: Submitting Jobs from different user to a queue in capacity scheduler
To: common-user@hadoop.apache.org
Cc: hadoop-user@lucene.apache.org

> On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686 <
> maheswara@huawei.com> wrote:
> 
> > or other way could be, just execute below command
> >  hadoop fs -chmod 777 /
> >
> 
> I wouldn't do this - it's overkill, and there's no way to go back.
> Instead, if you really want to disregard all permissions on HDFS,
> you can just set the config value "dfs.permissions" to "false" and
> restart your NN. This is still overkill, but at least you could roll
> back if you change your mind later. :)
> 
> --
> Aaron T. Myers
> Software Engineer, Cloudera
> 

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by "Aaron T. Myers" <at...@cloudera.com>.
On Sun, Sep 18, 2011 at 9:35 AM, Uma Maheswara Rao G 72686 <
maheswara@huawei.com> wrote:

> or other way could be, just execute below command
>  hadoop fs -chmod 777 /
>

I wouldn't do this - it's overkill, and there's no way to go back. Instead,
if you really want to disregard all permissions on HDFS, you can just set
the config value "dfs.permissions" to "false" and restart your NN. This is
still overkill, but at least you could roll back if you change your mind
later. :)

--
Aaron T. Myers
Software Engineer, Cloudera

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Uma Maheswara Rao G 72686 <ma...@huawei.com>.
Hi Arun,

Setting the mapreduce.jobtracker.staging.root.dir property value to /user might fix this issue...
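In mapred-site.xml, that suggestion would look like the fragment below (the value /user follows the suggestion above; adjust to your layout):

```xml
<property>
  <name>mapreduce.jobtracker.staging.root.dir</name>
  <value>/user</value>
</property>
```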

or other way could be, just execute below command
 hadoop fs -chmod 777 /

Regards,
Uma
----- Original Message -----
From: ArunKumar <ar...@gmail.com>
Date: Sunday, September 18, 2011 8:38 pm
Subject: Re: Submitting Jobs from different user to a queue in capacity scheduler
To: hadoop-user@lucene.apache.org

> Hi Uma !
> 
> I have deleted the data in /app/hadoop/tmp and formatted namenode and
> restarted cluster..
> I tried 
> arun$ /home/hduser/hadoop203/bin/hadoop jar
> /home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
> Number of Maps  = 1
> Samples per Map = 1
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=arun, access=WRITE, inode="":hduser:supergroup:rwxr-xr-x
> .....
> 
> Arun
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-
> user-to-a-queue-in-capacity-scheduler-tp3345752p3346364.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
> 

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Joey Echeverria <jo...@cloudera.com>.
As hduser, create the /user/arun directory in HDFS. Then change the
ownership of /user/arun to arun.
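A minimal sketch of those two steps (command forms as in Hadoop 0.20's `hadoop fs` shell, run as the HDFS superuser):

```shell
hduser$ hadoop fs -mkdir /user/arun
hduser$ hadoop fs -chown arun /user/arun
```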

-Joey
On Sep 18, 2011 8:07 AM, "ArunKumar" <ar...@gmail.com> wrote:
> Hi Uma !
>
> I have deleted the data in /app/hadoop/tmp and formatted namenode and
> restarted cluster..
> I tried
> arun$ /home/hduser/hadoop203/bin/hadoop jar
> /home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
> Number of Maps = 1
> Samples per Map = 1
> org.apache.hadoop.security.AccessControlException:
> org.apache.hadoop.security.AccessControlException: Permission denied:
> user=arun, access=WRITE, inode="":hduser:supergroup:rwxr-xr-x
> .....
>
> Arun
>
>
> --
> View this message in context:
http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3346364.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by ArunKumar <ar...@gmail.com>.
Hi Uma !

I have deleted the data in /app/hadoop/tmp, formatted the namenode, and
restarted the cluster.
I tried 
arun$ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=arun, access=WRITE, inode="":hduser:supergroup:rwxr-xr-x
.....

Arun


--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3346364.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Uma Maheswara Rao G 72686 <ma...@huawei.com>.
Hi Arun,

 Here the NameNode is in safe mode. This is not related to a permissions problem. It looks to me like you have synced some blocks and restarted the NN, so the NN will wait for those blocks to be reported before coming out of safe mode. But in your version of Hadoop, those partial blocks will not be reported again by the DN, or the DN has not sent its block reports yet (in the ideal case).
If you don't have any useful data in the cluster, can you please format once and restart again?

Hopefully that should work.
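To check safe-mode status, or to force the NN out of it instead of reformatting (commands as in Hadoop 0.20; use `-safemode leave` with care, since missing blocks stay missing):

```shell
hduser$ hadoop dfsadmin -safemode get     # reports whether safe mode is ON or OFF
hduser$ hadoop dfsadmin -safemode leave   # force-exit safe mode
```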

Regards,
Uma  



----- Original Message -----
From: ArunKumar <ar...@gmail.com>
Date: Sunday, September 18, 2011 3:37 pm
Subject: Re: Submitting Jobs from different user to a queue in capacity scheduler
To: hadoop-user@lucene.apache.org

> Hi Uma !
> 
> I have added in hdfs-site.xml the following
> <property> 
>  <name>dfs.permissions</name> 
>  <value>false</value> 
> </property>
> and restarted the cluster.
> I tried :
> arun@arun-Presario-C500-RU914PA-ACJ:~$ 
> /home/hduser/hadoop203/bin/hadoop jar
> /home/hduser/hadoop203/hadoop-examples*.jar pi 1 1 
> Number of Maps  = 1
> Samples per Map = 1
> org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create
> directory /user/arun/PiEstimator_TMP_3_141592654/in. Name node is in safe
> mode.
> The ratio of reported blocks 0.0000 has not reached the threshold
> 0.9990. Safe mode will be turned off automatically.
> 	at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:1912)
> 	at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:1886)
>        ....................
> How do I set HDFS permissions for a particular user?
> 
> Arun
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-
> user-to-a-queue-in-capacity-scheduler-tp3345752p3345973.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
> 

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by ArunKumar <ar...@gmail.com>.
Hi Uma !

I have added in hdfs-site.xml the following
<property> 
  <name>dfs.permissions</name> 
  <value>false</value> 
</property>
and restarted the cluster.
I tried :
arun@arun-Presario-C500-RU914PA-ACJ:~$ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1 
Number of Maps  = 1
Samples per Map = 1
org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create
directory /user/arun/PiEstimator_TMP_3_141592654/in. Name node is in safe
mode.
The ratio of reported blocks 0.0000 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
	at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:1912)
	at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:1886)
        ....................
How do I set HDFS permissions for a particular user?

Arun


--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3345973.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by ArunKumar <ar...@gmail.com>.
Hi !

I have given permissions in the beginning: $ sudo chown -R hduser:hadoop
hadoop
I gave $ chmod -R 777 hadoop
When I try
arun$ /home/hduser/hadoop203/bin/hadoop jar 
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1 
I get
Number of Maps  = 1
Samples per Map = 1
org.apache.hadoop.security.AccessControlException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=arun, access=WRITE, inode="user":hduser:supergroup:rwxr-xr-x
.....
Caused by: org.apache.hadoop.ipc.RemoteException:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=arun, access=WRITE, inode="user":hduser:supergroup:rwxr-xr-x
	at
org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:199)
I have attached mapred-site.xml ( http://pastebin.com/scS6EevU ) and
capacity-scheduler.xml ( http://pastebin.com/ScGFAfv5 ) here.

Arun


--
View this message in context: http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-user-to-a-queue-in-capacity-scheduler-tp3345752p3345838.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Harsh J <ha...@cloudera.com>.
Hello Arun,

On Sun, Sep 18, 2011 at 11:59 AM, ArunKumar <ar...@gmail.com> wrote:
> Hi !
>
> I have set up hadoop on my machine as per
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
> I am able to run applications with the capacity scheduler by submitting jobs to a
> particular queue as the owner of the Hadoop install, "hduser".

You don't need to run Hadoop daemons as a separate user to run jobs
from them. Run Hadoop from 'hduser' and submit jobs from any other
user just like normal.

As another user, try running a sample job this way:

hduser $ start-all.sh
hduser $ su - other
other $ /home/hduser/hadoop203/bin/hadoop jar
/home/hduser/hadoop203/hadoop-examples*.jar pi 1 1

What's the error you get if the above fails? (Ensuring your 'other'
user has proper permissions to utilize /home/hduser/hadoop203). Can
you also paste in your mapreduce + capacity scheduler configurations
if things fail as user 'other'?

> I tried this from another user:
> 1. Configured ssh
> 2. Changed the hadoop extract's permissions to 777.
> 3. Updated $HOME/.bashrc as per the above link
> 4. Changed hadoop.tmp.dir permissions to 777.
> 5. $bin/start-all.sh gives

Unsure why you want to start Hadoop as another user. I don't think
that's your goal - you're looking to submit jobs as another user, if I
understand right. In that case, you don't have to start the daemons as a
new user either.


-- 
Harsh J

Re: Submitting Jobs from different user to a queue in capacity scheduler

Posted by Uma Maheswara Rao G 72686 <ma...@huawei.com>.
Did you give permissions recursively?
$ sudo chown -R hduser:hadoop hadoop

Regards,
Uma
----- Original Message -----
From: ArunKumar <ar...@gmail.com>
Date: Sunday, September 18, 2011 12:00 pm
Subject: Submitting Jobs from different user to a queue in capacity scheduler
To: hadoop-user@lucene.apache.org

> Hi !
> 
> I have set up hadoop on my machine as per
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-
> linux-single-node-cluster/
> I am able to run applications with the capacity scheduler by submitting
> jobs to a
> particular queue as the owner of the Hadoop install, "hduser".
> 
> I tried this from another user:
> 1. Configured ssh
> 2. Changed the hadoop extract's permissions to 777.
> 3. Updated $HOME/.bashrc as per the above link
> 4. Changed hadoop.tmp.dir permissions to 777.
> 5. $bin/start-all.sh gives
> chown: changing ownership of `/home/hduser/hadoop203/bin/../logs':
> Operation not permitted
> starting namenode, logging to
> /home/hduser/hadoop203/bin/../logs/hadoop-arun-namenode-arun-
> Presario-C500-RU914PA-ACJ.out
> localhost: chown: changing ownership of
> `/home/hduser/hadoop203/bin/../logs': Operation not permitted
> localhost: starting datanode, logging to
> /home/hduser/hadoop203/bin/../logs/hadoop-arun-datanode-arun-
> Presario-C500-RU914PA-ACJ.out
> localhost: chown: changing ownership of
> `/home/hduser/hadoop203/bin/../logs': Operation not permitted
> localhost: starting secondarynamenode, logging to
> /home/hduser/hadoop203/bin/../logs/hadoop-arun-secondarynamenode-
> arun-Presario-C500-RU914PA-ACJ.out
> chown: changing ownership of `/home/hduser/hadoop203/bin/../logs':
> Operation not permitted
> starting jobtracker, logging to
> /home/hduser/hadoop203/bin/../logs/hadoop-arun-jobtracker-arun-
> Presario-C500-RU914PA-ACJ.out
> localhost: chown: changing ownership of
> `/home/hduser/hadoop203/bin/../logs': Operation not permitted
> localhost: starting tasktracker, logging to
> /home/hduser/hadoop203/bin/../logs/hadoop-arun-tasktracker-arun-
> Presario-C500-RU914PA-ACJ.out
> 
> How can I submit jobs from other users? Any help?
> 
> Thanks,
> Arun
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Submitting-Jobs-from-different-
> user-to-a-queue-in-capacity-scheduler-tp3345752p3345752.html
> Sent from the Hadoop lucene-users mailing list archive at Nabble.com.
>