Posted to user@hadoop.apache.org by Panshul Whisper <ou...@gmail.com> on 2014/02/15 18:56:32 UTC

Job Tracker not running, Permission denied

Hello,

I had a Cloudera Hadoop cluster running on AWS EC2 instances. I used
Cloudera Manager to set up and configure the cluster. It is a CDH 4.4.0
package.

Everything was running fine until I got a mail from Amazon saying that I
needed to re-instantiate one of my EC2 instances and create a new one, as
the old one was to be terminated. Unluckily, it was the Master Node.
So I did the following:

   1. Made an image of the running Master Node instance.
   2. Stopped the cluster services from Cloudera Manager.
   3. Terminated the existing node.
   4. Set up a new node with the AMI image of the old Master Node.
   5. Configured the same private IP address as the old Master Node, so it
   also got the same DNS address.
   6. Since the new machine was a clone of the old Master Node, I did not
   change anything.
   7. Restarted all the services from Cloudera Manager.
   8. NameNode failed to start, saying that DFS was not formatted.
   9. Formatted the DFS from Cloudera Manager, from the NameNode service
   menu.
   10. Then restarted the HDFS service with the NameNode service,
   successfully.
   11. Now all the Task Trackers are running.
   12. I can access the Master Node with the old SSH keys and the old IP
   address.
   13. But the TaskTracker on the NameNode host gives an error while
   starting up.
   14. It gives the following error:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException):
Permission denied: user=mapred, access=WRITE,
inode="/":hdfs:supergroup:drwxr-xr-x

Please help me resolve this error. Earlier I did not even have a user named
mapred; everything was running fine with the user hdfs.
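For reference, a permission error like the one above can be inspected from the shell as the hdfs superuser. This is only a diagnostic sketch, not a fix specific to this cluster:

```shell
# Inspect ownership and permissions on the HDFS root as the hdfs superuser
sudo -u hdfs hadoop fs -ls /

# The error listing shows "/" is owned by hdfs:supergroup with mode
# drwxr-xr-x (755), so only the hdfs user may write there -- which is why
# a daemon running as the mapred user is denied WRITE access.
```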

Thanking You,

-- 
Regards,
Ouch Whisper
010101010101

Re: Job Tracker not running, Permission denied

Posted by Panshul Whisper <ou...@gmail.com>.
Thanks for the reply.

I solved it. When I formatted the DFS, all the directories were deleted,
including the mapred system folder under /tmp, so there was no folder for
the JobTracker.
I simply recreated the folder /tmp/mapred/system and gave its ownership to
mapred:hadoop. This started the JobTracker.
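For anyone hitting the same thing, the fix described above amounts to roughly the following, run as the hdfs superuser. The path is the MRv1 default and comes from mapred.system.dir; it may differ on other clusters:

```shell
# Recreate the JobTracker system directory lost when DFS was formatted
sudo -u hdfs hadoop fs -mkdir -p /tmp/mapred/system

# Hand it to the mapred user and the hadoop group so the JobTracker can write to it
sudo -u hdfs hadoop fs -chown -R mapred:hadoop /tmp/mapred
```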

But now I am stuck on another problem.
When I submit a Hive query from Hue, it gives the following error:

Job Submission failed with exception
'org.apache.hadoop.security.AccessControlException(Permission denied:
user=hueuser, access=EXECUTE,
inode="/mapred":ubuntu:supergroup:drwx------


I have the user directory in place for the Hue user in /user/admin, owned
by admin:admin.

What am I missing?
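Note that this second error is about /mapred, not the user directory: the listing shows it owned by ubuntu:supergroup with mode drwx------ (700), so no other user can even traverse it. One way to loosen it is sketched below; 755 is only an illustration, and how permissive /mapred should be depends on your security policy:

```shell
# /mapred is drwx------ (700), so hueuser is denied EXECUTE (traverse).
# Allow other users to traverse and list the directory:
sudo -u hdfs hadoop fs -chmod 755 /mapred

# Alternatively, reassign ownership to the mapred user, matching the
# system directory fixed earlier (assumption -- verify against your setup):
sudo -u hdfs hadoop fs -chown mapred:hadoop /mapred
```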

Please help me resolve this.

Thanking You,

Regards,


On Sat, Feb 15, 2014 at 8:14 PM, Peyman Mohajerian <mo...@gmail.com> wrote:

> Maybe you just have to add 'mapred' user to the group that owns the 'hdfs'
> root directory, it seems the group name is called: 'hdfs'.


-- 
Regards,
Ouch Whisper
010101010101


Re: Job Tracker not running, Permission denied

Posted by Peyman Mohajerian <mo...@gmail.com>.
Maybe you just have to add the 'mapred' user to the group that owns the
HDFS root directory; it seems the group name is 'hdfs'.
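That suggestion would look roughly like the sketch below. Note the error listing actually shows "/" as hdfs:supergroup, so 'supergroup' (not 'hdfs') may be the group to target; verify on your own cluster before running anything:

```shell
# On the NameNode host: add the mapred OS user to the group owning "/".
# By default HDFS resolves groups from the NameNode's local OS groups.
sudo usermod -a -G supergroup mapred

# Confirm that mapred now resolves into that group
id mapred
```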


