Posted to user@ambari.apache.org by Aaron Zimmerman <az...@sproutsocial.com> on 2014/01/02 20:23:27 UTC

bootstrap hdfs

I had to reformat the namenode under an Ambari-managed cluster, and HDFS
came back online no problem, but the various dependent services fail,
seemingly because of missing directories such as /user/..., /etc, and
/tmp.  Is there a way to get Ambari to recreate these various directories,
permissions, etc.?

If not, is there a guide somewhere that could help me recreate it manually?
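For the manual route, a minimal sketch that emits the `hadoop fs` commands to recreate the service directories. The paths, owners, groups, and modes below are assumptions based on typical HDP-era defaults, not Ambari's authoritative list; verify each entry against your cluster's configuration before running anything:

```python
# Sketch: generate the commands to recreate the HDFS directories that
# Ambari normally bootstraps. Every path/owner/mode here is an assumed
# HDP-era default -- adjust to match your cluster before running.
BOOTSTRAP_DIRS = [
    # (hdfs_path, owner, group, mode)
    ("/tmp", "hdfs", "hdfs", "777"),
    ("/user", "hdfs", "hdfs", "755"),
    ("/user/oozie", "oozie", "hadoop", "775"),
    ("/user/hive", "hive", "hdfs", "700"),
    ("/mapred", "mapred", "hadoop", "755"),
]

def bootstrap_commands(dirs):
    """Emit mkdir/chown/chmod commands to be run as the hdfs superuser."""
    cmds = []
    for path, owner, group, mode in dirs:
        cmds.append("hadoop fs -mkdir %s" % path)
        cmds.append("hadoop fs -chown %s:%s %s" % (owner, group, path))
        cmds.append("hadoop fs -chmod %s %s" % (mode, path))
    return cmds

if __name__ == "__main__":
    for cmd in bootstrap_commands(BOOTSTRAP_DIRS):
        print("sudo -u hdfs %s" % cmd)
```

Printing the commands rather than executing them lets you review the list before touching a freshly reformatted filesystem.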

Thanks,

Aaron Zimmerman

Re: bootstrap hdfs

Posted by Aaron Zimmerman <az...@sproutsocial.com>.
From nothing running, I tried to start all services.  I see errors such as
this in the namenode log:

2014-01-06 12:45:01,546 ERROR security.UserGroupInformation
(UserGroupInformation.java:doAs(1494)) - PriviledgedActionException
as:mapred (auth:SIMPLE)
cause:org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
2014-01-06 12:45:01,546 INFO  ipc.Server (Server.java:run(2073)) - IPC
Server handler 13 on 8020, call
org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from
192.168.101.0:57275 Call#1 Retry#0: error:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=mapred, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x

So some permissions-setup step is not being run?
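The denial in the log follows directly from the inode shown: "/" is owned by hdfs:hdfs with mode drwxr-xr-x, so user mapred falls into the "other" class, which has no write bit. A simplified model of the HDFS POSIX check (no superuser, sticky bit, or ACL handling) reproduces it:

```python
def can_access(user, groups, owner, group, mode, perm):
    """Simplified HDFS-style permission check.
    mode is an octal int like 0o755; perm is 'r', 'w', or 'x'."""
    bit = {"r": 4, "w": 2, "x": 1}[perm]
    if user == owner:
        cls = (mode >> 6) & 7   # owner bits
    elif group in groups:
        cls = (mode >> 3) & 7   # group bits
    else:
        cls = mode & 7          # "other" bits
    return bool(cls & bit)

# mapred is neither the owner nor in group hdfs, and "/" is 0755,
# so WRITE is denied -- matching the AccessControlException above.
print(can_access("mapred", ["mapred"], "hdfs", "hdfs", 0o755, "w"))  # False
```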

The actual error that triggers the overall failure is in starting the Oozie
server:

e -c 'echo 0; hadoop dfs -put /usr/lib/oozie/share /user/oozie ; hadoop dfs -chmod -R 755 /user/oozie/share']/returns: put: `/user/oozie': No such file or directory
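That `put` fails because its destination's parent directory does not exist on the freshly reformatted filesystem. A small hypothetical helper makes the dependency explicit by computing which ancestors would have to be created first:

```python
import posixpath

def missing_parents(target, existing):
    """Return the ancestor directories of `target` (shallowest first)
    that are not in `existing` -- i.e. what must be created before a
    put to `target` can succeed. `existing` is a set of paths."""
    missing = []
    d = posixpath.dirname(target)
    while d not in ("", "/"):
        if d not in existing:
            missing.append(d)
        d = posixpath.dirname(d)
    missing.reverse()
    return missing

# On a just-reformatted HDFS only "/" exists, so landing the Oozie
# sharelib at /user/oozie/share needs /user and /user/oozie first.
print(missing_parents("/user/oozie/share", {"/"}))  # ['/user', '/user/oozie']
```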


But HDFS starts successfully and I can open the NameNode UI; there's just
nothing in HDFS.
YARN also starts, but all of the NodeManagers are dead.
Hive, Ganglia, and ZooKeeper are all running as well, but everything else
is not.


Re: bootstrap hdfs

Posted by Hitesh Shah <hi...@apache.org>.
It might be that you need to clean up the directories on both the namenode and the datanodes, especially if you triggered a re-format of HDFS (caused by deleting "/var/run/hadoop/hdfs/namenode-formatted" on the NN).

@Siddharth, can you confirm the above?

thanks
-- Hitesh
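The cleanup Hitesh suggests needs the local storage paths, which live in hdfs-site.xml. A sketch that pulls them out, using the Hadoop 1.x property names (dfs.name.dir / dfs.data.dir; Hadoop 2 renamed these to dfs.namenode.name.dir / dfs.datanode.data.dir), with sample values that are illustrative only:

```python
# Sketch: extract the local NN/DN storage directories from
# hdfs-site.xml so you know what to wipe after a re-format.
import xml.etree.ElementTree as ET

def storage_dirs(hdfs_site_xml, keys=("dfs.name.dir", "dfs.data.dir")):
    """Map each requested property name to its list of directories."""
    dirs = {}
    root = ET.fromstring(hdfs_site_xml)
    for prop in root.findall("property"):
        name = prop.findtext("name")
        if name in keys:
            dirs[name] = [d.strip() for d in prop.findtext("value").split(",")]
    return dirs

# Illustrative config fragment; real paths depend on your installation.
sample = """<configuration>
  <property><name>dfs.name.dir</name><value>/hadoop/hdfs/namenode</value></property>
  <property><name>dfs.data.dir</name><value>/hadoop/hdfs/data1,/hadoop/hdfs/data2</value></property>
</configuration>"""
print(storage_dirs(sample))
```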
 


Re: bootstrap hdfs

Posted by Siddharth Wagle <sw...@hortonworks.com>.
Hi Aaron,

The directories in HDFS that you noticed on first installation are all
created after the Namenode is started by Ambari.
On every restart of the NN, the Ambari agent code will check for the
directories and create them if not already present.

Could you provide the error message that you are getting in the web UI?

Also, make sure to check the Namenode logs
(/var/log/hadoop/hdfs/*-namenode.log).
There is a possibility that the Namenode comes up and is shut down because
of some error.
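The check-and-create behaviour described above can be sketched as an idempotent loop. This models the filesystem as a plain dict; the real agent shells out to "hadoop fs" instead, and the example paths are assumptions:

```python
# Sketch of check-and-create on NN start: ensure each required
# directory exists, creating only what is missing, so repeated
# restarts are harmless. A dict stands in for HDFS here.
def ensure_dirs(fs, required):
    """fs: dict path -> (owner, mode). required: list of
    (path, owner, mode). Returns the paths that had to be created."""
    created = []
    for path, owner, mode in required:
        if path not in fs:           # only create what is missing
            fs[path] = (owner, mode)
            created.append(path)
    return created

fs = {"/tmp": ("hdfs", "777")}       # /tmp survived; /user/oozie did not
required = [("/tmp", "hdfs", "777"), ("/user/oozie", "oozie", "775")]
print(ensure_dirs(fs, required))     # first run creates ['/user/oozie']
print(ensure_dirs(fs, required))     # second run: nothing to do, []
```

Idempotence is the point: the agent can run this on every NN restart without clobbering directories that already exist.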

Best Regards,
Sid



-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: bootstrap hdfs

Posted by Aaron Zimmerman <az...@sproutsocial.com>.
Thanks for the reply Siddharth,

I tried it and the namenode came back (I had to delete the local data
directories as well).

But the other services did not work.  Specifically, it was dying on the
Oozie server install; it appears to be trying to create HDFS directories
without success.  Does the /var/run directory serve as markers for
initialization?  If so, perhaps deleting the subdir of each service will
reinitialize all components?
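Before deleting anything under /var/run wholesale, a cautious first step is to list the candidate stub files. The "-formatted" suffix below is generalized from the single stub named in this thread (namenode-formatted); other services may use different marker names, so this is a hypothetical inventory helper, not a documented Ambari layout:

```python
# Sketch: inventory candidate marker/stub files under a run directory
# before removing any of them. The "-formatted" naming is an assumption
# based on the one stub mentioned in this thread.
import os

def find_markers(root, suffix="-formatted"):
    """Return sorted paths of files under `root` ending in `suffix`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for f in files:
            if f.endswith(suffix):
                hits.append(os.path.join(dirpath, f))
    return sorted(hits)
```

Inspecting the list first avoids deleting pid files or sockets that services need while running.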

There is still nothing in HDFS.  On my first installation, after installing
the software on the cluster nodes, the file system had a bunch of stuff in it.

Thanks,

Aaron Zimmerman



Re: bootstrap hdfs

Posted by Siddharth Wagle <sw...@hortonworks.com>.
Hi Aaron,

Could you check if the following stub file exists:
/var/run/hadoop/hdfs/namenode-formatted?

A restart of NN through Ambari should create all the required directories
for you.
Go ahead and delete the stub if it exists and restart the NN through Ambari
web UI.
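The stub-file protocol described above can be sketched as "format only when the marker is absent, then drop the marker so the next start skips the destructive format". The marker path is the one from this thread; the format step itself is a stand-in, not the real `hadoop namenode -format` invocation:

```python
# Sketch of the namenode-formatted stub-file logic: the marker's
# presence means "already formatted, leave the data alone".
import os

def maybe_format(marker="/var/run/hadoop/hdfs/namenode-formatted",
                 format_fn=lambda: print("hadoop namenode -format")):
    """Run format_fn once, recording completion via the marker file.
    Returns True if formatting ran, False if it was skipped."""
    if os.path.exists(marker):
        return False                 # already formatted; skip
    format_fn()                      # the real format would run here
    os.makedirs(os.path.dirname(marker), exist_ok=True)
    open(marker, "w").close()        # record that formatting happened
    return True
```

This is why deleting the stub and restarting the NN through Ambari triggers a re-format: the absence of the marker is the only signal the logic consults.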

Best Regards,
Sid


