Posted to common-user@hadoop.apache.org by Steve Loughran <st...@apache.org> on 2009/06/03 13:50:03 UTC

Re: Hadoop ReInitialization.

b wrote:

> But after formatting and starting DFS I need to wait some time (sleep 60)
> before putting data into HDFS; otherwise I receive
> "NotReplicatedYetException".

That means the namenode is up, but not enough datanodes have reported in yet.

Re: Hadoop ReInitialization.

Posted by Aaron Kimball <aa...@cloudera.com>.
You can block for safemode exit by running 'hadoop dfsadmin -safemode wait'
rather than sleeping for an arbitrary amount of time.
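The sequence Aaron describes might look like the following script. This is a sketch, not a definitive recipe: it assumes the Hadoop CLI of that era (circa 0.18-0.20) is on PATH, and the target path `/data` is illustrative.

```shell
#!/usr/bin/env bash
# Sketch: re-initialize HDFS and block until it is writable,
# instead of sleeping an arbitrary 60 seconds.
set -e

hadoop namenode -format      # wipe and re-create the namespace (prompts Y/N)
start-dfs.sh                 # start the namenode and datanodes

# Block until the namenode leaves safemode, i.e. until enough datanodes
# have reported their blocks. Returns as soon as the filesystem is writable.
hadoop dfsadmin -safemode wait

hadoop fs -put localfile /data/localfile
```

The key difference from a fixed sleep is that `-safemode wait` returns as soon as the cluster is actually ready, whether that takes two seconds or two minutes.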

More generally, I'm a bit confused about what you mean by all this. Hadoop
daemons may individually crash, but you should never need to reformat HDFS and
start from scratch. If you're doing this, it probably means you're keeping
important Hadoop files in a temp directory that's being cleaned out, or
something similar. Are dfs.data.dir and dfs.name.dir suitably well-protected
from tmpwatch or other such "housekeeping" programs?
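Aaron's question matters because, if left unset, both properties default to
subdirectories of hadoop.tmp.dir, which itself defaults to
/tmp/hadoop-${user.name} and is fair game for tmpwatch. A minimal
hdfs-site.xml sketch that moves them somewhere durable (the /var/lib paths
below are illustrative, not prescribed by Hadoop):

<configuration>
  <!-- Where the namenode keeps the filesystem image and edit log. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/var/lib/hadoop/dfs/name</value>
  </property>
  <!-- Where each datanode stores its blocks. -->
  <property>
    <name>dfs.data.dir</name>
    <value>/var/lib/hadoop/dfs/data</value>
  </property>
</configuration>

With these set, restarting the daemons finds the existing namespace and
blocks, and there is no reason to reformat.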

- Aaron

On Wed, Jun 3, 2009 at 4:50 AM, Steve Loughran <st...@apache.org> wrote:

> b wrote:
>
>> But after formatting and starting DFS I need to wait some time (sleep 60)
>> before putting data into HDFS; otherwise I receive
>> "NotReplicatedYetException".
>
> That means the namenode is up, but not enough datanodes have reported in yet.
>