Posted to common-user@hadoop.apache.org by Amogh Vasekar <am...@yahoo-inc.com> on 2010/01/19 06:17:34 UTC

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

Hi,
When the NN is in safe mode, you get a read-only view of the Hadoop file system (the NN is still reconstructing its image of the FS).
Use "hadoop dfsadmin -safemode get" to check whether the NN is in safe mode.
"hadoop dfsadmin -safemode leave" forces it out of safe mode, or "hadoop dfsadmin -safemode wait" blocks until the NN leaves safe mode on its own.

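A cleanup step in a batch script could then look something like this (a minimal, untested sketch; /op is just the output path from your mail):

  hadoop dfsadmin -safemode wait
  hadoop fs -rmr /op

The wait blocks until the NN has left safe mode, so the rmr that follows no longer hits the SafeModeException. If the directory may not exist on the first run, append "|| true" to the rmr line so the script does not abort there.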
Amogh


On 1/19/10 10:31 AM, "prasenjit mukherjee" <pr...@gmail.com> wrote:

Hmmm. I am actually running it from a batch file. Is "hadoop fs -rmr"
less stable than Pig's rm or Hadoop's FileSystem API?

Let me try your suggestion by writing a cleanup script in pig.

-Thanks,
Prasen

On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi <re...@yahoo-inc.com> wrote:
> Can you try with dfs / without quotes? If you are using Pig to run the jobs, you can use rmf within your script (again without quotes) to force the remove and avoid an error if the file/dir is not present. Or, if you are doing this inside a Hadoop job, you can use the FileSystem/FileStatus APIs to delete the directories. HTH.
> Cheers,
> /R
>
> On 1/19/10 10:15 AM, "prasenjit mukherjee" <pr...@gmail.com> wrote:
>
> "hadoop fs -rmr /op"
>
> That command always fails. I am trying to run sequential Hadoop jobs.
> After the first run, all subsequent runs fail while cleaning up (i.e.
> removing the dir created by the previous run). What can I do to
> avoid this?
>
> Here is my hadoop version:
> # hadoop version
> Hadoop 0.20.0
> Subversion https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20
> -r 763504
> Compiled by ndaley on Thu Apr  9 05:18:40 UTC 2009
>
> Any help is greatly appreciated.
>
> -Prasen
>
>


Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

Posted by Amogh Vasekar <am...@yahoo-inc.com>.
Hi,
Glad to know it helped.
If you need to get your cluster up and running quickly, you can lower the parameter dfs.safemode.threshold.pct (the name in 0.20). If you set it to 0, the NN will not wait in safe mode for block reports on startup.
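In hdfs-site.xml that would be something like the following (a sketch using the 0.20 property name; double-check it against the hdfs-default.xml that ships with your release):

  <property>
    <name>dfs.safemode.threshold.pct</name>
    <value>0</value>
  </property>

With the threshold at 0 the NN will not wait for any block reports before leaving safe mode, so this is more a convenience for small test clusters than something to run in production.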

Amogh


On 1/19/10 12:39 PM, "prasenjit mukherjee" <pm...@quattrowireless.com> wrote:

That was exactly the reason. Thanks  a bunch.


Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

Posted by prasenjit mukherjee <pm...@quattrowireless.com>.
That was exactly the reason. Thanks  a bunch.

On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu <ma...@gmail.com> wrote:
> This is the point. The namenode enters safe mode on startup to gather
> metadata about the files and blocks, and then switches to normal mode. The
> time spent in safe mode depends on the amount of data in your HDFS.

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

Posted by Mafish Liu <ma...@gmail.com>.
2010/1/19 prasenjit mukherjee <pm...@quattrowireless.com>:
> I run "hadoop fs -rmr .." immediately after start-all.sh. Does the
> namenode always start in safe mode and then switch to normal mode after
> some time? If that is the problem, then your suggestion of waiting might
> work. Lemme check.

This is the point. The namenode enters safe mode on startup to gather
metadata about the files and blocks, and then switches to normal mode. The
time spent in safe mode depends on the amount of data in your HDFS.
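For example, right after start-all.sh the admin command typically reports something like:

  hadoop dfsadmin -safemode get
  Safe mode is ON

and once the namenode has processed enough block reports it flips to "Safe mode is OFF", at which point the rmr will go through.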



-- 
Mafish@gmail.com

Re: rmr: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /op. Name node is in safe mode.

Posted by prasenjit mukherjee <pm...@quattrowireless.com>.
I run "hadoop fs -rmr .." immediately after start-all.sh. Does the
namenode always start in safe mode and then switch to normal mode after
some time? If that is the problem, then your suggestion of waiting might
work. Lemme check.
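I guess the batch file would then just become something like this (untested):

  start-all.sh
  hadoop dfsadmin -safemode wait
  hadoop fs -rmr /op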

-Thanks for the pointer.
Prasen

On Tue, Jan 19, 2010 at 10:47 AM, Amogh Vasekar <am...@yahoo-inc.com> wrote:
> Hi,
> When the NN is in safe mode, you get a read-only view of the Hadoop file system (the NN is still reconstructing its image of the FS).
> Use "hadoop dfsadmin -safemode get" to check whether the NN is in safe mode.
> "hadoop dfsadmin -safemode leave" forces it out of safe mode, or "hadoop dfsadmin -safemode wait" blocks until the NN leaves safe mode on its own.
>
> Amogh