Posted to mapreduce-user@hadoop.apache.org by Vincent Emonet <vi...@gmail.com> on 2014/08/26 17:05:51 UTC

Hadoop on Safe Mode because Resources are low on NameNode

Hello,

We have an 11-node Hadoop cluster installed from the Hortonworks RPM doc:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_installing_manually_book/content/rpm-chap1.html

The cluster was working fine until it went into Safe Mode during the
execution of a job, with this message on the NameNode interface:



*Safe mode is ON. Resources are low on NN. Please add or free up more
resources then turn off safe mode manually. NOTE: If you turn off safe mode
before adding resources, the NN will immediately return to safe mode. Use
"hdfs dfsadmin -safemode leave" to turn safe mode off.*
The error displayed in the job log is:

2014-08-22 08:51:35,446 WARN namenode.NameNodeResourceChecker
(NameNodeResourceChecker.java:isResourceAvailable(89)) - Space available on
volume 'null' is 100720640, which is below the configured reserved amount
104857600
2014-08-22 08:51:35,446 WARN namenode.FSNamesystem
(FSNamesystem.java:run(4042)) - NameNode low on available disk space.
Already in safe mode.
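
As a quick sanity check (assuming the defaults, where the resource checker
watches the NameNode's local name/edits directories), the monitored
directories and their free space can be listed like this:

  # print the local metadata directories configured for the NameNode
  hdfs getconf -confKey dfs.namenode.name.dir
  hdfs getconf -confKey dfs.namenode.edits.dir

  # then check free space on the volume backing each directory printed above
  # (/hadoop/hdfs/namenode is only an example path; use the output above)
  df -h /hadoop/hdfs/namenode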

On each node we have 5 HDDs used for Hadoop, and we checked that the 5 HDDs
on the NameNode are all nearly full (between 95 and 100%), while HDFS as a
whole still has 50% of its capacity available: on the other nodes the 5
HDDs are at 30-40%.

So I think this is the cause of the error.

On the NameNode we had some non-HDFS data on one HDD, so I deleted it to
free up 50% of that HDD (the 4 others are still between 95 and 100%), but
this didn't resolve the problem.
I have also followed the advice found here:
https://issues.apache.org/jira/browse/HDFS-4425
and added the following property to the hdfs-site.xml of the NameNode
(multiplying the default value by 2):
  <property>
    <name>dfs.namenode.resource.du.reserved</name>
    <value>209715200</value>
  </property>
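
If I understand the check correctly (from the warning above), the NameNode
requires at least dfs.namenode.resource.du.reserved bytes of free space on
each volume it monitors, so raising the value only raises the bar it has to
clear. With the numbers from our log:

  available on the monitored volume : 100720640 bytes (~96 MB)
  default reserved threshold        : 104857600 bytes (100 MB)  -> 96 < 100, safe mode
  doubled reserved threshold        : 209715200 bytes (200 MB)  -> 96 < 200, still safe mode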

It is still impossible to get out of safe mode, and as long as we are in
safe mode we can't delete anything in HDFS.


Does anyone have a tip about this issue?


Thanks,

Vincent.

Re: Hadoop on Safe Mode because Resources are low on NameNode

Posted by unmesha sreeveni <un...@gmail.com>.
You can leave safe mode; see "Name node is in safe mode: how to leave":
http://www.unmeshasreeveni.blogspot.in/2014/04/name-node-is-in-safe-mode-how-to-leave.html



On Wed, Aug 27, 2014 at 9:38 AM, Stanley Shi <ss...@pivotal.io> wrote:

> You can force the namenode to get out of safe mode: hadoop dfsadmin
> -safemode leave


-- 
*Thanks & Regards *


*Unmesha Sreeveni U.B*
*Hadoop, Bigdata Developer*
*Center for Cyber Security | Amrita Vishwa Vidyapeetham*
http://www.unmeshasreeveni.blogspot.in/

Re: Hadoop on Safe Mode because Resources are low on NameNode

Posted by Stanley Shi <ss...@pivotal.io>.
You can force the namenode to get out of safe mode: hadoop dfsadmin
-safemode leave
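
For example (assuming a standard HDP 2.x install, where the hdfs form of
the command is the current, non-deprecated one):

  hdfs dfsadmin -safemode get    # check the current safe mode state
  hdfs dfsadmin -safemode leave  # force the NameNode out of safe mode

Note the warning in the NameNode message though: unless space is freed on
the NameNode's local disks first, it will drop straight back into safe mode.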




-- 
Regards,
*Stanley Shi,*
