Posted to common-user@hadoop.apache.org by EdwardKing <zh...@neusoft.com> on 2014/02/27 02:38:44 UTC

How to set heartbeat ?

I have two machines, one is the master and the other is a slave. I want to know how to configure the heartbeat in Hadoop 2.2.0. Which file should be modified?
Thanks.

Re: How to set heartbeat ?

Posted by sudhakara st <su...@gmail.com>.
dfs.namenode.stale.datanode.interval is the default time interval for marking a
datanode as "stale": if the namenode has not received a heartbeat message from a
datanode for more than this interval, the datanode is marked and treated as "stale".
The stale interval cannot be too small, since that would cause the stale state to
change too frequently, so a minimum stale interval is enforced (by default 3 times
the heartbeat interval) and the configured stale interval cannot be less than that
minimum. A stale datanode is avoided during lease/block recovery. It can also be
conditionally avoided for reads (see dfs.namenode.avoid.read.stale.datanode) and for
writes (see dfs.namenode.avoid.write.stale.datanode). See
http://hadoop.apache.org/docs/r2.2.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
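
For example, a minimal hdfs-site.xml sketch for stale detection could look like the
following (the 30000 ms value is just the documented default, and both "avoid" flags
are optional; none of these values come from this thread):

<property>
 <name>dfs.namenode.stale.datanode.interval</name>
 <!-- milliseconds; should not be set below 3 x dfs.heartbeat.interval -->
 <value>30000</value>
</property>
<property>
 <name>dfs.namenode.avoid.read.stale.datanode</name>
 <value>true</value>
</property>
<property>
 <name>dfs.namenode.avoid.write.stale.datanode</name>
 <value>true</value>
</property>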


On Thu, Feb 27, 2014 at 10:42 AM, EdwardKing <zh...@neusoft.com> wrote:

>  I set hdfs-site.xml as follows:
>
> <configuration>
> <property>
>  <name>dfs.name.dir</name>
>  <value>file:/home/software/name</value>
>  <description> </description>
> </property>
> <property>
>  <name>dfs.namenode.secondary.http-address</name>
>  <value>master:9001</value>
> </property>
> <property>
>  <name>dfs.data.dir</name>
>  <value>file:/home/software/data</value>
> </property>
> <property>
>  <name>dfs.http.address</name>
>  <value>master:9002</value>
> </property>
> <property>
>  <name>dfs.replication</name>
>  <value>2</value>
> </property>
> <property>
>  <name>dfs.datanode.du.reserved</name>
>  <value>1073741824</value>
> </property>
> <property>
>  <name>dfs.block.size</name>
>  <value>4194304</value>
> </property>
> <property>
>  <name>dfs.namenode.logging.level</name>
>  <value>all</value>
> </property>
> <property>
> <name>dfs.permissions</name>
> <value>false</value>
> </property>
> <property>
>  <name>dfs.namenode.stale.datanode.interval</name>
>  <value>3</value>
> </property>
> </configuration>
>
> Then I check the status, as follows:
> [hadoop@master sbin]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> 14/02/26 21:05:11 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Configured Capacity: 34678300672 (32.30 GB)
> Present Capacity: 27971788800 (26.05 GB)
> DFS Remaining: 27657437184 (25.76 GB)
> DFS Used: 314351616 (299.79 MB)
> DFS Used%: 1.12%
> Under replicated blocks: 12
> Blocks with corrupt replicas: 2
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 2 (2 total, 0 dead)
>
> Live datanodes:
> Name: 172.11.12.7:50010 (node1)
> Hostname: node1
> Decommission Status : Normal
> Configured Capacity: 17339150336 (16.15 GB)
> DFS Used: 157192192 (149.91 MB)
> Non DFS Used: 2949230592 (2.75 GB)
> DFS Remaining: 14232727552 (13.26 GB)
> DFS Used%: 0.91%
> DFS Remaining%: 82.08%
> Last contact: Wed Feb 26 21:02:22 PST 2014
>
>
> Name: 172.11.12.6:50010 (master)
> Hostname: master
> Decommission Status : Normal
> Configured Capacity: 17339150336 (16.15 GB)
> DFS Used: 157159424 (149.88 MB)
> Non DFS Used: 3757281280 (3.50 GB)
> DFS Remaining: 13424709632 (12.50 GB)
> DFS Used%: 0.91%
> DFS Remaining%: 77.42%
> Last contact: Wed Feb 26 21:05:13 PST 2014
>
> Then I kill the slave DataNode, as follows:
>
> [hadoop@node1 ~]$ jps
> 4529 DataNode
> 4805 Jps
> [hadoop@node1 ~]$ kill -9 4529
> [hadoop@node1 ~]$ jps
> 4817 Jps
>
> After a very long time, I check the status again:
> [hadoop@master hadoop]$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> 14/02/26 21:08:20 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Configured Capacity: 34678300672 (32.30 GB)
> Present Capacity: 27971780608 (26.05 GB)
> DFS Remaining: 27657428992 (25.76 GB)
> DFS Used: 314351616 (299.79 MB)
> DFS Used%: 1.12%
> Under replicated blocks: 12
> Blocks with corrupt replicas: 2
> Missing blocks: 0
>
> -------------------------------------------------
> Datanodes available: 2 (2 total, 0 dead)
>
> Live datanodes:
> Name: 172.11.12.7:50010 (node1)
> Hostname: node1
> Decommission Status : Normal
> Configured Capacity: 17339150336 (16.15 GB)
> DFS Used: 157192192 (149.91 MB)
> Non DFS Used: 2949230592 (2.75 GB)
> DFS Remaining: 14232727552 (13.26 GB)
> DFS Used%: 0.91%
> DFS Remaining%: 82.08%
> Last contact: Wed Feb 26 21:02:22 PST 2014
>
>
> Name: 172.11.12.6:50010 (master)
> Hostname: master
> Decommission Status : Normal
> Configured Capacity: 17339150336 (16.15 GB)
> DFS Used: 157159424 (149.88 MB)
> Non DFS Used: 3757289472 (3.50 GB)
> DFS Remaining: 13424701440 (12.50 GB)
> DFS Used%: 0.91%
> DFS Remaining%: 77.42%
> Last contact: Wed Feb 26 21:08:22 PST 2014
>
>
> Why isn't the slave node reported as dead?  Where did I go wrong? Thanks
>
>
> ----- Original Message -----
> From: sudhakara st <su...@gmail.com>
> To: user@hadoop.apache.org
> Sent: Thursday, February 27, 2014 11:36 AM
> Subject: Re: How to set heartbeat ?
>
>  In hdfs-site.xml, the 'dfs.heartbeat.interval' parameter determines the
> datanode heartbeat interval in seconds.
> Also have a look at the 'dfs.namenode.stale.datanode.interval' parameter.
>
>
> On Thu, Feb 27, 2014 at 7:08 AM, EdwardKing <zh...@neusoft.com> wrote:
>
>>  I have two machines, one is the master and the other is a slave. I want to know
>> how to configure the heartbeat in Hadoop 2.2.0. Which file should be modified?
>> Thanks.
>>
>>
>>
>
>
>
> --
>
> Regards,
> ...sudhakara
>
>
>
>



-- 

Regards,
...sudhakara


Re: How to set heartbeat ?

Posted by EdwardKing <zh...@neusoft.com>.
I set hdfs-site.xml as follows:

<configuration>
<property>
 <name>dfs.name.dir</name>
 <value>file:/home/software/name</value>
 <description> </description>
</property>
<property>
 <name>dfs.namenode.secondary.http-address</name>
 <value>master:9001</value>
</property>
<property>
 <name>dfs.data.dir</name>
 <value>file:/home/software/data</value>
</property>
<property>
 <name>dfs.http.address</name>
 <value>master:9002</value>
</property>
<property>
 <name>dfs.replication</name>
 <value>2</value>
</property>
<property>
 <name>dfs.datanode.du.reserved</name>
 <value>1073741824</value>
</property>
<property>
 <name>dfs.block.size</name>
 <value>4194304</value>
</property>
<property>
 <name>dfs.namenode.logging.level</name>
 <value>all</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
 <name>dfs.namenode.stale.datanode.interval</name>
 <value>3</value>
</property>
</configuration>

Then I check the status, as follows:
[hadoop@master sbin]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/02/26 21:05:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 34678300672 (32.30 GB)
Present Capacity: 27971788800 (26.05 GB)
DFS Remaining: 27657437184 (25.76 GB)
DFS Used: 314351616 (299.79 MB)
DFS Used%: 1.12%
Under replicated blocks: 12
Blocks with corrupt replicas: 2
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 172.11.12.7:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 17339150336 (16.15 GB)
DFS Used: 157192192 (149.91 MB)
Non DFS Used: 2949230592 (2.75 GB)
DFS Remaining: 14232727552 (13.26 GB)
DFS Used%: 0.91%
DFS Remaining%: 82.08%
Last contact: Wed Feb 26 21:02:22 PST 2014


Name: 172.11.12.6:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 17339150336 (16.15 GB)
DFS Used: 157159424 (149.88 MB)
Non DFS Used: 3757281280 (3.50 GB)
DFS Remaining: 13424709632 (12.50 GB)
DFS Used%: 0.91%
DFS Remaining%: 77.42%
Last contact: Wed Feb 26 21:05:13 PST 2014

Then I kill the slave DataNode, as follows:

[hadoop@node1 ~]$ jps
4529 DataNode
4805 Jps
[hadoop@node1 ~]$ kill -9 4529
[hadoop@node1 ~]$ jps
4817 Jps

After a very long time, I check the status again:
[hadoop@master hadoop]$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

14/02/26 21:08:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 34678300672 (32.30 GB)
Present Capacity: 27971780608 (26.05 GB)
DFS Remaining: 27657428992 (25.76 GB)
DFS Used: 314351616 (299.79 MB)
DFS Used%: 1.12%
Under replicated blocks: 12
Blocks with corrupt replicas: 2
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 172.11.12.7:50010 (node1)
Hostname: node1
Decommission Status : Normal
Configured Capacity: 17339150336 (16.15 GB)
DFS Used: 157192192 (149.91 MB)
Non DFS Used: 2949230592 (2.75 GB)
DFS Remaining: 14232727552 (13.26 GB)
DFS Used%: 0.91%
DFS Remaining%: 82.08%
Last contact: Wed Feb 26 21:02:22 PST 2014


Name: 172.11.12.6:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 17339150336 (16.15 GB)
DFS Used: 157159424 (149.88 MB)
Non DFS Used: 3757289472 (3.50 GB)
DFS Remaining: 13424701440 (12.50 GB)
DFS Used%: 0.91%
DFS Remaining%: 77.42%
Last contact: Wed Feb 26 21:08:22 PST 2014


Why isn't the slave node reported as dead?  Where did I go wrong? Thanks

  ----- Original Message ----- 
  From: sudhakara st 
  To: user@hadoop.apache.org 
  Sent: Thursday, February 27, 2014 11:36 AM
  Subject: Re: How to set heartbeat ?


  In hdfs-site.xml, the 'dfs.heartbeat.interval' parameter determines the datanode heartbeat interval in seconds.

  Also have a look at the 'dfs.namenode.stale.datanode.interval' parameter.



  On Thu, Feb 27, 2014 at 7:08 AM, EdwardKing <zh...@neusoft.com> wrote:

    I have two machines, one is the master and the other is a slave. I want to know how to configure the heartbeat in Hadoop 2.2.0. Which file should be modified?
    Thanks.




  -- 

         
  Regards,
  ...sudhakara
                         


Re: How to set heartbeat ?

Posted by sudhakara st <su...@gmail.com>.
In hdfs-site.xml, the 'dfs.heartbeat.interval' parameter determines the datanode
heartbeat interval in seconds.
Also have a look at the 'dfs.namenode.stale.datanode.interval' parameter.
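
For example, a minimal hdfs-site.xml sketch for these two settings (the values shown
are only the documented defaults, 3 seconds and 30000 milliseconds, not values
recommended in this thread):

<property>
 <name>dfs.heartbeat.interval</name>
 <!-- datanode heartbeat interval, in seconds -->
 <value>3</value>
</property>
<property>
 <name>dfs.namenode.stale.datanode.interval</name>
 <!-- time without a heartbeat after which a datanode is treated as stale, in milliseconds -->
 <value>30000</value>
</property>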


On Thu, Feb 27, 2014 at 7:08 AM, EdwardKing <zh...@neusoft.com> wrote:

>  I have two machines, one is the master and the other is a slave. I want to know
> how to configure the heartbeat in Hadoop 2.2.0. Which file should be modified?
> Thanks.
>
>
>



-- 

Regards,
...sudhakara
