Posted to hdfs-user@hadoop.apache.org by Manjunath Hegde <he...@gmail.com> on 2013/12/07 07:23:45 UTC

Datanode not starting on slaves, no error issue

Hi,

I am unable to start the datanode daemon on my cluster (Hadoop v2.2). It
starts fine on the master node but simply does not start on the data nodes.
No log files are created on the data nodes (the master-node daemon's logs
are created fine) and there is no error message. I have made sure the
things below are right. I also have appropriate aliases in the /etc/hosts
file for name resolution.

   1. I am able to ssh to all data nodes from the master without a
   password. I have also set HADOOP_SECURE_DN_USER to "hadoop" on all
   nodes; this is the user I am planning to start the datanodes as.
   2. I have added the data nodes to the slaves file, one per line.
   3. HADOOP_HOME (/home/hadoop/hadoop-2.2.0) and HADOOP_CONF_DIR
   ($HADOOP_HOME/etc/hadoop) are set on ALL the nodes.
   4. All required directories are present on the datanodes, users are
   created, and IPv6 is disabled.
   5. Added the necessary config file parameters; they are as below -

Below are log files for reference. They don't contain any errors. Note
"Network topology has 0 racks and 0 datanodes" below, suggesting the
namenode is not recognizing ALL the datanodes (maybe a safe-mode thing,
not sure). Any help is much appreciated.

core-site.xml

<configuration>
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>

<property>
   <name>io.file.buffer.size</name>
   <value>131072</value>
</property>
</configuration>
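One thing that may be worth checking here: with fs.default.name set to
hdfs://localhost:9000, any datanode that reads this config resolves
"localhost" to its own loopback address, so a remote slave would never
contact the master's namenode. The sketch below is hypothetical (the
host and port are just the values from this config); run it on a slave
to see what "localhost" actually resolves to and whether the namenode
port is reachable from there:

```python
import socket

def check_namenode(host, port, timeout=3.0):
    """Resolve the configured namenode host and test TCP reachability.

    Run on a slave: if host is "localhost", the resolved IP will be the
    slave's own loopback address, not the master -- which would explain
    a datanode that silently never registers with the namenode.
    """
    ip = socket.gethostbyname(host)
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            reachable = True
    except OSError:
        reachable = False
    return ip, reachable

if __name__ == "__main__":
    ip, ok = check_namenode("localhost", 9000)
    print("namenode resolves to %s, reachable: %s" % (ip, ok))
```

If the resolved IP is a loopback address on a slave, pointing
fs.default.name at the master's hostname or IP instead of localhost
(on every node) would be the first thing to try.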

hdfs-site.xml

<configuration>
<property>
   <name>dfs.replication</name>
   <value>2</value>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/home/hadoop/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/hadoop/datanode</value>
 </property>
</configuration>
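Since no logs appear on the slaves at all, it may also help to confirm
from a script that the dfs.datanode.data.dir path above
(/home/hadoop/datanode) exists and is writable by the hadoop user on
every slave. A minimal sketch, assuming only that the path comes from
this config:

```python
import os

def check_data_dir(path):
    """Return a list of problems found with a datanode data directory.

    An absent or unwritable dfs.datanode.data.dir is one routine reason
    a datanode fails to come up on a slave.
    """
    problems = []
    if not os.path.isdir(path):
        problems.append("missing directory: " + path)
    elif not os.access(path, os.W_OK):
        problems.append("not writable: " + path)
    return problems

if __name__ == "__main__":
    for problem in check_data_dir("/home/hadoop/datanode"):
        print(problem)
```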

yarn-site.xml

<configuration>
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
 <property>
   <name>yarn.nodemanager.log.dirs</name>
   <value>/home/yarn/logs</value>
 </property>
</configuration>

mapred-site.xml

<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>

  <property>
      <name>mapreduce.task.io.sort.mb</name>
      <value>1024</value>
   </property>

</configuration>

Namenode Log:

2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange:
STATE* Leaving safe mode after 1 secs
2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange:
STATE* Network topology has 0 racks and 0 datanodes
2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange:
STATE* UnderReplicatedBlocks has 0 blocks
2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 9000: starting
2013-12-06 23:54:46,975 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at:
localhost/192.168.56.1:9000
2013-12-06 23:54:46,975 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services
required for active state
2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange:
BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1,
storageID=DS-1268869381-192.168.56.1-50010-1386350725676,
infoPort=50075, ipcPort=50020,
storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0)
storage DS-1268869381-192.168.56.1-50010-1386350725676
2013-12-06 23:55:08,535 INFO org.apache.hadoop.net.NetworkTopology:
Adding a new node: /default-rack/192.168.56.1:50010
2013-12-06 23:55:08,717 INFO
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK*
processReport: Received first block report from 192.168.56.1:50010
after starting up or becoming active. Its block contents are no longer
considered stale
2013-12-06 23:55:08,718 INFO BlockStateChange: BLOCK* processReport:
from DatanodeRegistration(192.168.56.1,
storageID=DS-1268869381-192.168.56.1-50010-1386350725676,
infoPort=50075, ipcPort=50020,
storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0),
blocks: 0, processing time: 2 msecs

Datanode on masterserver log:

2013-12-06 23:55:08,469 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding block pool BP-1981795271-192.168.56.1-1386350567299
2013-12-06 23:55:08,470 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Scanning block pool BP-1981795271-192.168.56.1-1386350567299 on volume
/home/hadoop/datanode/current...
2013-12-06 23:55:08,479 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time taken to scan block pool BP-1981795271-192.168.56.1-1386350567299
on /home/hadoop/datanode/current: 8ms
2013-12-06 23:55:08,479 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to scan all replicas for block pool
BP-1981795271-192.168.56.1-1386350567299: 9ms
2013-12-06 23:55:08,479 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Adding replicas to map for block pool
BP-1981795271-192.168.56.1-1386350567299 on volume
/home/hadoop/datanode/current...
2013-12-06 23:55:08,479 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Time to add replicas to map for block pool
BP-1981795271-192.168.56.1-1386350567299 on volume
/home/hadoop/datanode/current: 0ms
2013-12-06 23:55:08,479 INFO
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl:
Total time to add all replicas to map: 0ms
2013-12-06 23:55:08,485 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool
BP-1981795271-192.168.56.1-1386350567299 (storage id
DS-1268869381-192.168.56.1-50010-1386350725676) service to
localhost/192.168.56.1:9000 beginning handshake with NN
2013-12-06 23:55:08,560 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool
BP-1981795271-192.168.56.1-1386350567299 (storage id
DS-1268869381-192.168.56.1-50010-1386350725676) service to
localhost/192.168.56.1:9000 successfully registered with NN
2013-12-06 23:55:08,560 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode
localhost/192.168.56.1:9000 using DELETEREPORT_INTERVAL of 300000 msec
 BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec;
heartBeatInterval=3000
2013-12-06 23:55:08,674 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool
BP-1981795271-192.168.56.1-1386350567299 (storage id
DS-1268869381-192.168.56.1-50010-1386350725676) service to
localhost/192.168.56.1:9000 trying to claim ACTIVE state with txid=5
2013-12-06 23:55:08,674 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE
Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage
id DS-1268869381-192.168.56.1-50010-1386350725676) service to
localhost/192.168.56.1:9000
2013-12-06 23:55:08,767 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
blocks took 2 msec to generate and 90 msecs for RPC and NN processing
2013-12-06 23:55:08,767 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report,
processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@38568c24
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: Computing
capacity for map BlockMap
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: 0.5% max
memory = 889 MB
2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: capacity
 = 2^19 = 524288 entries
2013-12-06 23:55:08,774 INFO
org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic
Block Verification Scanner initialized with interval 504 hours for
block pool BP-1981795271-192.168.56.1-1386350567299
2013-12-06 23:55:08,778 INFO
org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added
bpid=BP-1981795271-192.168.56.1-1386350567299 to blockPoolScannerMap,
new size=1
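One quick way to confirm how many datanodes the namenode actually sees
is to count the distinct IPs in its "registerDatanode" log lines; in
the namenode log above, only 192.168.56.1 (the master itself) ever
appears. A rough sketch of that check:

```python
import re

def registered_datanodes(log_text):
    """Return the set of distinct datanode IPs that registered with the
    namenode, matched from 'BLOCK* registerDatanode' log lines."""
    return set(re.findall(
        r"registerDatanode: from DatanodeRegistration\((\d+\.\d+\.\d+\.\d+)",
        log_text))

sample = ("2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange: "
          "BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1, "
          "storageID=DS-1268869381-192.168.56.1-50010-1386350725676, ...)")
print(registered_datanodes(sample))
```

On a healthy cluster the set would contain one entry per slave plus
any datanode running on the master.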

Re: Datanode not starting on slaves, no error issue

Posted by Manjunath Hegde <he...@gmail.com>.
Not sure if my earlier message went through, so I am sending it again.
Apologies for the repeat.



Re: Datanode not starting on slaves, no error issue

Posted by Manjunath Hegde <he...@gmail.com>.
Not sure if it reached first time, so sending it again. Apologies for
repeat.


On Sat, Dec 7, 2013 at 11:53 AM, Manjunath Hegde <he...@gmail.com> wrote:

> Hi,
>
> I am unable to start datanode deamon on my cluster(version v2.2). It
> starts fine in master node but simply do not start in data nodes. No log
> files are created on data nodes,they are created in master-node deamon and
> no error message. I have made sure below things are right. I also have
> appropriate aliases in /etc/hosts file for name resolution.
>
>    1. I am able to ssh all data nodes from master withought password. I
>    have also set HADOOP_SECURE_DN_USER user to "hadoop" this is the user i am
>    planning to start datanodes on, On all nodes.
>    2. I have added data nodes to slaves file, one per line.
>    3. HADOOP_HOME(/home/hadoop/hadoop-2.2.0),HADOOP_CONF_DIR($HADOOP_HOME/etc/hadoop)
>    set on ALL the nodes.
>    4. all required directories are present on datanodes,users
>    created,ipv6 disabled
>    5. Added necessary config file parameters, they are as below -
>
> Below are log files for reference. They dont have any errors. Note
> "Network topology has 0 racks and 0 datanodes" below suggesting it is not
> recognizing ALL datanodes(may be safe mode one, not sure). Any help is much
> appreciated.
>
> core-site.xml
>
> <configuration>
> <property>
>    <name>fs.default.name</name>
>    <value>hdfs://localhost:9000</value>
> </property>
>
> <property>
>    <name>io.file.buffer.size</name>
>    <value>131072</value>
> </property>
> </configuration>
>
> hdfs-site.xml
>
> <configuration>
> <property>
>    <name>dfs.replication</name>
>    <value>2</value>
>  </property>
>  <property>
>    <name>dfs.namenode.name.dir</name>
>    <value>file:/home/hadoop/namenode</value>
>  </property>
>  <property>
>    <name>dfs.datanode.data.dir</name>
>    <value>file:/home/hadoop/datanode</value>
>  </property>
> </configuration>
>
> yarn-site.xml
>
> <property>
>    <name>yarn.nodemanager.aux-services</name>
>    <value>mapreduce_shuffle</value>
> </property>
> <property>
>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
>  <property>
>    <name>yarn.nodemanager.log.dirs</name>
>    <value>/home/yarn/logs</value>
>  </property>
> </configuration>
>
> mapred-site.xml
>
> <configuration>
>    <property>
>       <name>mapreduce.framework.name</name>
>       <value>yarn</value>
>    </property>
>
>   <property>
>       <name>mapreduce.task.io.sort.mb</name>
>       <value>1024</value>
>    </property>
>
> </configuration>
>
> Namenode Log:
>
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/192.168.56.1:9000
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
> 2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0) storage DS-1268869381-192.168.56.1-50010-1386350725676
> 2013-12-06 23:55:08,535 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.56.1:50010
> 2013-12-06 23:55:08,717 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 192.168.56.1:50010 after starting up or becoming active. Its block contents are no longer considered stale
> 2013-12-06 23:55:08,718 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0), blocks: 0, processing time: 2 msecs
>
> Datanode on masterserver log:
>
> 2013-12-06 23:55:08,469 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,470 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1981795271-192.168.56.1-1386350567299 on /home/hadoop/datanode/current: 8ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1981795271-192.168.56.1-1386350567299: 9ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current: 0ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
> 2013-12-06 23:55:08,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 beginning handshake with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 successfully registered with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/192.168.56.1:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 trying to claim ACTIVE state with txid=5
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 2 msec to generate and 90 msecs for RPC and NN processing
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@38568c24
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 889 MB
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
> 2013-12-06 23:55:08,774 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,778 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1981795271-192.168.56.1-1386350567299 to blockPoolScannerMap, new size=1
>
>

Re: Datanode not starting on slaves, no error issue

Posted by Manjunath Hegde <he...@gmail.com>.
Not sure if it reached first time, so sending it again. Apologies for
repeat.


On Sat, Dec 7, 2013 at 11:53 AM, Manjunath Hegde <he...@gmail.com> wrote:

> Hi,
>
> I am unable to start datanode deamon on my cluster(version v2.2). It
> starts fine in master node but simply do not start in data nodes. No log
> files are created on data nodes,they are created in master-node deamon and
> no error message. I have made sure below things are right. I also have
> appropriate aliases in /etc/hosts file for name resolution.
>
>    1. I am able to ssh all data nodes from master withought password. I
>    have also set HADOOP_SECURE_DN_USER user to "hadoop" this is the user i am
>    planning to start datanodes on, On all nodes.
>    2. I have added data nodes to slaves file, one per line.
>    3. HADOOP_HOME(/home/hadoop/hadoop-2.2.0),HADOOP_CONF_DIR($HADOOP_HOME/etc/hadoop)
>    set on ALL the nodes.
>    4. all required directories are present on datanodes,users
>    created,ipv6 disabled
>    5. Added necessary config file parameters, they are as below -
>
> Below are log files for reference. They dont have any errors. Note
> "Network topology has 0 racks and 0 datanodes" below suggesting it is not
> recognizing ALL datanodes(may be safe mode one, not sure). Any help is much
> appreciated.
>
> core-site.xml
>
> <configuration>
> <property>
>    <name>fs.default.name</name>
>    <value>hdfs://localhost:9000</value>
> </property>
>
> <property>
>    <name>io.file.buffer.size</name>
>    <value>131072</value>
> </property>
> </configuration>
>
> hdfs-site.xml
>
> <configuration>
> <property>
>    <name>dfs.replication</name>
>    <value>2</value>
>  </property>
>  <property>
>    <name>dfs.namenode.name.dir</name>
>    <value>file:/home/hadoop/namenode</value>
>  </property>
>  <property>
>    <name>dfs.datanode.data.dir</name>
>    <value>file:/home/hadoop/datanode</value>
>  </property>
> </configuration>
>
> yarn-site.xml
>
> <property>
>    <name>yarn.nodemanager.aux-services</name>
>    <value>mapreduce_shuffle</value>
> </property>
> <property>
>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
>  <property>
>    <name>yarn.nodemanager.log.dirs</name>
>    <value>/home/yarn/logs</value>
>  </property>
> </configuration>
>
> mapred-site.xml
>
> <configuration>
>    <property>
>       <name>mapreduce.framework.name</name>
>       <value>yarn</value>
>    </property>
>
>   <property>
>       <name>mapreduce.task.io.sort.mb</name>
>       <value>1024</value>
>    </property>
>
> </configuration>
>
> Namenode Log:
>
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/192.168.56.1:9000
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
> 2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0) storage DS-1268869381-192.168.56.1-50010-1386350725676
> 2013-12-06 23:55:08,535 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.56.1:50010
> 2013-12-06 23:55:08,717 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 192.168.56.1:50010 after starting up or becoming active. Its block contents are no longer considered stale
> 2013-12-06 23:55:08,718 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0), blocks: 0, processing time: 2 msecs
>
> Datanode on masterserver log:
>
> 2013-12-06 23:55:08,469 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,470 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1981795271-192.168.56.1-1386350567299 on /home/hadoop/datanode/current: 8ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1981795271-192.168.56.1-1386350567299: 9ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current: 0ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
> 2013-12-06 23:55:08,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 beginning handshake with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 successfully registered with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/192.168.56.1:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 trying to claim ACTIVE state with txid=5
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 2 msec to generate and 90 msecs for RPC and NN processing
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@38568c24
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 889 MB
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
> 2013-12-06 23:55:08,774 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,778 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1981795271-192.168.56.1-1386350567299 to blockPoolScannerMap, new size=1
>
>

Re: Datanode not starting on slaves, no error issue

Posted by Manjunath Hegde <he...@gmail.com>.
Not sure if it reached the first time, so I am sending it again. Apologies
for the repeat.


On Sat, Dec 7, 2013 at 11:53 AM, Manjunath Hegde <he...@gmail.com> wrote:

> Hi,
>
> I am unable to start the datanode daemon on my cluster (version 2.2). It
> starts fine on the master node but simply does not start on the data nodes.
> No log files are created on the data nodes; they are created only for the
> master-node daemon, and there is no error message. I have made sure the
> items below are right. I also have appropriate aliases in the /etc/hosts
> file for name resolution.
>
>    1. I am able to ssh to all the data nodes from the master without a
>    password. I have also set HADOOP_SECURE_DN_USER to "hadoop"; this is the
>    user I am planning to start the datanodes as, on all nodes.
>    2. I have added the data nodes to the slaves file, one per line.
>    3. HADOOP_HOME (/home/hadoop/hadoop-2.2.0) and HADOOP_CONF_DIR
>    ($HADOOP_HOME/etc/hadoop) are set on ALL the nodes.
>    4. All required directories are present on the datanodes, the users are
>    created, and IPv6 is disabled.
>    5. I added the necessary config file parameters; they are as below -
>
> Below are the log files for reference. They don't contain any errors. Note
> "Network topology has 0 racks and 0 datanodes" below, suggesting it is not
> recognizing ALL the datanodes (maybe a safe-mode thing, not sure). Any help
> is much appreciated.
>
> core-site.xml
>
> <configuration>
> <property>
>    <name>fs.default.name</name>
>    <value>hdfs://localhost:9000</value>
> </property>
>
> <property>
>    <name>io.file.buffer.size</name>
>    <value>131072</value>
> </property>
> </configuration>
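An observation on the core-site.xml above, not something stated in the thread: with fs.default.name set to hdfs://localhost:9000, a datanode running on a slave machine resolves "localhost" to itself and so never reaches the namenode on the master. Multi-node clusters normally put the master's resolvable hostname here; a sketch, assuming the master is aliased as "master" in /etc/hosts (a hypothetical alias, not the poster's actual hostname):

```xml
<!-- core-site.xml sketch: "master" is a hypothetical /etc/hosts alias
     for the namenode host. All nodes must resolve it to the master's IP. -->
<property>
   <name>fs.default.name</name>
   <value>hdfs://master:9000</value>
</property>
```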
>
> hdfs-site.xml
>
> <configuration>
> <property>
>    <name>dfs.replication</name>
>    <value>2</value>
>  </property>
>  <property>
>    <name>dfs.namenode.name.dir</name>
>    <value>file:/home/hadoop/namenode</value>
>  </property>
>  <property>
>    <name>dfs.datanode.data.dir</name>
>    <value>file:/home/hadoop/datanode</value>
>  </property>
> </configuration>
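The dfs.datanode.data.dir path above must exist on every datanode and be writable by the user that runs the daemon, or the datanode exits at startup. A minimal, self-contained sketch of that check, using a stand-in path under /tmp rather than the real /home/hadoop/datanode:

```shell
# Stand-in for /home/hadoop/datanode; on a real node you would check the
# actual path configured in hdfs-site.xml.
DN_DIR=/tmp/datanode-dir-check

# Create the directory the way the DataNode expects to find it (mode 0755,
# owned by the daemon user) and verify it is writable.
mkdir -p "$DN_DIR"
chmod 755 "$DN_DIR"

if [ -d "$DN_DIR" ] && [ -w "$DN_DIR" ]; then
  echo "OK: $DN_DIR exists and is writable"
else
  echo "FAIL: $DN_DIR missing or not writable"
fi
```

On a real slave, run the same check as the "hadoop" user against the configured path; a FAIL here is one of the few datanode problems that can abort startup before any log file is written.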
>
> yarn-site.xml
>
> <configuration>
> <property>
>    <name>yarn.nodemanager.aux-services</name>
>    <value>mapreduce_shuffle</value>
> </property>
> <property>
>    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
> </property>
>  <property>
>    <name>yarn.nodemanager.log.dirs</name>
>    <value>/home/yarn/logs</value>
>  </property>
> </configuration>
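A small spelling point worth double-checking in the yarn-site.xml above (an editorial note, not from the thread): in Hadoop 2.x the NodeManager log directory property in yarn-default.xml is written with hyphens, and a misspelled property name is silently ignored rather than reported as an error:

```xml
<!-- Property name as it appears in yarn-default.xml for Hadoop 2.x -->
<property>
   <name>yarn.nodemanager.log-dirs</name>
   <value>/home/yarn/logs</value>
</property>
```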
>
> mapred-site.xml
>
> <configuration>
>    <property>
>       <name>mapreduce.framework.name</name>
>       <value>yarn</value>
>    </property>
>
>   <property>
>       <name>mapreduce.task.io.sort.mb</name>
>       <value>1024</value>
>    </property>
>
> </configuration>
>
> Namenode Log:
>
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 1 secs
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
> 2013-12-06 23:54:46,940 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2013-12-06 23:54:46,972 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: localhost/192.168.56.1:9000
> 2013-12-06 23:54:46,975 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
> 2013-12-06 23:55:08,530 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0) storage DS-1268869381-192.168.56.1-50010-1386350725676
> 2013-12-06 23:55:08,535 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.56.1:50010
> 2013-12-06 23:55:08,717 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: BLOCK* processReport: Received first block report from 192.168.56.1:50010 after starting up or becoming active. Its block contents are no longer considered stale
> 2013-12-06 23:55:08,718 INFO BlockStateChange: BLOCK* processReport: from DatanodeRegistration(192.168.56.1, storageID=DS-1268869381-192.168.56.1-50010-1386350725676, infoPort=50075, ipcPort=50020, storageInfo=lv=-47;cid=CID-d6194959-5a13-4d8b-8428-25134e8fb746;nsid=2144581313;c=0), blocks: 0, processing time: 2 msecs
>
> Datanode on masterserver log:
>
> 2013-12-06 23:55:08,469 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,470 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1981795271-192.168.56.1-1386350567299 on /home/hadoop/datanode/current: 8ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1981795271-192.168.56.1-1386350567299: 9ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current...
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1981795271-192.168.56.1-1386350567299 on volume /home/hadoop/datanode/current: 0ms
> 2013-12-06 23:55:08,479 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 0ms
> 2013-12-06 23:55:08,485 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 beginning handshake with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 successfully registered with NN
> 2013-12-06 23:55:08,560 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode localhost/192.168.56.1:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000 trying to claim ACTIVE state with txid=5
> 2013-12-06 23:55:08,674 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1981795271-192.168.56.1-1386350567299 (storage id DS-1268869381-192.168.56.1-50010-1386350725676) service to localhost/192.168.56.1:9000
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks took 2 msec to generate and 90 msecs for RPC and NN processing
> 2013-12-06 23:55:08,767 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@38568c24
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: 0.5% max memory = 889 MB
> 2013-12-06 23:55:08,773 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
> 2013-12-06 23:55:08,774 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1981795271-192.168.56.1-1386350567299
> 2013-12-06 23:55:08,778 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1981795271-192.168.56.1-1386350567299 to blockPoolScannerMap, new size=1
>
>
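Since the slave datanodes leave no logs at all, a useful next step (a general troubleshooting sketch, not something from the thread) is to start one datanode by hand on a slave so any failure prints straight to the console. The loop below only builds and prints the commands rather than running them, so it stays self-contained; "slave1" and "slave2" are hypothetical hostnames standing in for the real entries in $HADOOP_CONF_DIR/slaves:

```shell
# Hypothetical slave hostnames; in practice read them from
# $HADOOP_CONF_DIR/slaves.
SLAVES="slave1 slave2"

# hadoop-daemon.sh is the per-node start script shipped in
# $HADOOP_HOME/sbin with Hadoop 2.2. The $ is escaped so it expands on the
# remote host, not locally.
for h in $SLAVES; do
  echo "ssh hadoop@$h \"\$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode\""
done
```

Running the printed command on a slave surfaces errors (bad JAVA_HOME, missing storage directory, unreachable namenode) directly on the terminal, even when no log file is ever created.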