Posted to hdfs-dev@hadoop.apache.org by "ikweesung (JIRA)" <ji...@apache.org> on 2014/01/22 08:53:22 UTC
[jira] [Created] (HDFS-5809) BlockPoolSliceScanner makes datanode drop into infinite loop
ikweesung created HDFS-5809:
-------------------------------
Summary: BlockPoolSliceScanner makes datanode drop into infinite loop
Key: HDFS-5809
URL: https://issues.apache.org/jira/browse/HDFS-5809
Project: Hadoop HDFS
Issue Type: Bug
Components: datanode
Affects Versions: 2.0.0-alpha
Environment: jdk1.6, centos6.4
Reporter: ikweesung
Priority: Critical
Hello, everyone.
When the Hadoop cluster starts, BlockPoolSliceScanner starts scanning the blocks in my cluster.
Then one datanode randomly drops into an infinite loop, as the log shows, and eventually all datanodes drop into the same infinite loop.
Each datanode fails verification on just one block.
When I check the failing block with: hadoop fsck / -files -blocks | grep blk_1223474551535936089_4702249, no HDFS file contains the block.
It seems that the while loop in BlockPoolSliceScanner's scan method drops into an infinite loop.
BlockPoolSliceScanner, line 650:

    while (datanode.shouldRun
        && !datanode.blockScanner.blockScannerThread.isInterrupted()
        && datanode.isBPServiceAlive(blockPoolId)) { ...
The log message is finally printed in the verifyBlock method (BlockPoolSliceScanner:453).
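To illustrate the suspected failure mode, here is a minimal, self-contained Java sketch (all class and method names are invented for illustration and are not the actual Hadoop code): if a block that no longer belongs to any HDFS file always fails verification, and the scanner re-queues it instead of discarding it, the scan loop can never make forward progress.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a scan loop spinning on one permanently-failing
// block. An iteration cap keeps the sketch terminating; the real loop in
// BlockPoolSliceScanner.scan has no such cap, hence the infinite loop.
public class ScanLoopSketch {
    // Pretend this one orphaned block always fails verification,
    // e.g. because no HDFS file contains it any more.
    static boolean verify(long blockId) {
        return blockId != 1223474551535936089L;
    }

    public static void main(String[] args) {
        Deque<Long> queue = new ArrayDeque<>();
        queue.add(1223474551535936089L); // the block from the report

        int iterations = 0;
        while (!queue.isEmpty() && iterations < 5) {
            long id = queue.poll();
            if (!verify(id)) {
                // Re-queued on failure: the queue never drains.
                queue.add(id);
            }
            iterations++;
        }
        System.out.println("still queued after " + iterations
            + " iterations: " + !queue.isEmpty());
    }
}
```

Running the sketch prints "still queued after 5 iterations: true", showing that the queue never empties while the single block keeps failing.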
Please excuse my poor English.
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)