Posted to hdfs-dev@hadoop.apache.org by "J.Andreina (Created) (JIRA)" <ji...@apache.org> on 2012/02/15 10:37:59 UTC
[jira] [Created] (HDFS-2951) Block reported as corrupt while running a multi-threaded client program that performs write and read operations on a set of files
Block reported as corrupt while running a multi-threaded client program that performs write and read operations on a set of files
------------------------------------------------------------------------------------------------------------------------------
Key: HDFS-2951
URL: https://issues.apache.org/jira/browse/HDFS-2951
Project: Hadoop HDFS
Issue Type: Bug
Components: data-node
Affects Versions: 0.23.0
Reporter: J.Andreina
Fix For: 0.24.0
Block incorrectly detected as bad in the following scenario:
1. A multi-threaded client program performing write and read operations on a set of files was running.
2. One block was detected as bad by the DN.
3. Multiple recoveries were triggered from the NN side (one roughly every hour).
4. After around 6 hours the recovery was successful (commitBlockSynchronization succeeded at the NN side).
5. At the DN side, around the same time commitBlockSynchronization happened, one more NN recovery call arrived; it subsequently failed because the block had already been recovered and its generation timestamp updated.
6. At the DN side, block verification failed and the block was reported as bad.
7. The fsck report indicates that the block is corrupt.
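The failure in steps 5-6 comes down to a generation-stamp comparison: once a successful recovery has bumped the replica's generation stamp, a late recovery request carrying the old stamp is rejected. The sketch below illustrates that check in isolation; the class and method names are hypothetical, not the actual DataNode code:

```java
// Hypothetical sketch (not real HDFS classes) of why a stale recovery
// call fails after commitBlockSynchronization has already updated the
// replica's generation stamp.
public class RecoveryStampSketch {
    static final class ReplicaInfo {
        final long blockId;
        long generationStamp;
        ReplicaInfo(long blockId, long generationStamp) {
            this.blockId = blockId;
            this.generationStamp = generationStamp;
        }
    }

    // A recovery request is accepted only if its recovery generation
    // stamp is strictly newer than the replica's current stamp.
    static boolean acceptRecovery(ReplicaInfo replica, long recoveryStamp) {
        return recoveryStamp > replica.generationStamp;
    }

    public static void main(String[] args) {
        ReplicaInfo replica = new ReplicaInfo(1001L, 5L);

        // First recovery succeeds and bumps the stamp.
        long recoveryStamp = 6L;
        if (acceptRecovery(replica, recoveryStamp)) {
            replica.generationStamp = recoveryStamp;
        }

        // A second recovery call carrying the same (now stale) stamp is
        // rejected, mirroring the "already recovered" failure above.
        boolean secondAccepted = acceptRecovery(replica, recoveryStamp);
        System.out.println("stamp=" + replica.generationStamp
            + " secondAccepted=" + secondAccepted);
    }
}
```

In the reported scenario, the DN then treats the replica touched by the failed recovery as invalid during block verification, which is why fsck ends up flagging the block as corrupt even though a recovery had succeeded.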
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira