Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2008/03/14 00:37:24 UTC
[jira] Created: (HADOOP-3015) DataNode should clean up temporary files when writeBlock fails.
DataNode should clean up temporary files when writeBlock fails.
---------------------------------------------------------------
Key: HADOOP-3015
URL: https://issues.apache.org/jira/browse/HADOOP-3015
Project: Hadoop Core
Issue Type: Bug
Components: dfs
Affects Versions: 0.15.3
Reporter: Raghu Angadi
When a datanode starts receiving a block but fails to complete the transfer, it leaves the temporary block files in its temp directory. Because of these leftover files, the same block cannot be written to this node for the next hour.
The DataNode should delete these files so that the next attempt can proceed.
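The fix amounts to deleting the partially written block file when the transfer fails, before the error propagates. A minimal sketch of the idea, assuming a simplified receive path (the class, method, and file names here are illustrative, not the actual DataNode code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockReceiver {
    /**
     * Writes an incoming block to a temporary file. If the transfer
     * fails partway through, the temp file is deleted so that a later
     * attempt to write the same block is not blocked by a stale file.
     */
    public static void receiveBlock(Path tmpFile, InputStream in) throws IOException {
        try {
            // May throw mid-transfer, leaving a partial file behind.
            Files.copy(in, tmpFile);
        } catch (IOException e) {
            // Clean up the partial block before rethrowing, so a
            // retry of the same block can start with a clean slate.
            Files.deleteIfExists(tmpFile);
            throw e;
        }
    }
}
```

With this cleanup in place, a failed writeBlock leaves no residue in the temp directory, and a retry of the same block can begin immediately instead of waiting out the one-hour window.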
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.