Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2009/05/08 21:22:45 UTC
[jira] Updated: (HADOOP-5796) DFS Write pipeline does not detect defective datanode correctly in some cases (HADOOP-3339)
[ https://issues.apache.org/jira/browse/HADOOP-5796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Raghu Angadi updated HADOOP-5796:
---------------------------------
Attachment: toreproduce-5796.patch
The attached patch {{toreproduce-5796.patch}} helps illustrate the problem. How to reproduce:
Create an HDFS cluster with 2 datanodes. For one of them, set "dfs.datanode.address" to "0.0.0.0:50013". Now try to write a 5MB file. You will notice that whenever the datanode on port 50013 is the last one in the pipeline, the write is aborted.
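For example, a minimal snippet for that datanode's configuration (hadoop-site.xml on 0.19, hdfs-site.xml on 0.20); the attached patch presumably keys off this port to decide which datanode should misbehave:
{noformat}
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50013</value>
</property>
{noformat}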
The hunk from the patch for HADOOP-1700 that reverts the earlier fix:
{noformat}
@@ -2214,10 +2218,15 @@
/* The receiver thread cancelled this thread.
* We could also check any other status updates from the
* receiver thread (e.g. if it is ok to write to replyOut).
+ * It is prudent to not send any more status back to the client
+ * because this datanode has a problem. The upstream datanode
+ * will detect a timeout on heartbeats and will declare that
+ * this datanode is bad, and rightly so.
*/
LOG.info("PacketResponder " + block + " " + numTargets +
" : Thread is interrupted.");
running = false;
+ continue;
}
if (!didRead) {
{noformat}
I don't think the added justification is always correct: when the interrupt is caused by a failed write to the downstream mirror, this datanode itself is healthy, and staying silent makes the upstream node (or the client) blame this datanode rather than the defective one downstream.
Suggested fix:
============
- The loop should 'continue' if the write to the local disk fails.
- It should not 'continue' if the write to the downstream mirror fails (the case this patch exercises); see the sketch below.
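A rough sketch of that distinction in the responder loop. This is illustrative only: {{diskWriteFailed}} and {{mirrorWriteFailed}} are hypothetical flags standing in for however BlockReceiver records why the receiver thread interrupted the responder.
{noformat}
while (running && !lastPacketInBlock) {
  try {
    // ... read ack from the downstream datanode, forward it upstream ...
  } catch (InterruptedException ie) {
    if (diskWriteFailed) {
      // The local disk failed: go silent ('continue') so the upstream
      // node times out and rightly declares *this* datanode bad.
      running = false;
      continue;
    }
    // The downstream mirror failed: this datanode is healthy, so do
    // NOT go silent. Fall through and send an error ack on replyOut
    // so the client removes the downstream datanode, not this one.
    running = false;
  }
  // ... write status (possibly an error for the mirror) to replyOut ...
}
{noformat}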
> DFS Write pipeline does not detect defective datanode correctly in some cases (HADOOP-3339)
> -------------------------------------------------------------------------------------------
>
> Key: HADOOP-5796
> URL: https://issues.apache.org/jira/browse/HADOOP-5796
> Project: Hadoop Core
> Issue Type: Bug
> Affects Versions: 0.19.0
> Reporter: Raghu Angadi
> Priority: Blocker
> Fix For: 0.20.1
>
> Attachments: toreproduce-5796.patch
>
>
> HDFS write pipeline does not select the correct datanode in some error cases. One example: say DN2 is the second datanode, and the write to it times out because DN2 is in a bad state; the pipeline then actually removes the first datanode. If such a defective datanode happens to be the last one in the pipeline, the write is aborted completely with a hard error.
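> For illustration, a sketch of the sequence in a two-datanode pipeline where DN2 is the defective node:
> {noformat}
> client -> DN1 -> DN2 (defective)
> 1. DN1's write to DN2 times out.
> 2. DN1's responder goes silent instead of reporting DN2 as bad.
> 3. The client times out waiting for DN1's ack and removes DN1, the healthy node.
> 4. Only the defective DN2 remains, so the retried write fails and is aborted.
> {noformat}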
> Essentially, the error occurs when writing to a downstream datanode fails, rather than reading from it. This bug was actually fixed in 0.18 (HADOOP-3339), but HADOOP-1700 essentially reverted it. I am not sure why.
> It is absolutely essential for HDFS to handle failures on a subset of the datanodes in a pipeline. At the very least, we should not have known bugs that lead to hard failures.
> I will attach a patch with a hack that illustrates this problem. I am still thinking about what an automated test for this would look like.
> My preferred target for this fix is 0.20.1.