Posted to common-dev@hadoop.apache.org by "Jason (JIRA)" <ji...@apache.org> on 2009/01/10 05:57:59 UTC
[jira] Commented: (HADOOP-3232) Datanodes time out
[ https://issues.apache.org/jira/browse/HADOOP-3232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12662621#action_12662621 ]
Jason commented on HADOOP-3232:
-------------------------------
I propose an alternate solution for this.
If the block information were managed by an inotify task (on Linux/Solaris), or the Windows equivalent whose name I forget, the datanode could be informed each time a file in the DFS tree is created, updated, or deleted.
With that information delivered as it happens, the datanode could maintain an accurate block map with only one full scan of its blocks, at startup.
With this algorithm the datanodes would be able to scale to a much larger number of blocks.
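A minimal sketch of the idea, using the JDK's WatchService (which is backed by inotify on Linux); the class and block names here are hypothetical, not Hadoop code. One full directory scan at startup seeds the map, and filesystem events keep it current afterwards:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.TimeUnit;

public class BlockMapWatcher {
    private final Set<String> blockMap = new HashSet<>();
    private final WatchService watcher;

    BlockMapWatcher(Path dir) throws IOException {
        watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_DELETE);
        // The one full scan, done only at startup; after this the
        // watcher keeps the block map current without rescanning.
        try (DirectoryStream<Path> s = Files.newDirectoryStream(dir)) {
            for (Path p : s) {
                blockMap.add(p.getFileName().toString());
            }
        }
    }

    /** Drain pending create/delete events into the in-memory block map. */
    void drainEvents(long timeout, TimeUnit unit) throws InterruptedException {
        WatchKey key = watcher.poll(timeout, unit);
        while (key != null) {
            for (WatchEvent<?> ev : key.pollEvents()) {
                String name = ev.context().toString();
                if (ev.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
                    blockMap.add(name);
                } else if (ev.kind() == StandardWatchEventKinds.ENTRY_DELETE) {
                    blockMap.remove(name);
                }
            }
            key.reset();
            key = watcher.poll(100, TimeUnit.MILLISECONDS);
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("dfs-blocks");
        // A block that exists before startup is found by the initial scan.
        Files.createFile(dir.resolve("blk_1"));
        BlockMapWatcher w = new BlockMapWatcher(dir);
        System.out.println("after scan: " + w.blockMap.contains("blk_1"));
        // A block created afterwards arrives as an event, with no rescan.
        Files.createFile(dir.resolve("blk_2"));
        w.drainEvents(10, TimeUnit.SECONDS);
        System.out.println("after create: " + w.blockMap.contains("blk_2"));
    }
}
```

A real datanode would run drainEvents on a dedicated thread and handle overflow events by falling back to a rescan of the affected directory.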
The other issue is that the way the synchronized blocks on FSDataset.FSVolumeSet are held severely aggravates this bug in 0.18.1.
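The du-nonblocking patches attached to this issue target the same class of problem. A sketch of the general technique (hypothetical names, not the actual patch): let a background thread pay for the slow disk scan, so any caller that holds the volume lock only reads a cached value and returns immediately:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CachedDiskUsage {
    private final AtomicLong cachedUsed = new AtomicLong(0);

    // Stand-in for shelling out to `du -sk <dir>`, which can take many
    // seconds on a volume holding hundreds of thousands of blocks.
    private long slowScan() {
        return 42_000_000L; // placeholder value for this sketch
    }

    // Called periodically from a dedicated refresh thread,
    // never while holding the volume lock.
    public void refresh() {
        cachedUsed.set(slowScan());
    }

    // Safe to call under the volume lock: no disk I/O here,
    // so heartbeats are never blocked behind a scan.
    public long getDfsUsed() {
        return cachedUsed.get();
    }

    public static void main(String[] args) {
        CachedDiskUsage d = new CachedDiskUsage();
        d.refresh();
        System.out.println(d.getDfsUsed());
    }
}
```

The trade-off is that the reported usage can be slightly stale between refreshes, which is acceptable for capacity reporting but keeps lock hold times constant.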
The jason@attributor.com address will be going away shortly, I will be switching to jason.hadoop@gmail.com in the next little bit.
> Datanodes time out
> ------------------
>
> Key: HADOOP-3232
> URL: https://issues.apache.org/jira/browse/HADOOP-3232
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.2
> Environment: 10 node cluster + 1 namenode
> Reporter: Johan Oskarsson
> Assignee: Johan Oskarsson
> Priority: Critical
> Fix For: 0.18.0
>
> Attachments: du-nonblocking-v1.patch, du-nonblocking-v2-trunk.patch, du-nonblocking-v4-trunk.patch, du-nonblocking-v5-trunk.patch, du-nonblocking-v6-trunk.patch, hadoop-hadoop-datanode-new.log, hadoop-hadoop-datanode-new.out, hadoop-hadoop-datanode.out, hadoop-hadoop-namenode-master2.out
>
>
> I recently upgraded to 0.16.2 from 0.15.2 on our 10 node cluster.
> Unfortunately we're seeing datanode timeout issues. In previous versions we've often seen in the NN web UI that one or two datanodes' "last contact" climbs from the usual 0-3 seconds to ~200-300 before dropping back to 0.
> This causes mild discomfort, but the big problems appear when all nodes do this at once, as happened a few times after the upgrade.
> It was suggested that this could be due to namenode garbage collection, but looking at the gc log output it doesn't seem to be the case.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.