Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2008/03/18 02:20:24 UTC

[jira] Commented: (HADOOP-3034) Need to be able to evacuate a datanode

    [ https://issues.apache.org/jira/browse/HADOOP-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12579695#action_12579695 ] 

Konstantin Shvachko commented on HADOOP-3034:
---------------------------------------------

Sounds like decommission feature.
http://wiki.apache.org/hadoop/FAQ#17
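
For reference, the decommission procedure that FAQ entry describes boils down to listing the hosts in an exclude file and asking the namenode to re-read it. A rough sketch (the file path and hostname below are placeholders; property names and exact steps depend on your Hadoop version and configuration):

```
# In hadoop-site.xml, point the namenode at an exclude file:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/path/to/excludes</value>
#   </property>

# Add the hostnames of the datanodes to evacuate:
echo "datanode1.example.com" >> /path/to/excludes

# Tell the namenode to re-read the exclude list and begin decommissioning:
bin/hadoop dfsadmin -refreshNodes
```

The namenode then re-replicates the blocks held on the excluded nodes; once those nodes show as decommissioned (e.g. in the namenode web UI or "bin/hadoop dfsadmin -report"), they can be powered down safely.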

> Need to be able to evacuate a datanode
> --------------------------------------
>
>                 Key: HADOOP-3034
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3034
>             Project: Hadoop Core
>          Issue Type: Improvement
>            Reporter: Ted Dunning
>
> It would be very helpful if there were some way to evacuate data from one or more nodes.
> This scenario arises fairly often when several nodes need to be powered down at nearly the same time.  Currently, they can only be taken down a few at a time (n-1 nodes at a time, where n is the replication factor), and then you have to wait until all files on those nodes have been replicated.
> One implementation would be to allow the nodes in question to be put into read-only mode and mark all blocks on those nodes as not counting as replicas.  This should cause the namenode to copy these blocks, and as soon as fsck shows no under-replicated files, the nodes will be known to be clear for power-down.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.