Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2008/07/09 02:14:31 UTC

[jira] Commented: (HADOOP-3677) Problems with generation stamp upgrade

    [ https://issues.apache.org/jira/browse/HADOOP-3677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12611857#action_12611857 ] 

dhruba borthakur commented on HADOOP-3677:
------------------------------------------

One workaround is as follows:

1. Shut down and then restart the namenode (with the existing release). This will cause the datanodes to send block reports and to delete blocks that are not in the namespace.

2. Shut down the cluster. Install the new software on all nodes. Restart with the -upgrade option. This step will not have to delete any blocks because the orphaned blocks were already deleted in Step 1.
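The two steps above can be sketched as the following command sequence. This is an illustrative ops fragment, assuming the stock control scripts shipped in $HADOOP_HOME/bin of that era; exact script names and paths may differ per deployment.

```shell
# Step 1: restart the namenode on the *existing* release, so that datanodes
# re-send block reports and orphaned blocks are deleted by the old code,
# which still recognizes the old meta-file names.
bin/hadoop-daemon.sh stop namenode
bin/hadoop-daemon.sh start namenode
# ...wait for block reports to arrive and block invalidation to finish...

# Step 2: stop the whole cluster, install the new release on all nodes,
# then restart HDFS with the -upgrade option.
bin/stop-dfs.sh
# (install the new software on all nodes)
bin/start-dfs.sh -upgrade
```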

If this workaround sounds feasible, then we can remove this issue from the 0.18 Blocker list.

> Problems with generation stamp upgrade
> --------------------------------------
>
>                 Key: HADOOP-3677
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3677
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.0
>            Reporter: Konstantin Shvachko
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.18.0
>
>
> # The generation stamp upgrade renames blocks' meta-files so that the name contains the block's generation stamp, as stated in HADOOP-2656.
> If a data-node has blocks that do not belong to any files, and the name-node asks the data-node to remove those blocks
> before or during the upgrade, the data-node will remove the blocks but not the meta-files, because their names
> are still in the old format, which the new code does not recognize. So we can end up with a number of garbage files
> that are hard to recognize as unused, and the system will never remove them automatically.
> I think this should ultimately be handled by the upgrade code, but maybe it would be right to fix HADOOP-3002 for the 0.18 release,
> which would avoid scheduling block removal while the name-node is in safe mode.
> # I was not able to get the upgrade -force option to work. This option lets the name-node proceed with a distributed upgrade even if
> the data-nodes are not able to complete their local upgrades. Did we test this feature at all for the generation stamp upgrade?
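The orphaned-meta-file problem described in point 1 above can be illustrated with a small sketch. The filename convention (old format `blk_<id>.meta`, new format `blk_<id>_<genstamp>.meta`) follows HADOOP-2656; the deletion function below is a hypothetical simplification of the post-upgrade code path, not the actual data-node implementation.

```python
import os
import tempfile

def delete_block_new_code(dirpath, block_id, gen_stamp):
    """Sketch of the post-upgrade deletion path: it removes the block file
    and the *new-format* meta-file only. An old-format meta-file left over
    from before the upgrade is never matched, so it survives as garbage."""
    block_file = os.path.join(dirpath, "blk_%d" % block_id)
    new_meta = os.path.join(dirpath, "blk_%d_%d.meta" % (block_id, gen_stamp))
    for path in (block_file, new_meta):
        if os.path.exists(path):
            os.remove(path)

# Reproduce the scenario: a data-node directory holding a block whose
# meta-file still has the pre-upgrade name.
d = tempfile.mkdtemp()
open(os.path.join(d, "blk_42"), "w").close()       # block file
open(os.path.join(d, "blk_42.meta"), "w").close()  # old-format meta-file

delete_block_new_code(d, 42, 1001)

# The block file is gone, but the old-format meta-file is orphaned.
leftovers = os.listdir(d)
```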

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.