Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2007/07/31 21:45:53 UTC

[jira] Updated: (HADOOP-1664) Hadoop DFS upgrade procedure

     [ https://issues.apache.org/jira/browse/HADOOP-1664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-1664:
---------------------------------

    Attachment: datanode.log.txt

The namenode log looks fine: it starts the CRC upgrade and waits for the datanodes to start the same upgrade and join. But for some reason the datanodes never start the CRC upgrade, and I am not sure why. If you are ever able to reproduce this, please let me know.

I am attaching the relevant part of one of the datanodes' logs.
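For anyone trying to diagnose a stalled upgrade like this, the namenode's view of the distributed upgrade can be queried from the command line. This is a sketch assuming the 0.14-era "dfsadmin -upgradeProgress" options; verify the exact syntax against your release:

```shell
# Ask the namenode for the status of the running distributed upgrade
# (here, the CRC upgrade). Assumes 0.14-era dfsadmin option names.
bin/hadoop dfsadmin -upgradeProgress status

# More verbose per-upgrade detail, where the build supports it.
bin/hadoop dfsadmin -upgradeProgress details
```

If the status never progresses past waiting for datanodes, the datanode logs (as attached here) are the next place to look.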


> Hadoop DFS upgrade procedure
> ----------------------------
>
>                 Key: HADOOP-1664
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1664
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Christian Kunz
>         Attachments: datanode.log.txt
>
>
> When upgrading from a July-9 to a July-25 nightly release, we were able to upgrade successfully on a single-node cluster, but failed on a 10-node and a 200-node cluster.
> As we are not sure whether we made a mistake, I am filing this as an improvement. But going forward it is imperative that there is a safe and well-documented procedure to upgrade dfs without loss of data, including a rollback procedure and a list of operational steps that are irreversibly destructive (hopefully an empty list).
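For reference, the upgrade/rollback flow generally works as sketched below. This assumes the standard -upgrade/-rollback/-finalizeUpgrade options of this era and is not a substitute for release-specific documentation:

```shell
# 1. Stop DFS cleanly before changing software versions.
bin/stop-dfs.sh

# 2. Install the new release, then start DFS with the upgrade flag so the
#    namenode snapshots the previous storage state before converting it.
bin/start-dfs.sh -upgrade

# 3. Once the cluster is verified healthy, make the upgrade permanent.
#    Finalization is the irreversible step: the pre-upgrade snapshot is deleted.
bin/hadoop dfsadmin -finalizeUpgrade

# To abandon a failed upgrade instead, reinstall the old release and roll back
# to the saved snapshot (only possible before finalization):
bin/start-dfs.sh -rollback
```

Under this flow, the only irreversibly destructive operation is finalization, which is what the requested documentation should call out explicitly.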

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.