Posted to hdfs-user@hadoop.apache.org by sam liu <sa...@gmail.com> on 2014/09/18 08:38:26 UTC

Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Hi Experts,

Below are my steps. Is this a Hadoop bug, or did I miss anything? Thanks!

Steps:
[A] Upgrade
1. Install a Hadoop 2.2.0 cluster
2. Stop the Hadoop services
3. Replace the 2.2.0 binaries with the 2.4.1 binaries
4. Start the datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
5. Start the namenode with the -upgrade option:
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -upgrade
6. Start the secondary namenode, tasktracker and jobtracker

Result:

    The whole upgrade process completed successfully.

[B] Rollback
1. Stop all Hadoop services
2. Replace the 2.4.1 binaries with the 2.2.0 binaries
3. Start the datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
4. Start the namenode with the -rollback option:
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode -rollback

Result:

    The namenode service started, but the datanodes failed with the
    following exception:
    2014-09-17 11:04:51,416 INFO
org.apache.hadoop.hdfs.server.common.Storage: Lock on
/hadoop/hdfs/data/in_use.lock acquired by nodename 817443@shihc071-public
    2014-09-17 11:04:51,418 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage id )
service to hostname/ip:9000
    org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
Unexpected version of storage directory /hadoop/hdfs/data. Reported: -55.
Expecting = -47.
    at
org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
    at
org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
    at
org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
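
For reference, a quick way to see which layout version the datanode storage
currently carries is to read its VERSION file directly. This is only a sketch;
the /hadoop/hdfs/data path is taken from the log above, so substitute your own
dfs.datanode.data.dir:

    # Layout written by the 2.4.1 upgrade; a 2.2.0 datanode expects -47,
    # which is exactly the IncorrectVersionException reported above.
    grep layoutVersion /hadoop/hdfs/data/current/VERSION

    # The pre-upgrade copy that a rollback restores from sits in a
    # "previous" directory somewhere under the storage directory:
    find /hadoop/hdfs/data -maxdepth 3 -type d -name previous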

Re: Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Posted by Azuryy Yu <az...@gmail.com>.
Yes, this is an issue; I have also run into it. Can you please file an issue?


On Sun, Sep 21, 2014 at 12:08 PM, sam liu <sa...@gmail.com> wrote:

> I rollback from 2.4.1 to 2.2.0 and seems 2.2.0 does not has option
> upgradeProgress, right?
>
> I guess it might be a hadoop issue, as I still could not start datanode
> after rollback
>
> 2014-09-18 4:15 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>
> What is the o/p of command
>>
>> hdfs dfsadmin -upgradeProgress status
>>
>> If it says upgrade is complete then you can do some sanity check by hdfs
>> fsck.
>>
>> Stop the servers by stop-dfs.sh and then rollback by command
>> start-dfs.sh -rollback
>>
>> On 9/18/14, sam liu <sa...@gmail.com> wrote:
>> > Thanks for your comment!
>> >
>> > I can upgrade from 2.2.0 to 2.4.1 using command 'start-dfs.sh -upgrade',
>> > however failed to rollback from 2.4.1 to 2.2.0 using command
>> 'start-dfs.sh
>> > -rollback': the namenode always stays on safe mode(awaiting reported
>> blocks
>> > (0/315)).
>> >
>> > Why?
>> >
>> > 2014-09-18 1:51 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>> >
>> >> You have to upgrade both name node and data node.
>> >>
>> >> Better issue start-dfs.sh -upgrade.
>> >>
>> >> Check whether current and previous directories are present in both
>> >> dfs.namenode.name.dir and dfs.datanode.data.dir directory.
>> >>
>> >> On 9/18/14, sam liu <sa...@gmail.com> wrote:
>> >> > Hi Expert,
>> >> >
>> >> > Below are my steps and is it a hadoop bug or did I miss any thing?
>> >> Thanks!
>> >> >
>> >> > Step:
>> >> > [A] Upgrade
>> >> > 1. Install Hadoop 2.2.0 cluster
>> >> > 2. Stop Hadoop services
>> >> > 3. Replace 2.2.0 binaries with 2.4.1 binaries
>> >> > 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
>> >> > 5. Start namenode with option upgrade:
>> >> > $HADOOP_HOME/sbin/hadoop-daemon.sh
>> >> > start namenode -upgrade
>> >> > 6. Start secondary namenode, tasktracker and jobtracker
>> >> >
>> >> > Result:
>> >> >
>> >> >     Whole upgrade process could be completed successfully.
>> >> >
>> >> > [B] Rollback
>> >> > 1. Stop all hadoop services
>> >> > 2. Replace 2.4.1 binaries with 2.2.0 binaries
>> >> > 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
>> >> > 4. Start namenode with option upgrade:
>> >> > $HADOOP_HOME/sbin/hadoop-daemon.sh
>> >> > start namenode -rollback
>> >> >
>> >> > Result:
>> >> >
>> >> >     Namenode service could be started
>> >> >     Datanodes failed with exception:
>> >> >     Issue: DataNode failed with following exception
>> >> >     2014-09-17 11:04:51,416 INFO
>> >> > org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> >> > /hadoop/hdfs/data/in_use.lock acquired by nodename
>> >> > 817443@shihc071-public
>> >> >     2014-09-17 11:04:51,418 FATAL
>> >> > org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization
>> failed
>> >> for
>> >> > block pool Block pool BP-977402492-9.181.64.185-1410497086460
>> (storage
>> >> id )
>> >> > service to hostname/ip:9000
>> >> >     org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
>> >> > Unexpected version of storage directory /hadoop/hdfs/data. Reported:
>> >> > -55.
>> >> > Expecting = -47.
>> >> >     at
>> >> >
>> >>
>> org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
>> >> >     at
>> >> >
>> >>
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
>> >> >     at
>> >> >
>> >>
>> org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
>> >> >
>> >>
>> >
>>
>
>

Re: Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Posted by sam liu <sa...@gmail.com>.
I rolled back from 2.4.1 to 2.2.0, and it seems 2.2.0 does not have the
upgradeProgress option, right?

I guess it might be a Hadoop issue, as I still could not start the datanodes
after the rollback.
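
A quick way to confirm what the 2.2.0 client actually offers (just a sketch;
the grep pattern is only illustrative):

    # List the dfsadmin subcommands this build knows about; the old 1.x-style
    # -upgradeProgress flag is not expected to show up here, though
    # -finalizeUpgrade should.
    hdfs dfsadmin -help 2>&1 | grep -i upgrade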

2014-09-18 4:15 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:

> What is the o/p of command
>
> hdfs dfsadmin -upgradeProgress status
>
> If it says upgrade is complete then you can do some sanity check by hdfs
> fsck.
>
> Stop the servers by stop-dfs.sh and then rollback by command
> start-dfs.sh -rollback
>
> On 9/18/14, sam liu <sa...@gmail.com> wrote:
> > Thanks for your comment!
> >
> > I can upgrade from 2.2.0 to 2.4.1 using command 'start-dfs.sh -upgrade',
> > however failed to rollback from 2.4.1 to 2.2.0 using command
> 'start-dfs.sh
> > -rollback': the namenode always stays on safe mode(awaiting reported
> blocks
> > (0/315)).
> >
> > Why?
> >
> > 2014-09-18 1:51 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:
> >
> >> You have to upgrade both name node and data node.
> >>
> >> Better issue start-dfs.sh -upgrade.
> >>
> >> Check whether current and previous directories are present in both
> >> dfs.namenode.name.dir and dfs.datanode.data.dir directory.
> >>
> >> On 9/18/14, sam liu <sa...@gmail.com> wrote:
> >> > Hi Expert,
> >> >
> >> > Below are my steps and is it a hadoop bug or did I miss any thing?
> >> Thanks!
> >> >
> >> > Step:
> >> > [A] Upgrade
> >> > 1. Install Hadoop 2.2.0 cluster
> >> > 2. Stop Hadoop services
> >> > 3. Replace 2.2.0 binaries with 2.4.1 binaries
> >> > 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> >> > 5. Start namenode with option upgrade:
> >> > $HADOOP_HOME/sbin/hadoop-daemon.sh
> >> > start namenode -upgrade
> >> > 6. Start secondary namenode, tasktracker and jobtracker
> >> >
> >> > Result:
> >> >
> >> >     Whole upgrade process could be completed successfully.
> >> >
> >> > [B] Rollback
> >> > 1. Stop all hadoop services
> >> > 2. Replace 2.4.1 binaries with 2.2.0 binaries
> >> > 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> >> > 4. Start namenode with option upgrade:
> >> > $HADOOP_HOME/sbin/hadoop-daemon.sh
> >> > start namenode -rollback
> >> >
> >> > Result:
> >> >
> >> >     Namenode service could be started
> >> >     Datanodes failed with exception:
> >> >     Issue: DataNode failed with following exception
> >> >     2014-09-17 11:04:51,416 INFO
> >> > org.apache.hadoop.hdfs.server.common.Storage: Lock on
> >> > /hadoop/hdfs/data/in_use.lock acquired by nodename
> >> > 817443@shihc071-public
> >> >     2014-09-17 11:04:51,418 FATAL
> >> > org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed
> >> for
> >> > block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage
> >> id )
> >> > service to hostname/ip:9000
> >> >     org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
> >> > Unexpected version of storage directory /hadoop/hdfs/data. Reported:
> >> > -55.
> >> > Expecting = -47.
> >> >     at
> >> >
> >>
> org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
> >> >     at
> >> >
> >>
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
> >> >     at
> >> >
> >>
> org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
> >> >
> >>
> >
>

Re: Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
What is the output of the command

hdfs dfsadmin -upgradeProgress status

If it says the upgrade is complete, then you can do a sanity check with hdfs
fsck.

Stop the servers with stop-dfs.sh and then roll back with the command
start-dfs.sh -rollback
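
Since 2.2.0 may not ship -upgradeProgress at all (as noted elsewhere in this
thread), here is a hedged sketch of equivalent sanity checks using commands
the 2.x line does provide:

    hdfs dfsadmin -safemode get    # is the namenode still in safe mode?
    hdfs dfsadmin -report          # are the datanodes registering?
    hdfs fsck /                    # block-level sanity check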

On 9/18/14, sam liu <sa...@gmail.com> wrote:
> Thanks for your comment!
>
> I can upgrade from 2.2.0 to 2.4.1 using command 'start-dfs.sh -upgrade',
> however failed to rollback from 2.4.1 to 2.2.0 using command 'start-dfs.sh
> -rollback': the namenode always stays on safe mode(awaiting reported blocks
> (0/315)).
>
> Why?
>
> 2014-09-18 1:51 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:
>
>> You have to upgrade both name node and data node.
>>
>> Better issue start-dfs.sh -upgrade.
>>
>> Check whether current and previous directories are present in both
>> dfs.namenode.name.dir and dfs.datanode.data.dir directory.
>>
>> On 9/18/14, sam liu <sa...@gmail.com> wrote:
>> > Hi Expert,
>> >
>> > Below are my steps and is it a hadoop bug or did I miss any thing?
>> Thanks!
>> >
>> > Step:
>> > [A] Upgrade
>> > 1. Install Hadoop 2.2.0 cluster
>> > 2. Stop Hadoop services
>> > 3. Replace 2.2.0 binaries with 2.4.1 binaries
>> > 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
>> > 5. Start namenode with option upgrade:
>> > $HADOOP_HOME/sbin/hadoop-daemon.sh
>> > start namenode -upgrade
>> > 6. Start secondary namenode, tasktracker and jobtracker
>> >
>> > Result:
>> >
>> >     Whole upgrade process could be completed successfully.
>> >
>> > [B] Rollback
>> > 1. Stop all hadoop services
>> > 2. Replace 2.4.1 binaries with 2.2.0 binaries
>> > 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
>> > 4. Start namenode with option upgrade:
>> > $HADOOP_HOME/sbin/hadoop-daemon.sh
>> > start namenode -rollback
>> >
>> > Result:
>> >
>> >     Namenode service could be started
>> >     Datanodes failed with exception:
>> >     Issue: DataNode failed with following exception
>> >     2014-09-17 11:04:51,416 INFO
>> > org.apache.hadoop.hdfs.server.common.Storage: Lock on
>> > /hadoop/hdfs/data/in_use.lock acquired by nodename
>> > 817443@shihc071-public
>> >     2014-09-17 11:04:51,418 FATAL
>> > org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed
>> for
>> > block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage
>> id )
>> > service to hostname/ip:9000
>> >     org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
>> > Unexpected version of storage directory /hadoop/hdfs/data. Reported:
>> > -55.
>> > Expecting = -47.
>> >     at
>> >
>> org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
>> >     at
>> >
>> org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
>> >     at
>> >
>> org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
>> >
>>
>

Re: Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Posted by sam liu <sa...@gmail.com>.
Thanks for your comment!

I can upgrade from 2.2.0 to 2.4.1 using the command 'start-dfs.sh -upgrade';
however, rolling back from 2.4.1 to 2.2.0 using 'start-dfs.sh -rollback'
fails: the namenode always stays in safe mode (awaiting reported blocks
(0/315)).

Why?
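
A small sketch of how the safe-mode condition can be inspected here (these
commands exist on both versions; forcing safe mode off would not fix the
underlying datanode problem):

    hdfs dfsadmin -safemode get    # still ON while blocks are unreported
    hdfs dfsadmin -report          # shows whether any datanodes are live
    # hdfs dfsadmin -safemode leave  # possible, but the layout mismatch remains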

2014-09-18 1:51 GMT-07:00 Susheel Kumar Gadalay <sk...@gmail.com>:

> You have to upgrade both name node and data node.
>
> Better issue start-dfs.sh -upgrade.
>
> Check whether current and previous directories are present in both
> dfs.namenode.name.dir and dfs.datanode.data.dir directory.
>
> On 9/18/14, sam liu <sa...@gmail.com> wrote:
> > Hi Expert,
> >
> > Below are my steps and is it a hadoop bug or did I miss any thing?
> Thanks!
> >
> > Step:
> > [A] Upgrade
> > 1. Install Hadoop 2.2.0 cluster
> > 2. Stop Hadoop services
> > 3. Replace 2.2.0 binaries with 2.4.1 binaries
> > 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> > 5. Start namenode with option upgrade: $HADOOP_HOME/sbin/hadoop-daemon.sh
> > start namenode -upgrade
> > 6. Start secondary namenode, tasktracker and jobtracker
> >
> > Result:
> >
> >     Whole upgrade process could be completed successfully.
> >
> > [B] Rollback
> > 1. Stop all hadoop services
> > 2. Replace 2.4.1 binaries with 2.2.0 binaries
> > 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> > 4. Start namenode with option upgrade: $HADOOP_HOME/sbin/hadoop-daemon.sh
> > start namenode -rollback
> >
> > Result:
> >
> >     Namenode service could be started
> >     Datanodes failed with exception:
> >     Issue: DataNode failed with following exception
> >     2014-09-17 11:04:51,416 INFO
> > org.apache.hadoop.hdfs.server.common.Storage: Lock on
> > /hadoop/hdfs/data/in_use.lock acquired by nodename 817443@shihc071-public
> >     2014-09-17 11:04:51,418 FATAL
> > org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed
> for
> > block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage
> id )
> > service to hostname/ip:9000
> >     org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
> > Unexpected version of storage directory /hadoop/hdfs/data. Reported: -55.
> > Expecting = -47.
> >     at
> >
> org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
> >     at
> >
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
> >     at
> >
> org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
> >
>

Re: Failed to rollback from hadoop-2.4.1 to hadoop 2.2.0

Posted by Susheel Kumar Gadalay <sk...@gmail.com>.
You have to upgrade both the namenode and the datanodes.

Better to issue start-dfs.sh -upgrade.

Check whether the current and previous directories are present in both the
dfs.namenode.name.dir and dfs.datanode.data.dir directories.
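
For example, a minimal check along those lines (the paths are placeholders;
substitute the values from your hdfs-site.xml):

    # Placeholder paths for dfs.namenode.name.dir and dfs.datanode.data.dir
    NAME_DIR=/hadoop/hdfs/name
    DATA_DIR=/hadoop/hdfs/data
    ls -d "$NAME_DIR"/current "$NAME_DIR"/previous
    ls -d "$DATA_DIR"/current "$DATA_DIR"/previous
    # Without a "previous" directory there is nothing for -rollback to restore.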

On 9/18/14, sam liu <sa...@gmail.com> wrote:
> Hi Expert,
>
> Below are my steps and is it a hadoop bug or did I miss any thing? Thanks!
>
> Step:
> [A] Upgrade
> 1. Install Hadoop 2.2.0 cluster
> 2. Stop Hadoop services
> 3. Replace 2.2.0 binaries with 2.4.1 binaries
> 4. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> 5. Start namenode with option upgrade: $HADOOP_HOME/sbin/hadoop-daemon.sh
> start namenode -upgrade
> 6. Start secondary namenode, tasktracker and jobtracker
>
> Result:
>
>     Whole upgrade process could be completed successfully.
>
> [B] Rollback
> 1. Stop all hadoop services
> 2. Replace 2.4.1 binaries with 2.2.0 binaries
> 3. Start datanodes: $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
> 4. Start namenode with option upgrade: $HADOOP_HOME/sbin/hadoop-daemon.sh
> start namenode -rollback
>
> Result:
>
>     Namenode service could be started
>     Datanodes failed with exception:
>     Issue: DataNode failed with following exception
>     2014-09-17 11:04:51,416 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Lock on
> /hadoop/hdfs/data/in_use.lock acquired by nodename 817443@shihc071-public
>     2014-09-17 11:04:51,418 FATAL
> org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for
> block pool Block pool BP-977402492-9.181.64.185-1410497086460 (storage id )
> service to hostname/ip:9000
>     org.apache.hadoop.hdfs.server.common.IncorrectVersionException:
> Unexpected version of storage directory /hadoop/hdfs/data. Reported: -55.
> Expecting = -47.
>     at
> org.apache.hadoop.hdfs.server.common.Storage.setLayoutVersion(Storage.java:1082)
>     at
> org.apache.hadoop.hdfs.server.datanode.DataStorage.setFieldsFromProperties(DataStorage.java:302)
>     at
> org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:921)
>
