Posted to common-user@hadoop.apache.org by bharath vissapragada <bh...@gmail.com> on 2009/08/03 21:08:23 UTC

namenode -upgrade problem

Hi all,

I have noticed a problem in my cluster when I changed the Hadoop version
on the same DFS directory. The NameNode log on the master says the
following:


File system image contains an old layout version -16.
An upgrade to version -18 is required.
Please restart NameNode with -upgrade option.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:312)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
2009-08-04 00:27:51,498 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2009-08-04 00:27:51,498 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException:
File system image contains an old layout version -16.
An upgrade to version -18 is required.
Please restart NameNode with -upgrade option.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:312)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:87)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:309)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:288)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:163)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)

2009-08-04 00:27:51,499 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG

Can anyone explain the reason? I googled it, but those explanations
weren't very useful.

Thanks

Re: namenode -upgrade problem

Posted by Aaron Kimball <aa...@cloudera.com>.
The only time you would need to upgrade is if you've increased the Hadoop
version but are retaining the same HDFS :) So, that's the normal case.

What does "netstat --listening --numeric --program" report?
- Aaron
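
(For reference: on most Linux systems that command needs root to show the
owning process name, and you can filter on the NameNode RPC port from the
log above, e.g.

    sudo netstat --listening --numeric --program | grep 54310

If a java process shows up there, an old NameNode is still holding the port.)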

On Wed, Aug 5, 2009 at 10:53 AM, bharath vissapragada <
bharathvissapragada1990@gmail.com> wrote:

> Yes, I have stopped all the daemons. When I use jps I get only
> "<pid> Jps".
>
> Actually, I upgraded the version from 18.2 to 19.x on the same HDFS path.
> Is that a problem?
>
>
> On Wed, Aug 5, 2009 at 11:02 PM, Aaron Kimball <aa...@cloudera.com> wrote:
>
> > Are you sure you stopped all the daemons? Use 'sudo jps' to make sure :)
> > - Aaron
> >
> > On Mon, Aug 3, 2009 at 7:26 PM, bharath vissapragada <
> > bharathvissapragada1990@gmail.com> wrote:
> >
> > > Todd, thanks for replying.
> > >
> > > I stopped the cluster and issued the command
> > >
> > > "bin/hadoop namenode -upgrade" and I am getting this exception:
> > >
> > > 09/08/04 07:52:39 ERROR namenode.NameNode: java.net.BindException: Problem
> > > binding to master/10.2.24.21:54310 : Address already in use
> > > [stack trace snipped]
> > >
> > > any clue?
> > >
> > > On Tue, Aug 4, 2009 at 12:51 AM, Todd Lipcon <to...@cloudera.com> wrote:
> > >
> > > > On Mon, Aug 3, 2009 at 12:08 PM, bharath vissapragada <
> > > > bharathvissapragada1990@gmail.com> wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I have noticed a problem in my cluster when I changed the Hadoop
> > > > > version on the same DFS directory. The NameNode log on the master
> > > > > says the following:
> > > > >
> > > > >
> > > > > File system image contains an old layout version -16.
> > > > > *An upgrade to version -18 is required.
> > > > > Please restart NameNode with -upgrade option.
> > > > > *
> > > >
> > > >
> > > > See bolded text above -- you need to run namenode -upgrade to upgrade
> > > > your metadata format to the current version.
> > > >
> > > > -Todd
> > > >
> > > > > [original error log and stack trace snipped]

Re: namenode -upgrade problem

Posted by bharath vissapragada <bh...@gmail.com>.
Yes, I have stopped all the daemons. When I use jps I get only
"<pid> Jps".

Actually, I upgraded the version from 18.2 to 19.x on the same HDFS path.
Is that a problem?
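
(For what it's worth, the layout version the NameNode complains about is
recorded in the VERSION file under the name directory; the path below is
only an example, substitute whatever dfs.name.dir points to:

    cat /path/to/dfs/name/current/VERSION
    # typically something like:
    #   namespaceID=...
    #   cTime=...
    #   storageType=NAME_NODE
    #   layoutVersion=-16

As the log says, the 0.18.x image on disk is layout -16 and 0.19.x wants
-18, which is exactly the situation -upgrade is meant for.)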


On Wed, Aug 5, 2009 at 11:02 PM, Aaron Kimball <aa...@cloudera.com> wrote:

> Are you sure you stopped all the daemons? Use 'sudo jps' to make sure :)
> - Aaron
>
> On Mon, Aug 3, 2009 at 7:26 PM, bharath vissapragada <
> bharathvissapragada1990@gmail.com> wrote:
>
> > Todd, thanks for replying.
> >
> > I stopped the cluster and issued the command
> >
> > "bin/hadoop namenode -upgrade" and I am getting this exception:
> >
> > 09/08/04 07:52:39 ERROR namenode.NameNode: java.net.BindException: Problem
> > binding to master/10.2.24.21:54310 : Address already in use
> > [stack trace snipped]
> >
> > any clue?
> >
> > On Tue, Aug 4, 2009 at 12:51 AM, Todd Lipcon <to...@cloudera.com> wrote:
> >
> > > On Mon, Aug 3, 2009 at 12:08 PM, bharath vissapragada <
> > > bharathvissapragada1990@gmail.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I have noticed a problem in my cluster when I changed the Hadoop
> > > > version on the same DFS directory. The NameNode log on the master
> > > > says the following:
> > > >
> > > >
> > > > File system image contains an old layout version -16.
> > > > *An upgrade to version -18 is required.
> > > > Please restart NameNode with -upgrade option.
> > > > *
> > >
> > >
> > > See bolded text above -- you need to run namenode -upgrade to upgrade
> > > your metadata format to the current version.
> > >
> > > -Todd
> > >
> > > > [original error log and stack trace snipped]

Re: namenode -upgrade problem

Posted by Aaron Kimball <aa...@cloudera.com>.
Are you sure you stopped all the daemons? Use 'sudo jps' to make sure :)
- Aaron
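
(One caveat with jps: it only lists JVMs owned by the user running it, so a
NameNode started as a different user will not show up. Running it as root,
or falling back to ps, catches those; the grep pattern is just an example:

    sudo jps -l
    ps aux | grep [N]ameNode

Either should reveal a leftover NameNode process still holding the RPC port.)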

On Mon, Aug 3, 2009 at 7:26 PM, bharath vissapragada <
bharathvissapragada1990@gmail.com> wrote:

> Todd, thanks for replying.
>
> I stopped the cluster and issued the command
>
> "bin/hadoop namenode -upgrade" and I am getting this exception:
>
> 09/08/04 07:52:39 ERROR namenode.NameNode: java.net.BindException: Problem
> binding to master/10.2.24.21:54310 : Address already in use
> [stack trace snipped]
>
> any clue?
>
> On Tue, Aug 4, 2009 at 12:51 AM, Todd Lipcon <to...@cloudera.com> wrote:
>
> > On Mon, Aug 3, 2009 at 12:08 PM, bharath vissapragada <
> > bharathvissapragada1990@gmail.com> wrote:
> >
> > > Hi all,
> > >
> > > I have noticed a problem in my cluster when I changed the Hadoop
> > > version on the same DFS directory. The NameNode log on the master
> > > says the following:
> > >
> > >
> > > File system image contains an old layout version -16.
> > > *An upgrade to version -18 is required.
> > > Please restart NameNode with -upgrade option.
> > > *
> >
> >
> > See bolded text above -- you need to run namenode -upgrade to upgrade
> > your metadata format to the current version.
> >
> > -Todd
> >
> > > [original error log and stack trace snipped]

Re: namenode -upgrade problem

Posted by bharath vissapragada <bh...@gmail.com>.
Todd, thanks for replying.

I stopped the cluster and issued the command

"bin/hadoop namenode -upgrade" and I am getting this exception:

09/08/04 07:52:39 ERROR namenode.NameNode: java.net.BindException: Problem binding to master/10.2.24.21:54310 : Address already in use
    at org.apache.hadoop.ipc.Server.bind(Server.java:171)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:234)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:960)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:465)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:427)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:153)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:208)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:194)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:859)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:868)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:169)
    ... 9 more

any clue?
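
(The BindException means something on the master is already listening on
port 54310 -- usually a NameNode from the previous start that never exited.
Assuming lsof or fuser is installed on that machine, either of these will
show the process that owns the port so it can be stopped:

    sudo lsof -i TCP:54310
    sudo fuser -n tcp 54310

Once nothing is bound to 54310, "bin/hadoop namenode -upgrade" should start.)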

On Tue, Aug 4, 2009 at 12:51 AM, Todd Lipcon <to...@cloudera.com> wrote:

> On Mon, Aug 3, 2009 at 12:08 PM, bharath vissapragada <
> bharathvissapragada1990@gmail.com> wrote:
>
> > Hi all,
> >
> > I have noticed a problem in my cluster when I changed the Hadoop
> > version on the same DFS directory. The NameNode log on the master
> > says the following:
> >
> >
> > File system image contains an old layout version -16.
> > *An upgrade to version -18 is required.
> > Please restart NameNode with -upgrade option.
> > *
>
>
> See bolded text above -- you need to run namenode -upgrade to upgrade your
> metadata format to the current version.
>
> -Todd
>
> > [original error log and stack trace snipped]

Re: namenode -upgrade problem

Posted by Todd Lipcon <to...@cloudera.com>.
On Mon, Aug 3, 2009 at 12:08 PM, bharath vissapragada <
bharathvissapragada1990@gmail.com> wrote:

> Hi all,
>
> I have noticed a problem in my cluster when I changed the Hadoop
> version on the same DFS directory. The NameNode log on the master
> says the following:
>
>
> File system image contains an old layout version -16.
> *An upgrade to version -18 is required.
> Please restart NameNode with -upgrade option.
> *


See bolded text above -- you need to run namenode -upgrade to upgrade your
metadata format to the current version.

-Todd
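
(A typical sequence for this kind of upgrade, sketched with the usual
0.18/0.19-era scripts -- adjust paths to your own install:

    # 1. stop everything still running against the old version
    bin/stop-all.sh

    # 2. start the new version's HDFS with the upgrade flag
    bin/start-dfs.sh -upgrade        # or: bin/hadoop namenode -upgrade

    # 3. check progress, and once the new version looks healthy,
    #    make the upgrade permanent (before that you can still roll back)
    bin/hadoop dfsadmin -upgradeProgress status
    bin/hadoop dfsadmin -finalizeUpgrade

The -upgrade start will fail with the "Address already in use" error seen
elsewhere in this thread if an old NameNode is still bound to the RPC port,
so it is worth double-checking that first.)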

> [original error log and stack trace snipped]